Edit: Quick summary so far: I am using the watershed algorithm, but I probably have a threshold problem. It does not detect the brighter circles.

New: The fast radial symmetry transform approach does not quite work either (Edit 6).


I want to detect circles of different sizes. The use case is detecting coins in an image and extracting them individually -> each coin as a separate image file.

For this I used OpenCV's Hough Circle Transform (https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html):

import sys
import cv2 as cv
import numpy as np


def main(argv):
    ## [load]
    default_file =  "data/newcommon_1euro.jpg"
    filename = argv[0] if len(argv) > 0 else default_file

    # Loads an image
    src = cv.imread(filename, cv.IMREAD_COLOR)

    # Check if image is loaded fine
    if src is None:
        print ('Error opening image!')
        print ('Usage: hough_circle.py [image_name -- default ' + default_file + '] \n')
        return -1
    ## [load]

    ## [convert_to_gray]
    # Convert it to gray
    gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
    ## [convert_to_gray]

    ## [reduce_noise]
    # Reduce the noise to avoid false circle detection
    gray = cv.medianBlur(gray, 5)
    ## [reduce_noise]

    ## [houghcircles]
    rows = gray.shape[0]
    circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 8,
                              param1=100, param2=30,
                              minRadius=0, maxRadius=120)
    ## [houghcircles]

    ## [draw]
    if circles is not None:
        circles = np.uint16(np.around(circles))
        for i in circles[0, :]:
            center = (i[0], i[1])
            # circle center
            cv.circle(src, center, 1, (0, 100, 100), 3)
            # circle outline
            radius = i[2]
            cv.circle(src, center, radius, (255, 0, 255), 3)
    ## [draw]

    ## [display]
    cv.imshow("detected circles", src)
    cv.waitKey(0)
    ## [display]

    return 0

if __name__ == "__main__":
    main(sys.argv[1:])
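
Since the goal is one file per coin, here is a minimal cropping sketch on top of the detection above (just a sketch: it assumes src and circles as in main(), and the coin_<i>.png filenames are illustrative):

import cv2 as cv
import numpy as np

def save_coins(src, circles, prefix="coin"):
    # Crop the bounding box of every detected circle and write it to its own file
    if circles is None:
        return
    circles = np.uint16(np.around(circles))
    h, w = src.shape[:2]
    for idx, (x, y, r) in enumerate(circles[0, :]):
        x0, y0 = max(int(x) - int(r), 0), max(int(y) - int(r), 0)
        x1, y1 = min(int(x) + int(r), w), min(int(y) + int(r), h)
        cv.imwrite("{}_{}.png".format(prefix, idx), src[y0:y1, x0:x1])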

I played with all the parameters (rows, param1, param2, minRadius and maxRadius) to optimize the result. This works very well for one specific image, but other images with differently sized coins do not work.

Example: parameters circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 16, param1=100, param2=30, minRadius=0, maxRadius=120)
[image]

With the same parameters:
[image]

Changed to rows / 8:
[image]

I also tried the two other approaches from this question: writing robust (color and size invariant) circle detection with opencv (based on Hough transform or other features)

fireant's approach led to this result:
[image]

fraxel's approach did not work either.

Regarding the first approach: this happens for all the different sizes and min/max radii I tried. How can I change the code so that the coin size does not matter, or so that it finds the parameters itself?
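
To make clearer what I mean by "finds the parameters itself", something like the following sweep over the accumulator threshold is the direction I have in mind (purely a hypothetical sketch; the param2 range and the expected coin count are made-up values):

def detect_with_sweep(gray, rows, expected_coins):
    # Lower param2 step by step until at least the expected number of circles is found;
    # maxRadius=0 lets OpenCV derive the upper bound from the image size.
    for param2 in range(60, 10, -5):
        circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, 1, rows / 8,
                                  param1=100, param2=param2,
                                  minRadius=0, maxRadius=0)
        if circles is not None and circles.shape[1] >= expected_coins:
            return circles
    return None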

Thanks in advance for any help!

Edit:

Following Alexander Reynolds' suggestion, I tried OpenCV's watershed algorithm: https://docs.opencv.org/3.4/d3/db4/tutorial_py_watershed.html

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('data/P1190263.jpg')
gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY)
ret, thresh = cv.threshold(gray,0,255,cv.THRESH_BINARY_INV+cv.THRESH_OTSU)

# noise removal
kernel = np.ones((3,3),np.uint8)
opening = cv.morphologyEx(thresh,cv.MORPH_OPEN,kernel, iterations = 2)

# sure background area
sure_bg = cv.dilate(opening,kernel,iterations=3)

# Finding sure foreground area
dist_transform = cv.distanceTransform(opening,cv.DIST_L2,5)
ret, sure_fg = cv.threshold(dist_transform,0.7*dist_transform.max(),255,0)

# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv.subtract(sure_bg,sure_fg)

# Marker labelling
ret, markers = cv.connectedComponents(sure_fg)

# Add one to all labels so that sure background is not 0, but 1
markers = markers+1

# Now, mark the region of unknown with zero
markers[unknown==255] = 0

markers = cv.watershed(img,markers)
img[markers == -1] = [255,0,0]

#Display:
cv.imshow("detected circles", img)
cv.waitKey(0)
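
Once the segmentation works, the plan would be to cut out each labelled region, roughly like this (a sketch only: label 1 is the sure background, -1 the boundaries, and the coin_<label>.png filenames are illustrative):

for label in np.unique(markers):
    if label <= 1:
        # skip boundaries (-1), unknown (0) and background (1)
        continue
    ys, xs = np.where(markers == label)
    if ys.size == 0:
        continue
    cv.imwrite('coin_{}.png'.format(label),
               img[ys.min():ys.max() + 1, xs.min():xs.max() + 1])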

It works well on the test image from the OpenCV site:

[image]

But it performs very badly on my own image:
[image]

I really cannot figure out why it does not work on my image.

Edit 2:

It was suggested that I look at the intermediate images. In my opinion, thresh does not look good. After that, there is no difference between opening and dist_transform. The corresponding sure_fg shows the detected images.

thresh:
[image: thresh]
opening:
[image: opening]
dist_transform:
[image: dist_transform]
sure_bg:
[image: sure_bg]
sure_fg:
[image: sure_fg]
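
For reference, these intermediates can be dumped to disk with a small snippet like this (filenames are illustrative; the float dist_transform is scaled to 8 bit before saving):

cv.imwrite('debug_thresh.png', thresh)
cv.imwrite('debug_opening.png', opening)
cv.imwrite('debug_dist_transform.png',
           cv.normalize(dist_transform, None, 0, 255, cv.NORM_MINMAX).astype(np.uint8))
cv.imwrite('debug_sure_bg.png', sure_bg)
cv.imwrite('debug_sure_fg.png', np.uint8(sure_fg))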

Edit 3:

I tried all the distanceTypes and maskSizes I could find, but the results were exactly the same (https://www.tutorialspoint.com/opencv/opencv_distance_transformation.htm).
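
The variants were roughly along these lines (a sketch; the exact combinations I tested may have differed slightly):

for dist_type in (cv.DIST_L1, cv.DIST_L2, cv.DIST_C):
    for mask_size in (3, 5):
        dist_transform = cv.distanceTransform(opening, dist_type, mask_size)
        ret, sure_fg = cv.threshold(dist_transform, 0.7 * dist_transform.max(), 255, 0)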

Edit 4:

In addition, I tried changing the (first) threshold function. Instead of the Otsu method I used different fixed thresholds. The best one was 160, but it is still far from good:
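
That is, replacing the Otsu line with a fixed value, for example (keeping the inverted binary flag from the tutorial; the exact variant here is an assumption):

ret, thresh = cv.threshold(gray, 160, 255, cv.THRESH_BINARY_INV)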

[image]

[image]

In the tutorial it looks like this:
[image]

It seems the coins are somehow too bright to be detected by this algorithm, but I do not know how to improve that.

Edit 5:

Changing the overall contrast and brightness of the image (with cv.convertScaleAbs) did not improve the result. Increasing the contrast should increase the "difference" between foreground and background, at least on a normal image, but it actually made things worse. The corresponding threshold image did not improve either (it did not gain more white pixels).
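
The adjustment was of this form (the alpha/beta values here are only representative, not the exact ones I used):

adjusted = cv.convertScaleAbs(img, alpha=1.5, beta=20)   # contrast gain, brightness offset
gray = cv.cvtColor(adjusted, cv.COLOR_BGR2GRAY)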

Edit 6: I tried another approach, the fast radial symmetry transform (from here: https://github.com/ceilab/frst_python):

import cv2
import numpy as np


def gradx(img):
    img = img.astype('int')
    rows, cols = img.shape
    # Use hstack to add back in the columns that were dropped as zeros
    return np.hstack((np.zeros((rows, 1)), (img[:, 2:] - img[:, :-2]) / 2.0, np.zeros((rows, 1))))


def grady(img):
    img = img.astype('int')
    rows, cols = img.shape
    # Use vstack to add back the rows that were dropped as zeros
    return np.vstack((np.zeros((1, cols)), (img[2:, :] - img[:-2, :]) / 2.0, np.zeros((1, cols))))


# Performs fast radial symmetry transform
# img: input image, grayscale
# radii: integer value for radius size in pixels (n in the original paper); also used to size gaussian kernel
# alpha: Strictness of symmetry transform (higher=more strict; 2 is good place to start)
# beta: gradient threshold parameter, float in [0,1]
# stdFactor: Standard deviation factor for gaussian kernel
# mode: BRIGHT, DARK, or BOTH
def frst(img, radii, alpha, beta, stdFactor, mode='BOTH'):
    mode = mode.upper()
    assert mode in ['BRIGHT', 'DARK', 'BOTH']
    dark = (mode == 'DARK' or mode == 'BOTH')
    bright = (mode == 'BRIGHT' or mode == 'BOTH')

    workingDims = tuple((e + 2 * radii) for e in img.shape)

    # Set up output and M and O working matrices
    output = np.zeros(img.shape, np.uint8)
    O_n = np.zeros(workingDims, np.int16)
    M_n = np.zeros(workingDims, np.int16)

    # Calculate gradients
    gx = gradx(img)
    gy = grady(img)

    # Find gradient vector magnitude
    gnorms = np.sqrt(np.add(np.multiply(gx, gx), np.multiply(gy, gy)))

    # Use beta to set threshold - speeds up transform significantly
    gthresh = np.amax(gnorms) * beta

    # Find x/y distance to affected pixels
    gpx = np.multiply(np.divide(gx, gnorms, out=np.zeros(gx.shape), where=gnorms != 0),
                      radii).round().astype(int)
    gpy = np.multiply(np.divide(gy, gnorms, out=np.zeros(gy.shape), where=gnorms != 0),
                      radii).round().astype(int)

    # Iterate over all pixels (w/ gradient above threshold)
    for coords, gnorm in np.ndenumerate(gnorms):
        if gnorm > gthresh:
            i, j = coords
            # Positively affected pixel
            if bright:
                ppve = (i + gpx[i, j], j + gpy[i, j])
                O_n[ppve] += 1
                M_n[ppve] += gnorm
            # Negatively affected pixel
            if dark:
                pnve = (i - gpx[i, j], j - gpy[i, j])
                O_n[pnve] -= 1
                M_n[pnve] -= gnorm

    # Abs and normalize O matrix
    O_n = np.abs(O_n)
    O_n = O_n / float(np.amax(O_n))

    # Normalize M matrix
    M_max = float(np.amax(np.abs(M_n)))
    M_n = M_n / M_max

    # Elementwise multiplication
    F_n = np.multiply(np.power(O_n, alpha), M_n)

    # Gaussian blur
    kSize = int(np.ceil(radii / 2))
    kSize = kSize + 1 if kSize % 2 == 0 else kSize

    S = cv2.GaussianBlur(F_n, (kSize, kSize), int(radii * stdFactor))

    return S


img = cv2.imread('data/P1190263.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

result = frst(gray, 60, 2, 0, 1, mode='BOTH')

cv2.imshow("detected circles", result)
cv2.waitKey(0)

[image]
I only get this nearly black output (it has some very dark shades of gray). I do not know what to change and would appreciate any help!
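
For completeness, this is how the float output could be rescaled to 8 bit purely for display (only a display-scaling sketch; it would not fix the transform itself if something else is wrong):

display = cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow("detected circles (scaled)", display)
cv2.waitKey(0)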