1184

J. Opt. Soc. Am. A / Vol. 30, No. 6 / June 2013

Wu et al.

Fast and accurate circle detection using gradient-direction-based segmentation

Jianping Wu,* Ke Chen, and Xiaohui Gao

School of Computer Engineering, Suzhou Vocational University, Suzhou 215104, China
*Corresponding author: [email protected]

Received January 10, 2013; revised March 23, 2013; accepted April 24, 2013; posted April 26, 2013 (Doc. ID 182381); published May 20, 2013

We present what is to our knowledge the first-ever fitting-based circle detection algorithm, namely, the fast and accurate circle (FACILE) detection algorithm, based on gradient-direction-based edge clustering and direct least square fitting. Edges are segmented into sections based on gradient directions, and each section is validated separately; valid arcs are then fitted and further merged to extract more accurate circle information. We implemented the algorithm in C++ and compared it with four other algorithms. Testing on simulated data showed FACILE was far superior to the randomized Hough transform, the standard Hough transform, and fast circle detection using gradient pair vectors with regard to processing speed and detection reliability. Testing on publicly available standard datasets showed FACILE outperformed robust and precise circular detection, a state-of-the-art arc detection method, by 35% in recognition rate, and it is also a significant improvement over the latter in processing speed. © 2013 Optical Society of America

OCIS codes: (040.1880) Detection; (150.1135) Algorithms.
http://dx.doi.org/10.1364/JOSAA.30.001184

1. INTRODUCTION
The issue of detecting circular features arises in many areas of image analysis and is of particular relevance in industrial applications such as automatic inspection of manufactured products and components, automatic target detection and tracking, and aided vectorization of drawings. Because of its ubiquity, circle detection is a well-established problem that has been extensively investigated. Unfortunately, to the best of our knowledge, it has not been fully solved, as all of the circle-detection algorithms published so far have been too slow, too error prone, or too limited in their respective application scopes.

The earliest circle detection in digital images was carried out by the circular Hough transform (CHT) (Hough [1]) and derivative accumulative voting algorithms (Duda and Hart [2], Davies [3], Kierkegaard [4]). A typical Hough-based approach to circular object detection employs an edge detector and uses the edge information to deduce center position and radius. Peak detection is then performed by averaging, filtering, and histogramming the transform space. CHT is suitable for detecting circles when edge pixels are sparsely distributed and the radius distribution of the circles is narrow. Because of its large storage requirement, computational complexity, low processing speed, and inaccuracy in both extracted radius and center-position information (see Atherton and Kerbyson [5]), the raw CHT's application scope has its limits. Improvements to CHT were made to improve the detection rate of the algorithm or, more commonly, to reduce computational complexity and boost processing speed. The means to refine the CHT algorithm include utilization of edge orientation (Kimme et al. [6]), use of a single accumulator space for multiple radii (Minor and Sklansky [7]), addition of phase information to code for radii (Atherton and

Kerbyson [8]), and implementation of convolution operators as Hough transform filters (Kerbyson and Atherton [9]). Nashashibi et al. [10] presented an algorithm combining CHT and gradient dispersion to detect round traffic signs for vehicle-driving assistance.

In addition to CHT, other approaches have also been proposed to tackle circle detection. Dave [11,12] introduced fuzzy shell-clustering (FSC), which was superior to CHT in memory requirement and computational cost. Bezdek and Hathaway [13] suggested a modification to Dave [11] to reduce the computational burden arising from Newton's method. Krishnapuram et al. [14] refined Dave [11] and Bezdek and Hathaway [13] further by eliminating the nonlinear equations to enhance computational performance. Schuster and Katsaggelos [15] presented a new circle-detection algorithm based on a weighted minimum mean square error estimator. It is faster than the Hough transform, but its detection is limited to a single circle per picture. Ceccarelli et al. [16] introduced a circle-detection algorithm based on correlation of the magnitude and direction of image gradients. Rad et al. [17] presented the fast circle detection (FCD) method based on gradient pair vectors, which was able to detect circles accurately and quickly. But it sets an undesirable prerequisite that circles be entirely brighter or darker than their background. What is more, because the two pixels of a contributing pair in FCD have to lie halfway around the circle from each other, circles that are occluded 50% or more are usually not detectable. More recently, the genetic algorithm (GA) (Yao [18], Ayala-Ramirez et al. [19]) was added to the pool of solutions to circle detection. Like FSC, its processing speed, convergence, and accuracy are affected by the number of generations or iterations. Dasgupta et al. [20] applied the bacterial foraging optimization algorithm (BFOA) to circle detection.
The testing showed that

BFOA's processing speed was comparable to that of GA, though not as fast as that of CHT-based algorithms.

One algorithm that has been more extensively investigated is the randomized CHT (RCHT) (Xu et al. [21], Chen and Chung [22], and Chung and Huang [23]). RCHT significantly reduces CHT's computational cost and boosts its processing speed. In principle, RCHT randomly selects a number of pixels from an image and then fits them to a parameterized curve. If the pixels fit within a tolerance, they are added to an accumulator with a score. Once a specified number of pixel sets has been selected, the circles with the best scores are taken from the accumulator, and their parameters are used to represent circles in the image. Because only a small number of pixels are involved in processing, RCHT requires much less storage and runs much faster than CHT. However, RCHT's speed gain over CHT is achieved partly at the expense of detection reliability and accuracy. RCHT's performance is decent when edge pixels are sparse; however, its detection reliability deteriorates significantly in cluttered scene images, as it tends to raise intolerably many false alarms while, at the same time, being much more likely to miss valid circles. Most recently, Lamiroy and Guebbas [24] presented a robust and precise circular-arc-detection algorithm based on random sample consensus minimization (Qgar–Lamiroy), which tries to match an algebraic circle formula to a set of discrete points retrieved from the skeleton while measuring the overall fitting error. The method performed better than other algorithms in recognition rate and won the Arc Segmentation Contest of the International Workshop on Graphics Recognition organized by the International Association for Pattern Recognition (GREC 2011) [25]; it represents the state of the art in circle/arc detection.

In this paper, we present a novel and very simple circle-detection algorithm taking advantage of gradient-direction-based edge segmentation.
As shown in Fig. 1, the algorithm is composed of 10 steps, with the first four steps devoted to edge extraction or preprocessing and the rest of the steps tackling arc/circle detection and validation. The following sections describe the details of our algorithm: Section 2 introduces the preprocessing steps, Section 3 focuses on circle detection, Section 4 describes the experimental results, and Section 5 draws the conclusion.

Fig. 1. Step-by-step implementation of FACILE.


2. PREPROCESSING
In preprocessing, a color digital image is converted into a grayscale image, and then a Gaussian smoothing filter is used to relieve image noise. Finally, single-pixel-width edges are generated via Canny edge detection [26]. We mark every pixel with its gradient direction: nonedge pixels are marked 0, and valid edge pixels are marked between 1 and 180 based on their gradient angles. To differentiate it from a nonedge pixel, a valid edge pixel with a gradient angle of 0° is always marked 180. The result is an M × N matrix, Matrix_edge, whose element values range from 0 to 180, with 0 referring to nonedge pixels and 1–180 referring to valid edge pixels. Here M and N are the width and height of the original image. In implementation, one byte is allocated for each element of Matrix_edge.
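As a concrete illustration, the per-pixel marking described above (together with the four-way gradient-direction category test used later in Section 3.A) might be sketched as follows. The function names and the use of atan2 on Sobel-style gradient components are our own assumptions, not the paper's actual implementation:

```cpp
#include <algorithm>
#include <cmath>

// Map a gradient vector (gx, gy) at a pixel to the paper's 0..180 mark:
// nonedge pixels get 0; edge directions are taken modulo 180°, and a 0°
// gradient angle is stored as 180 so that 0 unambiguously means "nonedge".
int gradientMark(double gx, double gy, bool isEdge) {
    if (!isEdge) return 0;
    const double pi = std::acos(-1.0);
    double deg = std::atan2(gy, gx) * 180.0 / pi;  // in (-180, 180]
    int m = ((static_cast<int>(std::lround(deg)) % 180) + 180) % 180;  // [0, 180)
    return m == 0 ? 180 : m;
}

// Test whether a mark falls in the category centered at centerDeg (0, 45,
// 90, or 135) with total angular span alphaDeg, treating directions mod 180°.
bool inCategory(int mark, int centerDeg, int alphaDeg) {
    if (mark == 0) return false;               // nonedge pixel
    int d = std::abs(mark % 180 - centerDeg);  // a mark of 180 means 0°
    d = std::min(d, 180 - d);                  // angular distance modulo 180
    return 2 * d < alphaDeg;                   // open interval (center ± α/2)
}
```

With α = 90° as in Fig. 2, a typical mark satisfies `inCategory` for exactly two of the four centers, matching the nonexclusive four-way segmentation.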

3. CIRCLE DETECTION AND VALIDATION
A. Edge Segmentation
Fitting is the fastest approach to deriving the circle information for participating edge pixels. The reason it was never used in circle detection is that it performs accurately only if two conditions are met: most of the pixels participating in the fitting belong to the same circle, and these pixels cover a sufficient span of the circle (for example, 1/6 of the circle). In cluttered scene images, circles are usually tangled with other lines or curves, and directly executing fitting may not yield a desirable result. So, before fitting, it is essential to segment the edge pixels in a way in which, for each circle, at least one resultant edge-pixel segment (arc section) meets the aforementioned conditions. Because an edge pixel's gradient direction is nearly perpendicular to the edge's tangent direction, we propose in this paper a simple method to separate arcs from their background. Our method is to classify the edge pixels into four nonexclusive categories based on gradient directions. The four angle categories are centered at 0°, 45°, 90°, and 135°, respectively, and have the same angular span of α°; more specifically, they span (−α/2, α/2), (45° − α/2, 45° + α/2), (90° − α/2, 90° + α/2), and (135° − α/2, 135° + α/2), respectively. The segmentation results in four matrices, with each matrix storing one category of segmentation result. To shed more light on the segmentation process, we single out the category (90° − α/2, 90° + α/2). Starting from Matrix_edge, a binary matrix of the same dimension is created, and each of its elements is marked 1 or 0 based on whether the value of the corresponding element in Matrix_edge is between 90° − α/2 and 90° + α/2. Figure 2 shows

Fig. 2. Edge segmentation.


the segmentation results for an edge map containing one full circle tangled with three partial circles (α = 90°). Figure 2(a) is the original gray-scale image. Figure 2(b) is the Canny edge distribution, in which all the edges from the four different circles are tangled, and it is impossible to yield satisfactory results by direct least square fitting (DLSF) on the edge cluster. Figures 2(c)–2(f) show the segmented edge pixels whose gradient directions fall between −45° and 45°, 0° and 90°, 45° and 135°, and 90° and 180°, respectively. Figure 2 clearly shows that the originally tangled circles (or arcs) are separated from each other after the segmentation, so performing DLSF on each cluster of connected edge pixels becomes much more likely to yield accurate results. After this step, we have four binary matrices, with each matrix coming from one of the four categories.

B. Clustering of Edge Pixels into Regions of Interest
For each of the above four matrices, we run a depth-first search to retrieve clusters of connected edge pixels, which are conveniently called regions of interest (ROIs). In the case of a circle, an ROI generally refers to a segment of the circle (an arc). The information for an ROI includes the upper-left and lower-right corner coordinates of the ROI, its ID, and the number of edge pixels in it. After clustering, each edge pixel is tagged with its ROI ID.

C. ROI Sifting
Instead of directly performing DLSF on each of the ROIs obtained in the previous step, we use the following approach to sift out most of the nonarc ROIs and boost the processing speed. Although DLSF is a noniterative process, it does require at least 7 multiplications and 11 additions for each edge pixel, and directly performing DLSF on all ROIs could still significantly affect the processing speed, especially in situations where edge pixels are densely populated, as in cluttered scene images. So we propose an ROI-sifting algorithm to decimate the non-circle-arc ROIs before fitting.
ROI sifting contains two steps. First, an ROI that contains an insufficient number of pixels is instantly discarded; this removes small edge sections that are unlikely to generate meaningful information for a circle via DLSF. Second, an ROI (here, an edge section) that has an inconsequential end-to-end bending angle is also removed; this eliminates straight lines or quasi-straight lines that are unlikely to belong to any arc or circle. To calculate the bending angle, we first locate the two end points [denoted A(xA, yA) and B(xB, yB)] and the midpoint [denoted C(xC, yC)], as shown in Fig. 3. Obviously, ∠ACB signifies the degree of bending of an ROI from end to end. We obtain ∠ACB using the following formula:

∠ACB = arccos{ [(xA − xC)(xB − xC) + (yA − yC)(yB − yC)] / √([(xA − xC)² + (yA − yC)²][(xB − xC)² + (yB − yC)²]) }.   (1)

∠ACB lies between 0° and 180° and indicates the magnitude of the edge-section bending angle, with a smaller ∠ACB denoting more substantial bending. More specifically, ∠ACB = 180° means the edge section is a straight line, and ∠ACB = 90° means the edge section is a semicircle. In between is ∠ACB = 135°, which means the edge section bends 90° from end to end, implying a quadrant section. Generally, the relationship between the bending angle and ∠ACB is ∠Bending = 360° − 2∠ACB.

Fig. 3. ROI sift via ∠ACB. A and B are the end points of the ROI and C is the midpoint of the ROI.

In practice, it is required that a valid circular arc's ∠ACB be less than a threshold T∠ (say, for example, 150°), and ROIs with ∠ACB ≥ T∠ are regarded as non-circle-edge segments and so can be removed. The value of T∠ has a significant effect on both the arc-detection rate and the processing time. A higher T∠ causes a smaller percentage of ROIs to be sifted out and leads to more ROIs being processed by the relatively time-consuming DLSF, which eventually increases the processing time of circle detection. On the other hand, a lower T∠ improves the processing speed, but it also causes valid arcs with relatively small bending angles to be incorrectly dropped, which eventually lowers the circle-detection rate. In addition, the choice of T∠ is also restricted by the segmentation. Generally, the segmentation described above cuts a full circle into arcs whose lengths are about 1/4 of the full circle, which means that most valid arcs have an ∠ACB around 135°. So, to prevent these arcs from being prematurely discarded, T∠ has to be significantly larger than 135°. At the same time, to eliminate most of the straight lines and quasi-straight lines, T∠ has to be significantly smaller than 180°. Generally it makes sense to set T∠ to 157.5°, the midpoint between 135° and 180°. What is more, to speed up the sifting process, we can use cos² T∠ as the threshold in implementation. This way, a valid arc candidate ROI has to meet the following condition:

[(xA − xC)(xB − xC) + (yA − yC)(yB − yC)]² / {[(xA − xC)² + (yA − yC)²][(xB − xC)² + (yB − yC)²]} < cos² T∠.   (2)

Formula (2) does not involve square-root or arccosine operations, and so its cost is minimal.

D. Circle Fitting for ROIs
Each of the remaining ROIs is regarded as a candidate circular arc. There are several ways to compute an arc's radius and center position from an ROI. All methods start from the following circle equation:

(x − xc)² + (y − yc)² = R²,   (3)

where R is the radius of the arc, and xc and yc are the horizontal and vertical positions of the arc's center, respectively. Circle fitting is the most straightforward way to obtain circle information. Gander et al. [27] presented two types of least-square circle fitting, namely, algebraic fitting (also known as DLSF) and geometric fitting (an iterative fitting that is more accurate than DLSF but also more time-consuming; for details, see Gander et al. [27]). Another interesting fitting algorithm is the ellipse DLSF presented by Fitzgibbon et al. [28]. Figure 4 shows the fitting results for an arc spanning 1/4 of a circle using the above three


fitting approaches. The highlighted arc is the ROI used for fitting. The fitted ellipse comes from Fitzgibbon et al.'s [28] ellipse fitting, the outer circle comes from geometric least-square circle fitting, and the inner circle comes from DLSF. From Fig. 4, it is obvious that Fitzgibbon's ellipse fitting converges to a noncircular ellipse and so is unfit for circle detection; DLSF produces results similar to those of geometric fitting but requires much less computation than the iterative geometric fitting. Balancing processing speed and accuracy, Kasa's [29] DLSF algorithm is adopted; it costs 7 multiplications and 11 additions per edge pixel, is therefore very fast, and is generally very accurate for any arc that covers at least 1/4 of a full circle. For more information, please refer to Umbach and Jones [30]. The following is devoted to the Kasa [29] fitting algorithm. Assuming that we have a set of n points (x1, y1), (x2, y2), …, (xn, yn) that fall on the circumference of a circle whose radius is R and whose center position is at (xc, yc), then for any edge point (x, y) the following equation stands:


Fig. 4. Fitting results from the pink and red pixels based on three curve-fitting algorithms.

(x − xc)² + (y − yc)² = R²,   (4)

or

x² + y² + ax + by + c = 0,   (5)

where a = −2xc, b = −2yc, and c = xc² + yc² − R². If we set up a function

f(a, b, c) = Σ_{i=1..n} (xi² + yi² + a·xi + b·yi + c)²,   (6)

then we expect that the best choice of values for a, b, and c should make f(a, b, c) reach its minimum, which requires that

∂f/∂a = 2(a Σxi² + b Σxiyi + c Σxi + Σxi³ + Σxiyi²) = 0,
∂f/∂b = 2(a Σxiyi + b Σyi² + c Σyi + Σxi²yi + Σyi³) = 0,   (7)
∂f/∂c = 2(a Σxi + b Σyi + cn + Σ(xi² + yi²)) = 0.

Equation (7) is a linear system in a, b, and c:

[Σxi²    Σxiyi   Σxi] [a]     [Σ(xi³ + xiyi²)]
[Σxiyi   Σyi²    Σyi] [b] = − [Σ(xi²yi + yi³)]
[Σxi     Σyi     n  ] [c]     [Σ(xi² + yi²)  ]

Solving it yields the following results:

xc = −a/2,  yc = −b/2,  R = √( Σ[(xi − xc)² + (yi − yc)²] / n ),   (8)

where (xc, yc) and R represent the center position and the radius of the circle, respectively. A standard deviation is defined below as an important parameter to indicate the smoothness of the arc:

std = √( Σ|(xi − xc)² + (yi − yc)² − R²|² / n ) / (2R).   (9)

Here the standard deviation tells how closely the edge pixels fall around the fitted circle.

E. Identification of Valid Arcs
After arc information has been obtained via DLSF, the following criteria are adopted to validate an arc:
1. √(w² + h²)/r ≥ T_ratio, where w and h are the width and height of the ROI, r is the radius, and T_ratio is the ratio threshold. Its purpose is to remove quasi-straight lines.
2. std < T_std, where std is the standard deviation defined in Eq. (9) and T_std is the threshold for this standard deviation. Its purpose is to remove nonsmooth edges.
3. A valid arc, when split into two subarcs across the line passing through (xm, ym) = (Σxi/n, Σyi/n) and (xc, yc), must have its two subarcs meet a certain consistency condition. We use the following formula to define the confidence:

Conf = (1 − 2|R1 − R2|/(R1 + R2)) (1 − 2|C⃗1 − C⃗2|/(R1 + R2)) ≥ T_confidence,   (10)

where R1 and R2 are the radii, and C⃗1 and C⃗2 the center positions, of the two subarcs shown in Fig. 5, respectively, and T_confidence is the confidence threshold.

F. Arc Merging and Circle Information Refinement
The segmentation usually causes duplicated detections of an arc or a circle. For example, a full circle is segmented into eight nonexclusive arcs and so is detected eight times. These detections need to be merged to remove the duplicates and, in the meantime, refine the radius and center-position information. From each valid arc [center (xc, yc) and radius R], we search for the other edge pixels potentially belonging to the same circle in an annular area lying between two concentric circles centered at (xc, yc). The inner circle has radius R − ΔR and the outer circle has radius R + ΔR. Because a circle edge pixel's gradient direction is generally parallel to the radial direction, it makes sense to keep only those edge pixels whose gradient directions are nearly parallel to the radial direction of the circle. Figure 6 illustrates this process. Figure 6(a) is the original image. Figure 6(b) shows the Canny edges. Figure 6(c) shows a validated arc (red) and its fitted circle (green). Figure 6(d) shows the edge pixels in the annular area surrounding the circumference of the fitted circle that meet the condition that their gradient directions deviate from the radial direction by less than a certain threshold.


Fig. 5. Validation of an arc by splitting the arc into two subarcs across the straight line passing through both the arc's gravity center (xm, ym) and its circular center (xc, yc).
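As an illustration of the subarc-consistency check of Eq. (10) pictured in Fig. 5, a minimal sketch might look as follows; the function name and argument layout are our own, since the paper specifies only the formula:

```cpp
#include <cmath>

// Eq. (10): confidence that two subarc fits describe the same circle.
// r1, r2 are the subarc radii; (x1, y1) and (x2, y2) their fitted centers.
// Tconf = 0.25 follows the threshold used in the paper's experiments.
bool subarcsConsistent(double r1, double x1, double y1,
                       double r2, double x2, double y2,
                       double Tconf = 0.25) {
    double s = r1 + r2;
    double dc = std::hypot(x1 - x2, y1 - y2);        // |C1 - C2|
    double conf = (1.0 - 2.0 * std::fabs(r1 - r2) / s)
                * (1.0 - 2.0 * dc / s);
    return conf >= Tconf;
}
```

Identical fits give conf = 1; the confidence decays as either the radii or the centers of the two subarc fits drift apart.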

Figure 6(e) shows the resulting fuller arc. Finally, Fig. 6(f) shows the fitted circle from the arc in Fig. 6(e).

G. Computational Cost Estimate for Circle Detection
Circle detection's computational cost comes primarily from two sources: the number of memory accesses for each pixel and the computation involved in each access. For the sake of convenience, it is reasonable to assume that 4% of image pixels are edge pixels, which is equivalent to 1/5 of the pixels along columns and rows being edge pixels.
1. The edge segmentation involves one memory access for each pixel. For nonedge pixels, there is no further cost beyond one comparison. For each edge pixel, four comparisons are needed to classify it into two of the four nonexclusive categories. The total cost of this step is one memory access and 1.16 comparisons per pixel.
2. For each of the four categories of edge pixels, clustering involves one memory access for each pixel and eight neighbor-pixel searches and comparisons for each edge pixel. So this step involves four memory accesses and four comparisons for each pixel. For each edge pixel, there are 16 memory accesses because it appears in two categories. The total cost is 4 + 0.04 × 16 = 4.64 memory accesses per pixel and 4.64 comparisons.
3. Cluster sifting's cost is trivial because, for each ROI, only three columns or rows are searched. If the ROI's width is greater than its height, the leftmost, middle, and rightmost columns are searched to retrieve its left, middle,

Fig. 6. Gradient-based arc merging and duplicate elimination.


and right end pixels. Otherwise, we search its top, middle, and bottom rows to retrieve its top, middle, and bottom pixels. If we assume that an average ROI contains 30 pixels, then this search involves only three pixels, or 1/10 of the ROI's pixels. There is also one evaluation of Eq. (2) per ROI, so its cost can be amortized over the 30 pixels. Overall, this computational cost is negligible in comparison with the previous steps.
4. The computational cost of valid-arc identification is also negligible because most of the ROIs do not have sufficient bending angles and so are eliminated before reaching this step.
5. Arc merging and circle-fitting refinement cost even less than valid-arc identification.
Overall, the total computational cost is approximately six memory accesses and comparisons per pixel, which means the algorithm is not only very fast but also very stable in processing speed.
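The Kasa fit of Section 3.D, Eqs. (4)–(8), reduces to solving a 3 × 3 linear system over sums of the ROI's pixel coordinates. The sketch below is our own (solving the normal equations by Cramer's rule), not the paper's implementation:

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct CircleFit { double xc, yc, r; bool ok; };

// Kasa direct least-square circle fit: accumulate the sums of Eq. (7),
// solve the 3x3 normal equations for (a, b, c), then recover
// xc = -a/2, yc = -b/2, R = sqrt(xc^2 + yc^2 - c).
CircleFit kasaFit(const std::vector<std::pair<double, double>>& pts) {
    double n = static_cast<double>(pts.size());
    if (n < 3) return {0, 0, 0, false};
    double Sx = 0, Sy = 0, Sxx = 0, Syy = 0, Sxy = 0, Sxz = 0, Syz = 0, Sz = 0;
    for (const auto& p : pts) {
        double x = p.first, y = p.second, z = x * x + y * y;
        Sx += x; Sy += y; Sxx += x * x; Syy += y * y; Sxy += x * y;
        Sxz += x * z; Syz += y * z; Sz += z;
    }
    double M[3][3] = {{Sxx, Sxy, Sx}, {Sxy, Syy, Sy}, {Sx, Sy, n}};
    double rhs[3] = {-Sxz, -Syz, -Sz};       // Eq. (7): M [a b c]^T = rhs
    auto det3 = [](double m[3][3]) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    };
    double D = det3(M);
    if (std::fabs(D) < 1e-12) return {0, 0, 0, false};  // degenerate ROI
    double sol[3];
    for (int c = 0; c < 3; ++c) {            // Cramer's rule, column c
        double Mc[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                Mc[i][j] = (j == c) ? rhs[i] : M[i][j];
        sol[c] = det3(Mc) / D;
    }
    double xc = -sol[0] / 2.0, yc = -sol[1] / 2.0;
    double r2 = xc * xc + yc * yc - sol[2];
    if (r2 <= 0) return {0, 0, 0, false};
    return {xc, yc, std::sqrt(r2), true};
}
```

Because everything reduces to running sums over the ROI's pixels, the fit costs a constant number of multiply–adds per pixel, in line with the constant per-pixel cost quoted in Section 3.C.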

4. EXPERIMENTAL EVALUATION OF THE ALGORITHM
We compared our algorithm, the fast and accurate circle (FACILE) detection algorithm, with FCD [17], CHT [2], and RCHT [23] on simulated data via a desktop PC (2.99 GHz Intel Core 2 Duo CPU, with 2 GB memory). In addition, we also compared FACILE with Qgar–Lamiroy [24,31,32] on standard datasets using Liu's performance evaluation metrics [33,25]. In the tests, the FACILE algorithm's four parameters introduced in the previous sections were set to T∠ = 157.5°, T_ratio = 1, T_std = 0.5, and T_confidence = 0.25, respectively. The following sections summarize our testing results.

A. Experimental Results on Simulated Data
We compared FACILE to FCD [17], CHT [2], and RCHT [23]. The simulated data was a 1000 × 160 pixel image shown in Fig. 7. To simulate circle detection under different occlusion conditions, three types of occlusions were used: a fan occlusion (first row), a translating occluding line (second row), and a circular occlusion (third row). There were a total of 60 circles in the image, with 20 circles in each of the three rows. Each circle had a fixed 15-pixel radius and was centered at (50m + 25, 55n + 25), where m and n are 0-based column and row numbers.

1. Processing Speed on Simulated Data
The processing speed of circle detection is closely related to the radius range of the circles to be detected, with the speed becoming faster for a smaller radius range. For simplicity, we fixed the radius minimum (lower bound) of detectable circles at 10 pixels and increased the upper bound from 10 to 100 pixels to observe each algorithm's processing-time behavior. As shown in Fig. 8, FACILE's processing time changed from 2.3 to 6.2 ms when the radius upper bound increased from 10 pixels (equal to the lower bound) to 15 pixels. After that, the time remained stable, around 6.2 ms, as the radius upper bound increased from 15 to 100 pixels.
In contrast, CHT's processing time increased exponentially from 5.9 to 3670 ms, a scale-up of about 600 times, over the same range. The processing time for FCD increased approximately linearly from 56.7 to 152.8 ms. As for RCHT, its processing time fluctuated around 300 ms with the increase of the


Fig. 7. Simulated dataset used to compare the performance of FACILE with other circle-detection algorithms.

Fig. 8. Processing time on simulated data for the four circle-detection algorithms. Here the radius lower bound is fixed at 10 pixels.

radius upper bound; the reason, we think, could well be its random selection of pixel sets as seeds. Based on the experiment, we summarize our observations as follows:
1. Overall, FACILE was by far the fastest algorithm at detecting circles, particularly when the detected radius range was wide. It was at least three times as fast as any other algorithm and up to 600 times as fast as CHT when the detection radius range expanded to 100 pixels. It consistently performed at least 10 times faster than FCD and RCHT. Most importantly, its processing speed was almost unaffected by changes in the radius range of the circles.
2. CHT seemed more suitable than FCD and RCHT for detecting circles with narrowly distributed radii, but its speed degraded much faster than the latter two's as the detected circles' radius range widened.
3. FCD's processing time increased by more than 2.7 times when the detection radius range changed from (10, 10) to (10, 100). It took at least 10 times as long as FACILE to process an image.
4. RCHT's processing speed was slower than that of FCD but much faster than that of CHT when the detection radius range got wider.

2. Detection Error on Simulated Data
Detection errors come from two sources, false-negative errors and false-positive errors, where false negative refers to failure to detect a valid circle and false positive refers to erroneous confirmation of a noncircle object. Detection error depends on the validation criteria set by a circle-detection algorithm. In the experiment, we used the accumulation factor as a threshold to decide whether a valid circle existed. Here the accumulation factor was defined as the ratio of the number of

detected valid edge pixels to the full length of the circumference of the circle. A circle was validated or rejected solely based on the accumulation factor. This also applied to the other three algorithms. As shown in Fig. 9, the test results for detection errors can be summarized as follows:
1. FACILE was error-free when the threshold was between 0.1 and 0.3. When the threshold went higher than 0.3, its false-negative errors picked up gently, but its false-positive error count held firmly at zero.
2. In comparison, FCD failed to detect at least 17 (28.3%) of the circles, though it performed almost as well as FACILE in rejecting noncircle objects. We believe its poor false-negative performance comes primarily from its requirement that any pair of pixels contributing to the accumulation be half a circumference away from each other. As a result, circular arcs spanning less than a semicircle cannot be detected by FCD.
3. For CHT, a lower threshold caused very high false-positive errors. On the other hand, a higher threshold caused very high false-negative errors. There existed an optimal point where the total count of the two types of errors reached its minimum, 7 to be exact, when the threshold reached 0.45. At this point, CHT performed better than FCD but still worse than FACILE in detection error.
4. RCHT seemed to be a little more stable than CHT; its total error count lay between 11 and 14.
Overall, FACILE performed by far the best among the four algorithms in detection error counts.

3. Detection Accuracy
The accuracy of extracted circle information has two parts, center-position accuracy and radius accuracy. For the 60 circles shown in Fig. 7, FACILE, FCD, CHT, and RCHT


Fig. 9. Comparison of false-positive error and false-negative error counts on simulated data for four algorithms.

detected different number of circles. For example, FACILE successfully detected all of the 60 circles∕arcs, FCD detected 43, CHT captured 59, and RCHT extracted 50 circles. The radius and center-position accuracy computation for an algorithm was carried out based on the circles that the algorithm successfully detected. Such a comparison was particularly unfair to FACILE because those circles that FACILE successfully detected and others failed were the ones with the largest fraction of their circumferences occluded and so the least information to extract, which tended to affect FACILE’s detection accuracy. In addition, discretization error arising from using edge pixels to represent the originally continuously distributed edge points also played an important role in increasing errors. But it still makes sense to make a quantitative comparison among the four algorithms. Table 1 details the detection results for FACILE, FCD, CHT, and RCHT. For FACILE, the average error for center positions was 0.18 pixels and standard deviation was 0.30 pixels. 93.3% of circles’ centerposition error was under 0.5 pixels, which means if we round the center positions to their nearest integers, 93.3% of center positions would be identical to their actual positions. As for the FACILE’s radius accuracy, the average error was 0.22 pixels and standard deviation was 0.28 pixels, with the maximum radius error at 1.24 pixels. Among the 60 circles, for 93.3% of the circles the radius error was less than 0.5 pixels. As far as the accuracy was concerned, FCD performed the best among the four algorithms, whose radius and center-position error scored a perfect 0 among the 47 circles that it detected. CHT performed better than FACILE but a little worse than FCD. And RCHT came at the worst with its average centerposition and radius error at 0.81 and 0.91 pixels, respectively. B. 
B. Experimental Results on Publicly Available Datasets and Discussion

In order to evaluate the performance of FACILE more objectively and thoroughly, we also compared FACILE, using publicly available standard datasets, with Qgar–Lamiroy [24,34],

a state-of-the-art circle/arc-detection algorithm that outperformed all other participants in the GREC 2011 Arc Segmentation Contest [25]. The datasets included 27 images with corresponding ground-truth files, which recorded the center, radius, and end-point information for each arc/circle in the images. Because one ground-truth file, P061-400dpi.vec, did not contain any arc information, the image P061-400dpi.bmp was removed from the comparison. We used the vector recovery index (VRI) [25,33,35] as the evaluation index to quantify the arc/circle recognition capability of each algorithm. VRI (in the range [0, 1]) is calculated as follows:

    VRI = sqrt( Dv (1 − Fv) ),    (11)

where Dv is the detection rate and Fv is the false-alarm rate. A higher VRI score indicates a better recognition rate. VRI integrates four quality indexes, namely, the center position, the radius, and the two end-point positions, to comprehensively evaluate how closely the detection data conform to the ground truth. It is so rigorous that a detected arc deviating by 2 pixels or more from the ground truth in either radius or center position turns an otherwise successful detection into a failure, in which case both Dv and Fv are affected (Dv drops and Fv rises). Our performance comparison left out CHT, FCD, and RCHT. CHT was excluded because the standard datasets contain a large number of arcs and circles whose radius range spans more than 200 pixels; detecting them with CHT would not only require memory far beyond our hardware capacity but could also take prohibitively long (1000 s or more) to process a single image. As for FCD, its inherent inability to detect arcs spanning less than a semicircle made it unsuitable for the short-arc-dominated standard datasets. Finally, RCHT's inadequacy in pinpointing the center position and radius of an arc/circle essentially made it a nonperformer under VRI.

Table 1. Radius and Center-Position Accuracy Comparison for Different Circle-Detection Algorithms on the Simulated Dataset Shown in Fig. 7

Algorithm         Avg Center-Position   Center-Position Error   Avg Radius      Radius Error
(Circles           Error (pixels)        Std. Dev. (pixels)      Error (pixels)  Std. Dev. (pixels)
 Detected)
FACILE (60)        0.18                  0.30                    0.30            0.28
FCD (47)           0.0                   0.0                     0.0             0.0
CHT (59)           0.13                  0.22                    0.03            0.19
RCHT (50)          0.81                  0.63                    0.97            0.67


Table 2. Performance Score [Dv, Fv, VRI] Comparison between Qgar–Lamiroy^a and FACILE

                       Qgar–Lamiroy^a               FACILE
Image^c    DPI      Dv      Fv      VRI         Dv      Fv      VRI
P061       200    0.286   0.160   0.490       0.587   0.022   0.758*
           300    0.243   0.315   0.408*      0.170   0.706   0.224
P168       200    0.072   0.775   0.127       0.521   0.206   0.643*
           300    0.320   0.547   0.381       0.496   0.236   0.616*
           400    0.306   0.623   0.34        0.448   0.215   0.593*
P229       200    0.354   0.225   0.524       0.541   0.294   0.618*
           300    0.384   0.239   0.54        0.567   0.124   0.705*
           400    0       1       0           0.080   0.874   0.100*
P234       200    0.121   0.545   0.235       0.636   0       0.797*
           300    0.695   0.1     0.791*      0.651   0.126   0.755
           400    0.687   0.133   0.772*      0.603   0.204   0.693
P238       200    0.139   0.308   0.311       0.541   0.540   0.499*
           300    0.146   0.61    0.239       0.525   0.529   0.497*
           400    0.139   0.708   0.201       0.469   0.493   0.487*
P253       200    0.507   0.355   0.572*      0.410   0.426   0.485
           300    0.507   0.275   0.606       0.582   0.036   0.749*
           400    0.435   0.419   0.503*      0.365   0.351   0.487
P254       200    0.106   0.365   0.260       0.303   0.424   0.425*
           300    0.178   0.572   0.276       0.275   0.417   0.400*
           400    0.198   0.594   0.283*      0.127   0.761   0.174
P260A      200    0.054   0.213   0.207       0.314   0.385   0.439*
           300    0.205   0.227   0.398       0.355   0.501   0.421*
           400    0.125   0.576   0.230       0.359   0.518   0.416*
P260B      200    0.074   0.498   0.193       0.105   0.260   0.278*
           300    0.176   0.700   0.230*      0.044   0.798   0.094
           400    0.084   0.866   0.106*      0.040   0.819   0.085
Avg               0.252   0.460   0.355       0.389   0.395   0.478

^a Qgar–Lamiroy data come from Table 2 of Al-Khaffaf et al. [25].
^b The highest VRI score at each resolution is marked with an asterisk (shown in bold in the original).
^c Each image was scanned at 200, 300, and 400 dots per inch (DPI), except that P061 does not include a 400 DPI image.

The results for Qgar–Lamiroy and FACILE are shown in Table 2; note that the Qgar–Lamiroy comparison data in Table 2 come from Table 2 of Al-Khaffaf et al. [25]. As far as VRI was concerned, FACILE outscored Qgar–Lamiroy on 18 images, and the latter prevailed on 8. Overall, FACILE's average VRI score was 0.478, approximately 35% better than Qgar–Lamiroy's 0.355. More specifically, FACILE performed better in both Dv and Fv on average, meaning it had a higher detection rate and a lower false-alarm rate. FACILE's average detection rate of 0.389 was about 1.54 times Qgar–Lamiroy's 0.252, while its false-alarm rate of 0.395 was about 14% lower than Qgar–Lamiroy's 0.460. From the tests, we found that FACILE seemed to perform best on the lower-resolution images, at least as far as VRI was concerned. Owing to a limitation of the current implementation, FACILE had difficulty detecting an arc that intersected other lines in such a way that none of its subarcs spanned more than 45°. In addition, FACILE tended to be confused by concentric circles or arcs whose radii were less than 5 pixels apart and was prone to mistake them as belonging to the same arc/circle during the merging stage.

Last but not least, we also tested the processing speed of FACILE. Over the 26 images, the average processing time, including edge detection and arc/circle detection, was 61.0 ms, well within the real-time detection zone. Preliminary testing of the Qgar–Lamiroy implementation showed that it took more than 1 s to process an image on average. Although the Qgar–Lamiroy implementation was not tuned for execution speed, we still believe FACILE represents a significant improvement over Qgar–Lamiroy in processing speed.

5. CONCLUSION

We have developed an extremely fast, reliable, and accurate algorithm for detecting circles. It detects entwined circles, concentric circles, and partly occluded circles with high precision. We believe it is by far the fastest published algorithm that reliably detects circles or arcs. Future work will improve the algorithm to handle the detection of broken arcs that span less than 45°.

ACKNOWLEDGMENTS

This research was supported by the Science and Technology Plan Project of Suzhou (No. SGZ2012061), the Suzhou Vocational University Project (Nos. 2012SZDYY06 and 2012SZDYY05), and the Opening Project of the Jiangsu Province Support Software Engineering R&D Center for Modern Information Technology Application in Enterprise (No. SX201203).

REFERENCES

1. P. V. C. Hough, "Method and means for recognizing complex patterns," U.S. patent 3,069,654 (18 December 1962).
2. R. Duda and P. Hart, "Use of the Hough transform to detect lines and curves in pictures," Commun. ACM 15, 11–15 (1972).
3. E. Davies, "A modified Hough scheme for general circle location," Pattern Recogn. Lett. 7, 37–43 (1988).
4. P. Kierkegaard, "A method for detection of circular arcs based on the Hough transform," Machine Vis. Appl. 5, 249–263 (1992).
5. T. Atherton and D. Kerbyson, "Using phase to represent radius in the coherent circle Hough transform," in Proceedings of IEE Colloquium on the Hough Transform (IEE, 1993), paper 5.
6. C. Kimme, D. Ballard, and J. Sklansky, "Finding circles by an array of accumulators," Commun. ACM 18, 120–122 (1975).
7. L. Minor and J. Sklansky, "Detection and segmentation of blobs in infrared images," IEEE Trans. Syst. Man Cybern. 11, 194–201 (1981).
8. T. Atherton and D. Kerbyson, "Size invariant circle detection," Image Vis. Comput. 17, 795–803 (1999).
9. D. Kerbyson and T. Atherton, "Circle detection using Hough transform filters," in IEE Conference on Image Processing and Its Applications (IEE, 1995), pp. 370–374.
10. F. Nashashibi, A. Bafgeton, F. Moutarde, and B. Bradai, "Method of circle detection in images for round traffic sign identification and vehicle driving assistance device," World Intellectual Property Organization patent WO2012076036 (14 June 2012).
11. R. Dave, "Fuzzy shell-clustering and applications to circle detection in digital images," Int. J. Gen. Syst. 16, 343–355 (1990).
12. R. Dave, "Generalized fuzzy c-shells clustering and detection of circular and elliptical boundaries," Pattern Recogn. 25, 713–721 (1992).
13. J. Bezdek and R. Hathaway, "Numerical convergence and interpretation of the fuzzy c-shell clustering algorithm," IEEE Trans. Neural Netw. 3, 787–793 (1992).
14. R. Krishnapuram, O. Nasraoui, and H. Frigui, "The fuzzy C spherical shells algorithm: a new approach," IEEE Trans. Neural Netw. 3, 663–671 (1992).
15. G. Schuster and A. Katsaggelos, "Robust circle detection using a weighted MSE estimator," in International Conference on Image Processing (ICIP) (IEEE, 2004), pp. 2111–2114.
16. M. Ceccarelli, A. Petrosino, and G. Laccetti, "Circle detection based on orientation matching," in 11th International Conference on Image Analysis and Processing Proceedings (IEEE, 2001), pp. 119–124.
17. A. Rad, K. Faez, and N. Qaragozlou, "Fast circle detection using gradient pair vectors," in Proceedings of 7th Digital Image Computing: Techniques and Applications (CSIRO, 2003), pp. 10–12.
18. J. Yao, "Fast robust genetic algorithm based ellipse detection," in 17th International Conference on Pattern Recognition (IEEE, 2004), Vol. 2, pp. 859–862.
19. V. Ayala-Ramirez, C. H. Garcia-Capulin, A. Perez-Garcia, and R. E. Sanchez-Yanez, "Circle detection on images using genetic algorithms," Pattern Recogn. Lett. 27, 652–657 (2006).
20. S. Dasgupta, S. Das, A. Biswas, and A. Abraham, "Automatic circle detection on digital images using an adaptive bacterial foraging algorithm," Soft Comput. 14, 1151–1164 (2009).
21. L. Xu, E. Oja, and P. Kultanen, "A new curve detection method: randomized Hough transform," Pattern Recogn. Lett. 11, 331–338 (1990).
22. T. C. Chen and K. Chung, "An efficient randomized algorithm for detecting circles," Comput. Vis. Image Underst. 83, 172–191 (2001).
23. K. Chung and Y. Huang, "Speed up the computation of randomized algorithms for detecting lines, circles, and ellipses using novel tuning-and-LUT-based voting platform," Appl. Math. Comput. 190, 132–149 (2007).
24. B. Lamiroy and Y. Guebbas, "Robust and precise circular arc detection," in Graphics Recognition: Achievements, Challenges, and Evolution, Vol. 6020 of Lecture Notes in Computer Science (Springer, 2010), pp. 49–60.
25. H. Al-Khaffaf, A. Talib, and M. Osman, "Final report of GREC'11 arc segmentation contest: performance evaluation on multiresolution scanned documents," in Graphics Recognition: New Trends and Challenges, Vol. 7423 of Lecture Notes in Computer Science (Springer, 2013), pp. 187–197.
26. J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell. 8, 679–698 (1986).
27. W. Gander, G. Golub, and R. Strebel, "Least-squares fitting of circles and ellipses," BIT 34, 558–578 (1994).
28. A. Fitzgibbon, M. Pilu, and R. Fisher, "Direct least square fitting of ellipses," IEEE Trans. Pattern Anal. Mach. Intell. 21, 476–480 (1999).
29. I. Kasa, "A circle fitting procedure and its error analysis," IEEE Trans. Instrum. Meas. IM-25, 8–14 (1976).
30. D. Umbach and K. N. Jones, "A few methods for fitting circles to data," IEEE Trans. Instrum. Meas. 52, 1881–1885 (2003).
31. Qgar Software, http://www.qgar.org.
32. "Inria Forge," http://gforge.inria.fr/projects/visuvocab/.
33. W. Y. Liu and D. Dori, "A protocol for performance evaluation of line detection algorithms," Machine Vis. Appl. 9, 240–250 (1997).
34. "GREC'11 Arc Segmentation Contest," http://www.cs.usm.my/arcseg2011.
35. http://www.cs.cityu.edu.hk/~liuwy/ArcContest/ArcContest2005.zip.
