Computer Methods and Programs in Biomedicine 117 (2014) 92–103

journal homepage: www.intl.elsevierhealth.com/journals/cmpb

Automated detection of fovea in fundus images based on vessel-free zone and adaptive Gaussian template

E-Fong Kao a,∗, Pi-Chen Lin b, Ming-Chung Chou a, Twei-Shiun Jaw c, Gin-Chung Liu c

a Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
b Division of Endocrinology and Metabolism, Department of Internal Medicine, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan
c Department of Medical Imaging, Kaohsiung Medical University Hospital, Kaohsiung, Taiwan

∗ Corresponding author at: Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung 807, Taiwan. Tel.: +886 07 312 1101 2358; fax: +886 07 311 3449. E-mail address: [email protected] (E.-F. Kao).
http://dx.doi.org/10.1016/j.cmpb.2014.08.003
0169-2607/© 2014 Elsevier Ireland Ltd. All rights reserved.

Article info

Article history:
Received 8 February 2014
Received in revised form 6 August 2014
Accepted 6 August 2014

Keywords:
Fovea centre detection
Fundus image
Vessel-free zone
Gaussian template

Abstract

This study developed a computerised method for fovea centre detection in fundus images. In the method, the centre of the optic disc was localised first by the template matching method, the disc–fovea axis (a line connecting the optic disc centre and the fovea) was then determined by searching the vessel-free region, and finally the fovea centre was detected by matching the fovea template around the centre of the axis. Adaptive Gaussian templates were used to localise the centres of the optic disc and fovea for images with different resolutions. The proposed method was evaluated using three publicly available databases (DIARETDB0, DIARETDB1 and MESSIDOR), which consisted of a total of 1419 fundus images with different resolutions. The proposed method obtained fovea detection accuracies of 93.1%, 92.1% and 97.8% for the DIARETDB0, DIARETDB1 and MESSIDOR databases, respectively. The overall accuracy of the proposed method was 97.0% in this study.

© 2014 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

The fovea is a part of the eye located in the centre of the macular region of the retina, and is responsible for sharp central vision. In a fundus image, the macula appears as a dark circular area, and the fovea is located at the centre of this dark region. Damage to the macular region can cause vision loss or blindness. Lesions located close to the fovea cause more serious damage, and the distance between the lesions and the fovea is clinically relevant to the extent of damage [1]. Hence, for automated analysis of fundus retinal images, detection of the fovea is essential.

Several methods for fovea detection have been proposed, most of which are performed in two stages. In the first stage, a constraining region is defined for searching the fovea. In the second stage, an algorithm for fovea detection is applied in the defined region. In addition to the two-stage approach, Niemeijer et al. [2] proposed a method to directly obtain the fovea location by a point distribution model; Singh et al. [3] proposed an appearance-based method for fovea detection; Kovacs et al. [4] presented a framework that used a combination of different methods for fovea detection.

In the two-stage approach, constraining the search area in the first stage reduces the computation time and avoids disturbance from other anatomic structures. Most of the methods for constraining the search area used information on the optic disc, namely the optic disc diameter and the optic disc centre, and assumed that the fovea is located at a distance of approximately 2.5 times the optic disc diameter from the optic disc centre. These constraining methods can be categorised into three types: (1) using a training method to find the search area [5]; (2) constraining a region with a fixed location relative to the optic disc centre [6–11]; (3) constraining a region with an adaptive location based on vascular information [12–16]. The algorithms that precisely locate the fovea in the second stage can likewise be divided into four categories: (1) template matching based methods [17,18]; (2) thresholding based methods [6,9,12,16]; (3) minimum-density based methods [7,8,10,11,14]; (4) feature based methods [5,15].

Constraining the search region with a fixed location relative to the optic disc centre may pose a problem when the fovea is not located near the centre of a fundus image (i.e., when the fovea lies far from the horizontal line passing through the centre of the optic disc). This problem can be solved by constraining the search region with an adaptive location based on vascular information. However, obtaining vascular information requires segmentation of the vessels and is time-consuming.

In this study, a two-stage method is proposed for fovea centre detection, which constrains the search region adaptively without segmentation of the vessels. In the first stage, the proposed method constrains the search region with an adaptive location based on the vessel-free region and does not need segmentation of the vessels. In the second stage, an adaptive Gaussian template is used to localise the fovea centre in the search region for images with different resolutions. Three publicly available databases with different resolutions were used to evaluate the method, and the fovea detection accuracy was assessed.

2. Materials and methods

2.1. Image databases

The proposed method was evaluated on three publicly available databases: DIARETDB0 [19], DIARETDB1 [20] and MESSIDOR [21]. The DIARETDB0 and DIARETDB1 databases consisted of 130 and 89 colour fundus images, respectively, each with a size of 1500 × 1152 pixels. The MESSIDOR database (1200 fundus images) included 588, 400 and 212 images with sizes of 1440 × 960, 2240 × 1488 and 2304 × 1536 pixels, respectively. Each pixel in a colour fundus image included three components, namely red, green and blue, and the value of each component was quantised to 256 grey levels. Fundus images with these different resolutions were used to evaluate the proposed method in this study.


Fig. 1 – Overall scheme of the proposed method for the fovea centre detection in the fundus images.

2.2. Overall scheme of the proposed method

The overall scheme of the proposed method for the fovea centre detection is illustrated in Fig. 1. In the first step, a fundus image is processed to obtain the information used in the following steps. In the second step, the optic disc centre is localised. In the third step, the disc–fovea axis is determined, based on the vessel-free region. Finally, the fovea centre is localised around the centre of the disc–fovea axis. Each step of the proposed method is described in detail as follows.
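As a high-level outline only, the four steps could be organised as in the following Python sketch; the helper names (fov_mask, fov_geometry, gradient_image, masked_gradient, disc_fovea_axis, locate_optic_disc, locate_fovea) are illustrative and correspond to the hedged per-step sketches given in the subsections below, not to the authors' actual implementation (which was written in C++):

def detect_fovea_centre(rgb):
    """Illustrative outline of the four-step pipeline (not the original implementation)."""
    green = rgb[..., 1]
    # Step 1: pre-processing (FOV mask, FOV diameter/centre, masked gradient image)
    mask, t_mask = fov_mask(green)
    d_fov, fov_centre = fov_geometry(mask)
    grad = masked_gradient(gradient_image(green), mask, rgb / 255.0)
    # Step 2: localise the optic disc centre by template matching (hypothetical helper)
    od_centre, side = locate_optic_disc(green, grad, d_fov)
    # Step 3: determine the disc-fovea axis as the vessel-free scan direction
    angle, _ = disc_fovea_axis(grad, od_centre, d_fov, side)
    # Step 4: localise the fovea centre around the centre of the axis (hypothetical helper)
    return locate_fovea(green, od_centre, angle, d_fov)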

2.3. Pre-processing procedures

To localise the fovea centre, some derivative images and information need to be obtained by processing the original fundus image. These derivative images and information, which would be used in the following procedures, are described and defined in this section.

2.3.1. Masked green component image

A colour fundus image comprises red, green and blue components. In this study, the green component of the image I(x,y) was used to detect the fovea centre, as shown in Fig. 2(a). Because only the pixels in the field of view (FOV) need to be used, the FOV mask was first obtained. The values of the pixels outside the field of view were close to zero. A threshold value, Tmask , was determined from the histogram of the green component image by searching a minimum point between 0 and 50, as shown in Fig. 2(b). A binary image was then obtained by thresholding the green component image with Tmask , as shown in Fig. 2(c). Because only the pixels in the bright region (FOV) were needed, the green component image was masked by the binary image, as shown in Fig. 2(d). The masked green component image was used for the fovea detection by the following procedures.
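As an illustration of this step, a minimal Python/NumPy sketch (function and variable names are ours, not the authors') might look as follows:

import numpy as np

def fov_mask(green):
    """Binary FOV mask from the green component (8-bit array), per Section 2.3.1."""
    hist, _ = np.histogram(green, bins=256, range=(0, 256))
    t_mask = int(np.argmin(hist[:51]))      # minimum of the histogram between grey levels 0 and 50
    mask = green > t_mask                   # pixels inside the field of view
    return mask, t_mask

# masked green component image: keep FOV pixels, set the rest to zero
# masked_green = np.where(mask, green, 0)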

2.3.2. Diameter and centre of FOV

After determining the FOV mask, the diameter (DFOV) and the centre (XFOV, YFOV) of the FOV can be further determined, as shown in Fig. 2(c). Assuming the area of the bright region is A, the diameter of the FOV is given by DFOV = 2 × sqrt(A/π), and the centre of the FOV is given by the centre of mass of the bright region. In this study, DFOV was used as a reference length for normalisation between images of different resolutions.

Fig. 2 – (a) The green component image of a colour fundus image. (b) The histogram of the green component image. (c) The mask of the field of view. (d) The masked green component image.
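A corresponding sketch (Python/NumPy, illustrative names) for the FOV diameter and centre:

import numpy as np

def fov_geometry(mask):
    """DFOV and (XFOV, YFOV) from the binary FOV mask."""
    area = mask.sum()                        # A: number of pixels in the bright region
    d_fov = 2.0 * np.sqrt(area / np.pi)      # DFOV = 2 * sqrt(A / pi)
    ys, xs = np.nonzero(mask)
    return d_fov, (xs.mean(), ys.mean())     # centre of mass of the bright region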

2.3.3. Masked gradient image

In addition to the green component image, which contains the density information, another derivative image containing the gradient information was used in this study. Fig. 3(b) shows the gradient image obtained by applying the gradient operation to the green component image in Fig. 3(a). The gradient operation is defined by Eq. (1):

Igradient(x, y) = |I(x − 1, y + 1) + I(x, y + 1) + I(x + 1, y + 1) − I(x − 1, y − 1) − I(x, y − 1) − I(x + 1, y − 1)|
                + |I(x + 1, y − 1) + I(x + 1, y) + I(x + 1, y + 1) − I(x − 1, y − 1) − I(x − 1, y) − I(x − 1, y + 1)|    (1)
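Eq. (1) amounts to a Prewitt-like sum of absolute row and column differences; a direct NumPy sketch (illustrative names, interior pixels only) is:

import numpy as np

def gradient_image(I):
    """I_gradient of Eq. (1); image indexed as I[y, x]."""
    I = I.astype(np.float64)
    G = np.zeros_like(I)
    # |sum of row y+1 minus sum of row y-1| over columns x-1..x+1
    gy = np.abs(I[2:, :-2] + I[2:, 1:-1] + I[2:, 2:]
                - I[:-2, :-2] - I[:-2, 1:-1] - I[:-2, 2:])
    # |sum of column x+1 minus sum of column x-1| over rows y-1..y+1
    gx = np.abs(I[:-2, 2:] + I[1:-1, 2:] + I[2:, 2:]
                - I[:-2, :-2] - I[1:-1, :-2] - I[2:, :-2])
    G[1:-1, 1:-1] = gy + gx
    return G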

In addition to the blood vessels, the FOV boundary, the optic disc and the exudates (bright lesions) have large gradient values, as shown in Fig. 3(b). These structures with large gradient values may pose problems for subsequent procedures and need to be eliminated. For the elimination of large gradient values corresponding to the FOV boundary, a mask can be used by applying the erosion operation to the mask shown in Fig. 2(c). The erosion operation causes the mask to shrink inward by 6 pixels, and the large gradient values corresponding to the FOV boundary fall outside the mask. Therefore, the shrunken mask can eliminate these large gradient values. For the large gradient values corresponding to the exudates and the optic disc, another mask is necessary. From our observations, the exudates and the optic disc are characterised by a yellowish colour. A mask obtained by applying the colour information was used to eliminate the large gradient values corresponding to the exudates and the optic disc. To obtain the mask, the RGB colour co-ordinates were converted to the HSI colour co-ordinates [22]. The H value represents the hue of the colour; when H = 0°, the colour is red; when H = 60°, the colour is yellow. The yellowish colour of the exudates and the optic disc should therefore correspond with an H value larger than that for the background and the blood vessels. To mask the pixels with large H values, the mean (μH) and standard deviation (σH) of the H values in a fundus image were determined, along with the threshold (μH + σH). The threshold was used to obtain a mask for large H values, as shown in Fig. 3(c). Finally, the shrunken mask for the FOV boundary and the mask for large H values were used to eliminate the corresponding pixels and obtain the masked gradient image, as shown in Fig. 3(d).

Fig. 3 – (a) The green component image of a colour fundus image with exudates. (b) The gradient image. (c) The mask for large H values. (d) The masked gradient image.
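A hedged sketch of this masking step is given below (Python, assuming NumPy and scipy.ndimage are available; the hue computation uses the HSV hue as a stand-in for the HSI hue of [22], and all names are illustrative):

import numpy as np
from scipy.ndimage import binary_erosion

def hue_degrees(rgb):
    """Approximate hue in degrees from an RGB image scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    diff = np.where(mx == mn, 1e-6, mx - mn)
    h = np.where(mx == r, (60.0 * (g - b) / diff) % 360.0, 0.0)
    h = np.where(mx == g, 60.0 * (b - r) / diff + 120.0, h)
    h = np.where(mx == b, 60.0 * (r - g) / diff + 240.0, h)
    return np.where(mx == mn, 0.0, h)

def masked_gradient(grad, fov_mask, rgb):
    """Suppress gradients at the FOV boundary and in yellowish (exudate / optic disc) regions."""
    inner = binary_erosion(fov_mask, iterations=6)      # shrink the FOV mask inward by ~6 pixels
    h = hue_degrees(rgb)
    mu_h, sigma_h = h[fov_mask].mean(), h[fov_mask].std()
    yellowish = h > (mu_h + sigma_h)                     # mask for large H values
    return np.where(inner & ~yellowish, grad, 0.0)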

2.4. Detection algorithm

2.4.1. Templates for the fovea and optic disc

The method proposed in this study was based on the template matching method. The templates used for detecting the fovea and the optic disc centre were constructed by Gaussian functions. A 2-D Gaussian function is expressed in Eq. (2):

f(x, y) = exp(−((x − x0)²/2σx² + (y − y0)²/2σy²))    (2)

where (x0, y0) is the centre and σx and σy are the standard deviations in the x and y directions. The standard deviations, σx and σy, control the width of the bell shape in the x and y directions and are the parameters to be deduced for the templates. For a 1-D Gaussian distribution function, the relationship [23] between the full width at half maximum (FWHM) and the standard deviation (σ) is described by Eq. (3):

FWHM = 2 × sqrt(2 ln 2) × σ ≈ 2.35482 σ    (3)

This relationship was used to determine the standard deviations of the Gaussian functions for the templates used in this study. Fig. 4(a) demonstrates a profile intersecting the fovea centre; the FWHM for the fovea can be obtained by analysing this profile, and the FWHM for the optic disc can be obtained similarly. To overcome drawbacks associated with images of different resolutions, the lengths used in this study were normalised to the FOV diameter (DFOV). Following the analysis of 100 fundus images with different resolutions, the ratio between DFOV and the FWHM for the fovea (FWHMfovea) was 14.4 ± 2.2 (FWHMfovea = DFOV/14.4), and the ratio between DFOV and the FWHM for the optic disc (FWHMOD) was 13.2 ± 2.8 (FWHMOD = DFOV/13.2). To further determine the standard deviations obtained by Eq. (3) for the fovea and the optic disc, the following equations were applied:

σfovea ≈ DFOV/34    (4)

σOD ≈ DFOV/31    (5)
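As a quick check of Eqs. (4) and (5), combining Eq. (3) with the measured ratios gives σfovea ≈ FWHMfovea/2.35482 = (DFOV/14.4)/2.35482 ≈ DFOV/33.9 ≈ DFOV/34, and σOD ≈ FWHMOD/2.35482 = (DFOV/13.2)/2.35482 ≈ DFOV/31.1 ≈ DFOV/31.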


Fig. 4 – (a) FWHM for the fovea is determined from the profile intersecting the fovea centre. (b) The templates for the fovea and optic disc.

Using the results described above, the fovea template was constructed by a Gaussian function with a standard deviation of DFOV/34, as defined by Eq. (6):

Tfovea(x, y) = 255 − 255 × exp(−(x²/2σfovea² + y²/2σfovea²))    (6)

Subsequently, the optic disc template was constructed by a Gaussian function with a standard deviation of DFOV/31, as defined by Eq. (7):

TOD(x, y) = 255 × exp(−(x²/2σOD² + y²/2σOD²))    (7)

In Eqs. (6) and (7), the constant value, 255, is the dynamic range of a green component image. From our observations, a Gaussian template sized 2FWHM × 2FWHM entirely covers the optic disc or the macular region. The optic disc template was therefore constructed with a size of 2FWHMOD × 2FWHMOD to cover the optic disc. However, the fovea is located at the centre of the dark region in the macula. To localise the fovea centre precisely, instead of using a template with a size of 2FWHMfovea × 2FWHMfovea to cover the macular region, the fovea template was constructed with a size of FWHMfovea × FWHMfovea to cover the dense region around the fovea. Examples of the templates for the fovea and optic disc are demonstrated in Fig. 4(b).

2.4.2. Localisation of the optic disc centre

To define the region for seeking the fovea centre, the centre of the optic disc was initially determined by matching the optic disc template in the two regions shown in Fig. 5(a). The correlation value at each location for matching the optic disc is defined by Eq. (8):

COD(x, y) = [1/(2FWHMOD)²] Σi Σj (I(x + i, y + j) − Ī)(TOD(i, j) − T̄OD) / (σI σTOD)    (8)

where the sums run over i, j = −FWHMOD, …, FWHMOD, and

Ī = [1/(2FWHMOD)²] Σi Σj I(x + i, y + j),
σI = sqrt{ [1/(2FWHMOD)²] Σi Σj (I(x + i, y + j) − Ī)² },
T̄OD = [1/(2FWHMOD)²] Σi Σj TOD(i, j),
σTOD = sqrt{ [1/(2FWHMOD)²] Σi Σj (TOD(i, j) − T̄OD)² }.

Among the two search regions, the location at which the correlation value COD is maximum was used as the centre of the optic disc, and the side (left or right) on which the optic disc is located in the fundus image was determined at the same time, as shown in Fig. 5(b).
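The template construction of Eqs. (6) and (7) and the normalised correlation of Eqs. (8) and (9) can be sketched as follows (Python/NumPy; the function names and the single-location formulation are illustrative, not the authors' code):

import numpy as np

def gaussian_template(fwhm, size, bright=True):
    """size x size Gaussian template with sigma = FWHM / 2.35482 (Eqs. (6) and (7))."""
    sigma = fwhm / 2.35482
    c = (size - 1) / 2.0
    y, x = np.indices((size, size), dtype=np.float64)
    g = np.exp(-(((x - c) ** 2) + ((y - c) ** 2)) / (2.0 * sigma ** 2))
    return 255.0 * g if bright else 255.0 - 255.0 * g   # bright blob (optic disc) or dark blob (fovea)

def correlation(patch, template):
    """Normalised cross-correlation of an image patch with a template (Eqs. (8) and (9))."""
    a = patch.astype(np.float64) - patch.mean()
    b = template - template.mean()
    denom = a.std() * b.std() * a.size
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Usage sketch (fwhm_od and fwhm_fovea derived from DFOV as in Section 2.4.1):
# t_od    = gaussian_template(fwhm_od,    size=2 * fwhm_od)                 # covers 2FWHMOD x 2FWHMOD
# t_fovea = gaussian_template(fwhm_fovea, size=fwhm_fovea, bright=False)    # covers FWHMfovea x FWHMfovea
# The optic disc centre is the pixel whose surrounding patch, within the two
# search regions, maximises correlation(patch, t_od).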

Fig. 5 – (a) The two regions for locating the centre of the optic disc. (b) The location with the maximum correlation value is used as the centre of the optic disc.

2.4.3. Determination of the disc–fovea axis

Following the localisation of the optic disc centre, the direction of the fovea relative to the optic disc was determined in order to constrain a precise area in which the fovea lies. Li and Chutatape [12], Tobin et al. [13] and Fleming et al. [14] have proposed methods that determine this direction by indirectly modelling the blood vessels. Our proposed method detects the direction directly, without segmentation of the blood vessels. As shown in Fig. 6(a), the disc–fovea axis, a line connecting the optic disc and the fovea, lies in a region without blood vessels. This feature was used to determine the direction of the fovea relative to the optic disc. The direction was determined by using lines originating from the optic disc centre and extending to the FOV boundary to scan the masked gradient image in different directions; the scans were performed from −20° to 20°, as shown in Fig. 6(b). The mean gradient value Gm for each line was calculated by averaging the pixels on that line in the masked gradient image. Fig. 6(c) shows the distribution of Gm over the different directions. A scanning line passing through vessels yields a large Gm value, whereas a line lying in the vessel-free region yields a small one. Hence, the line with the minimum Gm was used as the disc–fovea axis in this study, as shown in Fig. 6(d).
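A hedged sketch of this directional scan (Python/NumPy; the scan length, the number of samples per line and the sign convention for the fovea side are our own simplifying assumptions):

import numpy as np

def disc_fovea_axis(masked_grad, od_centre, d_fov, side_sign, angles_deg=range(-20, 21)):
    """Return the scan angle (degrees) with the minimum mean gradient Gm."""
    x0, y0 = od_centre
    h, w = masked_grad.shape
    best_angle, best_gm = None, np.inf
    t = np.linspace(0.0, 0.45 * d_fov, 200)          # sample points along each scan line
    for a in angles_deg:
        theta = np.deg2rad(a)
        dx = side_sign * np.cos(theta)               # side_sign = +1 if the fovea lies to the right of the disc
        dy = np.sin(theta)
        xs = np.clip(np.round(x0 + t * dx).astype(int), 0, w - 1)
        ys = np.clip(np.round(y0 + t * dy).astype(int), 0, h - 1)
        gm = masked_grad[ys, xs].mean()              # mean gradient Gm along the line
        if gm < best_gm:
            best_gm, best_angle = gm, a
    return best_angle, best_gm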

Fig. 6 – (a) The disc–fovea axis lies in the vessel-free region in a fundus image. (b) The line scans are performed from −20° to 20° to determine the disc–fovea axis in the masked gradient image. (c) The distribution of the mean gradient value (Gm) in the different directions. (d) A line with a minimum Gm is used as the disc–fovea axis.

2.4.4. Localisation of the fovea centre

The fovea centre was determined in the last step. Fig. 7(a) demonstrates that the centre of the disc–fovea axis is close to the fovea centre. For precise localisation of the fovea centre, the template matching method was further applied in a small region with a size of 4FWHMfovea × 4FWHMfovea around the centre of the axis. The correlation value between the fovea template and the subimage with a size of FWHMfovea × FWHMfovea at each location is defined by Eq. (9):

Cfovea(x, y) = [1/FWHMfovea²] Σi Σj (I(x + i, y + j) − Ī)(Tfovea(i, j) − T̄fovea) / (σI σTfovea)    (9)

where the sums run over i, j = −FWHMfovea/2, …, FWHMfovea/2, and

Ī = [1/FWHMfovea²] Σi Σj I(x + i, y + j),
σI = sqrt{ [1/FWHMfovea²] Σi Σj (I(x + i, y + j) − Ī)² },
T̄fovea = [1/FWHMfovea²] Σi Σj Tfovea(i, j),
σTfovea = sqrt{ [1/FWHMfovea²] Σi Σj (Tfovea(i, j) − T̄fovea)² }.

In the search region, the location corresponding to the maximum correlation value was considered as the fovea centre in this study, as shown in Fig. 7(b).

Fig. 7 – (a) The search region for the fovea centre detection. (b) The location corresponding to the maximum correlation value is considered as the fovea centre.

2.5. Evaluation method

A specialist annotated the location of the fovea centre for each fundus image used in this study by using software developed in-house, as shown in Fig. 8(a). For images without a visible fovea, the expert estimated the location. These annotated locations were then used as the gold standard to evaluate the proposed method. In the present study, we measured the accuracy of the proposed method based on the target, which represents the relative location between the detected locations and the gold standards, as shown in Fig. 8(b). The centre of the target is the gold-standard location, and the errors, determined by the distance between the detected locations and the gold standards, are distributed around the centre. In the current study, an error smaller than FWHMfovea/2 (≈ DFOV/28) was considered a successful fovea detection; otherwise, the detection was deemed to have failed. Thus, a successfully detected location would fall inside the dense area of the macula and within a circle with a radius of FWHMfovea/2, as shown in Fig. 8.
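The success criterion itself is straightforward; a minimal sketch (Python/NumPy, illustrative names) is:

import numpy as np

def is_successful_detection(detected, gold, fwhm_fovea):
    """A detection succeeds if its error is smaller than FWHMfovea / 2 (approximately DFOV / 28)."""
    error = np.hypot(detected[0] - gold[0], detected[1] - gold[1])
    return error < fwhm_fovea / 2.0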

Fig. 8 – (a) A circle with a radius of FWHMfovea/2 at the gold standard location is used to measure the accuracy of the proposed method. (b) The target represents the relative location between the detected locations and the gold standards.

3. Results

The proposed method was evaluated on fundus images with different resolutions from three publicly available databases: MESSIDOR, DIARETDB0 and DIARETDB1. The MESSIDOR database consisted of 1200 fundus images with three different resolutions (1440 × 960, 2240 × 1488 and 2304 × 1536 pixels); the images with different resolutions in the MESSIDOR database were evaluated separately. Fig. 9(a) and (b) present the distributions of the errors for the DIARETDB0 and DIARETDB1 databases, respectively, whereas Fig. 10(a)–(c) present the error distributions for the MESSIDOR database with the three image sizes. A circle with a radius of FWHMfovea/2 for each dataset was used to measure the detection accuracy, with a detection considered a success if the detection error fell inside the circle (error < FWHMfovea/2). Most of the cases that were analysed fell inside the circles and close to the centres of the targets, as shown in Figs. 9 and 10. This means that the locations of the fovea centre, as detected by the proposed method, were close to the corresponding reference positions annotated by the expert. Table 1 summarises the detection results for the proposed method when applied to the five datasets. The proposed method obtained fovea detection accuracies of 93.1% (121/130), 92.1% (82/89) and 97.8% (1174/1200) for the DIARETDB0, DIARETDB1 and MESSIDOR databases, respectively. Tables 2 and 3 show the fovea detection accuracies in terms of the disease stages provided by the MESSIDOR database. The overall accuracy of the proposed method for fovea centre detection was 97.0% (1377/1419) in this study.

4. Discussion

The present study developed a computerised method to detect the fovea centre in fundus images. In the method, the centre of the optic disc was initially localised by the template matching method, the disc–fovea axis was then determined by searching the vessel-free region, and finally the fovea centre was detected by matching the fovea template in an area around the centre of the disc–fovea axis. To apply the method to images with different resolutions, adaptive Gaussian templates were used to localise the centres of the optic disc and fovea. The proposed method was evaluated using publicly available databases with different resolutions, and the FWHM for the fovea was used to define the area within which a successful detection should fall. The fovea detection accuracies were 93.1%, 92.1% and 97.8% for the DIARETDB0, DIARETDB1 and MESSIDOR databases, respectively. The robustness of the method is illustrated in Tables 2 and 3. The overall accuracy of the proposed method was 97.0% in this study. The software was written in C++ and tested on an Intel(R) Core(TM) i7-3610QM 2.30 GHz PC. Average processing times were 2.6, 6.1, 6.9 and 5.3 s for images with sizes of 1440 × 960, 2240 × 1488, 2304 × 1536 and 1500 × 1152 pixels, respectively.

Most of the proposed methods for fovea detection are performed in two stages: in the first stage, the region for searching the fovea is defined, and in the second stage, the algorithm for fovea detection is applied to the defined region. The novelties of our proposed method are discussed according to these stages as follows.

Fig. 9 – Error distributions of the (a) DIARETDB0 and the (b) DIARETDB1 databases.


Fig. 10 – Error distributions for the fundus images from the MESSIDOR database, with sizes of (a) 1440 × 960, (b) 2240 × 1488 and (c) 2304 × 1536 pixels.

Table 1 – The detection results of the proposed method for the DIARETDB0, DIARETDB1 and MESSIDOR databases.

Database     Image size (pixels)    FWHMfovea/2 (pixels)    Total number of images    Number of successful detections
DIARETDB0    1500 × 1152            50                      130                       121
DIARETDB1    1500 × 1152            50                      89                        82
MESSIDOR     1440 × 960             32.5                    588                       578
MESSIDOR     2240 × 1488            49                      400                       387
MESSIDOR     2304 × 1536            52                      212                       209

Table 2 – The fovea detection accuracies in terms of retinopathy grade for the MESSIDOR database.

Retinopathy grade    Total number of images    Number of successful detections    Detection accuracy
0                    546                       543                                99.4%
1                    153                       153                                100.0%
2                    247                       244                                98.8%
3                    254                       234                                92.1%

0 (normal): (μA = 0) AND (H = 0). 1: (0 < μA ≤ 5) AND (H = 0). 2: ((5 < μA < 15) OR (0 < H < 5)) AND (NV = 0). 3: (μA ≥ 15) OR (H ≥ 5) OR (NV = 1). μA: number of microaneurysms. H: number of haemorrhages. NV = 1: neovascularisation; NV = 0: no neovascularisation.


Table 3 – The fovea detection accuracies in terms of risk of macular oedema for the MESSIDOR database.

Risk of macular oedema    Total number of images    Number of successful detections    Detection accuracy
0                         974                       964                                99.0%
1                         75                        74                                 98.6%
2                         151                       136                                90.0%

Hard exudates have been used to grade the risk of macular oedema. 0 (no risk): no visible hard exudates. 1: shortest distance between macula and hard exudates > one papilla diameter. 2: shortest distance between macula and hard exudates ≤ one papilla diameter.

For constraining the search region, some of the proposed methods assumed that the fovea and the optic disc centre lie at the same vertical level and constrained the search region with a fixed location relative to the optic disc centre. This may cause a failure in fovea detection when the fovea lies far from the horizontal line through the optic disc centre, as shown in Fig. 6(a). This problem can be overcome by determining the direction of the fovea relative to the optic disc centre. Li and Chutatape [12], Tobin et al. [13] and Fleming et al. [14] have proposed methods that determine the direction by constructing a line connecting the optic disc centre and the fovea (the disc–fovea axis). In those methods, segmentation of the vessels needs to be performed first, and the disc–fovea axis is determined by modelling the blood vessels. In the present study, we proposed a simple method to determine the disc–fovea axis without segmentation of the vessels. The disc–fovea axis was determined directly, based on the feature that the axis lies in the vessel-free region of a fundus image. Determining the axis directly reduces the computational complexity because no vessel segmentation is required.

Subsequently, the template matching method was used to detect the fovea around the centre of the axis. Sinthanayothin et al. [17] proposed a template matching method for fovea detection using a Gaussian template with a standard deviation of 22 pixels. A Gaussian template with a fixed standard deviation poses a problem when applied to images with different resolutions, such as those used in this study. To overcome this problem, we proposed adaptive Gaussian templates whose standard deviations vary with the FOV size. Regardless of the resolution of the fundus image, the standard deviation of the Gaussian function for the fovea was determined to be 1/34th of the FOV diameter using Eq. (4).

The measurement of accuracy is an important aspect of evaluating fovea detection methods, a topic that has received relatively little attention in the relevant literature. In most studies, the accuracies were obtained by experts who visually inspected the detection results. Niemeijer et al. [2] proposed using a fixed distance for measuring the fovea detection accuracy, wherein a detection with an error falling within 50 pixels for a FOV with a diameter of 530 pixels was considered successful. A similar approach was used by Zhang et al. [10] and Welfer et al. [11]; the criterion used by Zhang et al. was that the error should fall within 30 pixels for images with a size of 968 × 644 pixels, whereas Welfer et al. used the criterion that the error should fall within 34 pixels for images with a size of 640 × 480 pixels. In the present study, we introduced the idea of using FWHM to measure the fovea detection accuracy. The relationship between the FWHM for the fovea and the FOV diameter (FWHMfovea ≈ DFOV/14) was obtained by analysing the profiles intersecting the fovea centre. A circle with a radius of FWHMfovea/2 (DFOV/28) was used to measure the fovea detection accuracy. Based on the criterion used in this study, the errors should fall within 19, 23 and 17 pixels for the methods proposed by Niemeijer et al. [2], Zhang et al. [10] and Welfer et al. [11], respectively. Compared with the other criteria described above, the criterion used in our study is stricter than those used by the other groups. Additionally, our criterion overcomes the problem associated with evaluating images of different resolutions.

Table 4 summarises the fovea detection accuracy for the proposed and other methods available in the literature.

Table 4 – Comparison of the fovea detection accuracy for the proposed and other methods available in the literature.

Methods                       Number of tested images    Detection accuracy
Sinthanayothin et al. [17]    112                        80.4%
Li and Chutatape [12]         89                         100%
Niemeijer et al. [2]          500                        94.4%
Sagar et al. [7]              100                        96%
Siddalingaswamy et al. [8]    50                         94%
Tobin et al. [13]             345                        92.5%
Fleming et al. [14]           1056                       96.5%
Singh et al. [3]              502                        96%
Sekhar et al. [9]             34                         100%
Niemeijer et al. [5]          500                        96.8%
Kovacs et al. [4]             259                        96.2%
Zhang et al. [10]             107                        98.1%
Ying et al. [15]              37, 61                     100%, 93.4%
Welfer et al. [11]            37, 89                     100%, 92.13%
Samanta et al. [16]           20, 35                     100%, 97%
Proposed method               1419                       97%

Comparing the existing two-stage approaches for fovea detection on the same dataset, the proposed method shows a detection accuracy (92.1%) comparable to the result (92.13%) reported by Welfer et al. [11] on the DIARETDB1 database.

We identified three main reasons for failure in successfully detecting the fovea in this study: (1) failure in determining the optic disc centre [Fig. 11(a)], (2) failure in determining the disc–fovea axis [Fig. 11(b)] and (3) an invisible fovea together with dark lesions such as haemorrhages or dark structures in the fovea search region [Fig. 11(c)]. The first two reasons affect the first stage of constraining the fovea search region, and the third reason affects the second stage of precisely searching for the fovea centre. The method can be improved by reflecting on several aspects of the process. In this study, the optic disc centre detection accuracy was 95.6% (1357/1419).


Fig. 11 – (a) Failure in determining the optic disc centre. (b) Failure in determining the disc–fovea axis. (c) Failure in determining the fovea centre caused by invisible fovea, together with dark lesions in the fovea search region.

Among the cases in which the optic disc was not detected correctly, 11.3% (7/62) further resulted in failure of the fovea detection. For the failures in optic disc centre detection, a more robust method needs to be developed. For the failures in disc–fovea axis determination, the non-vessel structures with high gradient values could be further eliminated in the pre-processing procedure. For rejecting detections that do not belong to a fovea, the features of dark lesions and structures should be further analysed.

5. Conclusion

The present study developed a computerised method for fovea centre detection in fundus images. The results indicate that the proposed method yields high detection accuracy and could be used as a module for grading [24] the severity of retinal abnormality in fundus images.

Acknowledgments The authors thank the DIARETDB0, DIARETDB1 and MESSIDOR project teams for making available their image databases on the internet.

References

[1] M.D. Davis, S.B. Bressler, L.P. Aiello, N.M. Bressler, D.J. Browning, C.J. Flaxel, D.S. Fong, W.J. Foster, A.R. Glassman, M.E. Hartnett, C. Kollman, H.K. Li, H. Qin, I.U. Scott, Comparison of time-domain OCT and fundus photographic assessments of retinal thickening in eyes with diabetic macular edema, Investig. Ophthalmol. Visual Sci. 49 (2008) 1745–1752.
[2] M. Niemeijer, M.D. Abramoff, B. van Ginneken, Segmentation of the optic disc, macula and vascular arch in fundus photographs, IEEE Trans. Med. Imaging 26 (2007) 116–127.
[3] J. Singh, G.D. Joshi, J. Sivaswamy, Appearance based object detection in colour retinal images, in: IEEE International Conference on Image Processing, San Diego, USA, 2008, pp. 1432–1435.
[4] L. Kovacs, R.J. Qureshi, B. Nagy, B. Harangi, A. Hajdu, Graph based detection of optic disc and fovea in retinal images, in: 4th International Workshop on Soft Computing Applications, Arad, Romania, 2010, pp. 143–148.
[5] M. Niemeijer, M.D. Abramoff, B. van Ginneken, Fast detection of the optic disc and fovea in color fundus photographs, Med. Image Anal. 13 (2009) 859–870.
[6] H. Narasimha-Iyer, A. Can, B. Roysam, C.V. Stewart, H.L. Tanenbaum, A. Majerovics, H. Singh, Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy, IEEE Trans. Biomed. Eng. 53 (2006) 1084–1098.
[7] A.V. Sagar, S. Balasubramanian, V. Chandrasekaran, Automatic detection of anatomical structures in digital fundus retinal images, in: IAPR Conference on Machine Vision Applications, Tokyo, Japan, 2007, pp. 483–486.
[8] P.C. Siddalingaswamy, G.K. Prabhu, Automated detection of anatomical structures in retinal images, in: International Conference on Computational Intelligence and Multimedia Applications, 2007, pp. 164–168.
[9] S. Sekhar, W. Al-Nuaimy, A. Nandi, Automated localisation of optic disk and fovea in retinal fundus images, in: 16th European Signal Processing Conference (EUSIPCO-2008), Lausanne, Switzerland, 2008.
[10] B. Zhang, F. Karray, Optic disc and fovea detection via multi-scale matched filters and a vessels' directional matched filter, in: International Conference on Autonomous and Intelligent Systems (AIS), 2010, pp. 1–5.
[11] D. Welfer, J. Scharcanski, D.R. Marinho, Fovea center detection based on the retina anatomy and mathematical morphology, Comput. Methods Programs Biomed. 104 (2011) 397–409.
[12] H. Li, O. Chutatape, Automated feature extraction in color retinal images by a model based approach, IEEE Trans. Biomed. Eng. 51 (2004) 246–254.
[13] K. Tobin, E. Chaum, V. Govindasamy, T. Karnowski, Detection of anatomic structures in human retinal imagery, IEEE Trans. Med. Imaging 26 (2007) 1729–1739.
[14] A.D. Fleming, K.A. Goatman, S. Philip, J.A. Olson, P.F. Sharp, Automatic detection of retinal anatomy to assist diabetic retinopathy screening, Phys. Med. Biol. 52 (2007) 331–345.
[15] H. Ying, J.C. Liu, Automated localization of macula-fovea area on retina images using blood vessel network topology, in: IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2010, pp. 650–653.
[16] S. Samanta, S.K. Saha, B. Chanda, A simple and fast algorithm to detect the fovea region in fundus retinal image, in: Second International Conference on Emerging Applications of Information Technology (EAIT), 2011, pp. 206–209.
[17] C. Sinthanayothin, J. Boyce, H. Cook, T. Williamson, Automated localisation of the optic disc, fovea and retinal blood vessels from digital colour fundus images, Br. J. Ophthalmol. 83 (1999) 902–910.
[18] M.J. Cree, J.A. Olson, K.C. McHardy, P.F. Sharp, J.V. Forrester, A fully automated comparative microaneurysm digital detection system, Eye 11 (1997) 622–628.
[19] DIARETDB0 database, available at: http://www2.it.lut.fi/project/imageret/diaretdb0/
[20] DIARETDB1 database, available at: http://www2.it.lut.fi/project/imageret/diaretdb1/index.html
[21] MESSIDOR database, available at: http://messidor.crihan.fr/index-en.php
[22] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Prentice Hall, Upper Saddle River, NJ, 2002, pp. 295–302.
[23] E.W. Weisstein, "Gaussian Function", From MathWorld – A Wolfram Web Resource, http://mathworld.wolfram.com/GaussianFunction.html
[24] M.D. Saleh, C. Eswaran, An automated decision-support system for non-proliferative diabetic retinopathy disease based on MAs and HAs detection, Comput. Methods Programs Biomed. 108 (2012) 186–196.
