Semi-automatic geographic atrophy segmentation for SD-OCT images

Qiang Chen,1,2,* Luis de Sisternes,2 Theodore Leng,3 Luoluo Zheng,3 Lauren Kutzscher,3 and Daniel L. Rubin2,4

1 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2 Department of Radiology and Medicine (Biomedical Informatics Research), Stanford University, Stanford, CA 94305, USA
3 Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA 94303, USA
4 [email protected]
* [email protected]

Abstract: Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semiautomated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a subvolume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, our method and outlines drawn in the fundus auto-fluorescence (FAF) images, and the commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88% and 59.83%, respectively.

©2013 Optical Society of America

OCIS codes: (100.0100) Image processing; (110.4500) Optical coherence tomography; (170.4470) Ophthalmology.

References and links

1. J. J. Wang, E. Rochtchina, A. J. Lee, E. M. Chia, W. Smith, R. G. Cumming, and P. Mitchell, “Ten-year incidence and progression of age-related maculopathy: the Blue Mountains Eye Study,” Ophthalmology 114(1), 92–98 (2007).
2. R. Klein, B. E. Klein, M. D. Knudtson, S. M. Meuer, M. Swift, and R. E. Gangnon, “Fifteen-year cumulative incidence of age-related macular degeneration: the Beaver Dam Eye Study,” Ophthalmology 114(2), 253–262 (2007).
3. H. Buch, N. V. Nielsen, T. Vinding, G. B. Jensen, J. U. Prause, and M. la Cour, “14-year incidence, progression, and visual morbidity of age-related maculopathy: the Copenhagen City Eye Study,” Ophthalmology 112(5), 787–798 (2005).
4. P. Maguire and A. K. Vine, “Geographic atrophy of the retinal pigment epithelium,” Am. J. Ophthalmol. 102(5), 621–625 (1986).
5. H. Schatz and H. R. McDonald, “Atrophic macular degeneration. Rate of spread of geographic atrophy and visual loss,” Ophthalmology 96(10), 1541–1551 (1989).
6. J. S. Sunness, “The natural history of geographic atrophy, the advanced atrophic form of age-related macular degeneration,” Mol. Vis. 5, 25 (1999).

#195911 - $15.00 USD

Received 16 Aug 2013; revised 17 Oct 2013; accepted 19 Oct 2013; published 1 Nov 2013

(C) 2013 OSA 1 December 2013 | Vol. 4, No. 12 | DOI:10.1364/BOE.4.002729 | BIOMEDICAL OPTICS EXPRESS 2729

7. R. G. Sayegh, C. Simader, U. Scheschy, A. Montuoro, C. Kiss, S. Sacu, D. P. Kreil, C. Prünte, and U. Schmidt-Erfurth, “A systematic comparison of spectral-domain optical coherence tomography and fundus autofluorescence in patients with geographic atrophy,” Ophthalmology 118(9), 1844–1851 (2011).
8. R. Klein, B. E. Klein, and T. Franke, “The relationship of cardiovascular disease and its risk factors to age-related maculopathy. The Beaver Dam Eye Study,” Ophthalmology 100(3), 406–414 (1993).
9. J. R. Vingerling, I. Dielemans, A. Hofman, D. E. Grobbee, M. Hijmering, C. F. Kramer, and P. T. de Jong, “The prevalence of age-related maculopathy in the Rotterdam Study,” Ophthalmology 102(2), 205–210 (1995).
10. H. Hirvelä, H. Luukinen, E. Läärä, and L. Laatikainen, “Risk factors of age-related maculopathy in a population 70 years of age or older,” Ophthalmology 103(6), 871–877 (1996).
11. A. Bindewald, A. C. Bird, S. S. Dandekar, J. Dolar-Szczasny, J. Dreyhaupt, F. W. Fitzke, W. Einbock, F. G. Holz, J. J. Jorzik, C. Keilhauer, N. Lois, J. Mlynski, D. Pauleikhoff, G. Staurenghi, and S. Wolf, “Classification of fundus autofluorescence patterns in early age-related macular disease,” Invest. Ophthalmol. Vis. Sci. 46(9), 3309–3314 (2005).
12. S. Schmitz-Valckenberg, C. K. Brinkmann, F. Alten, P. Herrmann, N. K. Stratmann, A. P. Göbel, M. Fleckenstein, M. Diller, G. J. Jaffe, and F. G. Holz, “Semiautomated image processing method for identification and quantification of geographic atrophy in age-related macular degeneration,” Invest. Ophthalmol. Vis. Sci. 52(10), 7640–7646 (2011).
13. A. Deckert, S. Schmitz-Valckenberg, J. Jorzik, A. Bindewald, F. G. Holz, and U. Mansmann, “Automated analysis of digital fundus autofluorescence images of geographic atrophy in advanced age-related macular degeneration using confocal scanning laser ophthalmoscopy (cSLO),” BMC Ophthalmol. 5(1), 8 (2005).
14. N. Lee, A. F. Laine, I. Barbazetto, M. Busuoic, and R. Smith, “Level set segmentation of geographic atrophy in macular autofluorescence images,” Invest. Ophthalmol. Vis. Sci. 47 (2006).
15. N. Lee, A. F. Laine, and R. T. Smith, “A hybrid segmentation approach for geographic atrophy in fundus autofluorescence images for diagnosis of age-related macular degeneration,” in Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (2007), pp. 4965–4968.
16. N. Lee, R. T. Smith, and A. F. Laine, “Interactive segmentation for geographic atrophy in retinal fundus images,” in The 42nd Asilomar Conference on Signals, Systems and Computers (2008), pp. 655–658.
17. U. E. Wolf-Schnurrbusch, V. Enzmann, C. K. Brinkmann, and S. Wolf, “Morphologic changes in patients with geographic atrophy assessed with a novel spectral OCT-SLO combination,” Invest. Ophthalmol. Vis. Sci. 49(7), 3095–3099 (2008).
18. S. Bearelly, F. Y. Chau, A. Koreishi, S. S. Stinnett, J. A. Izatt, and C. A. Toth, “Spectral domain optical coherence tomography imaging of geographic atrophy margins,” Ophthalmology 116(9), 1762–1769 (2009).
19. M. Brar, I. Kozak, L. Cheng, D. U. Bartsch, R. Yuson, N. Nigam, S. F. Oster, F. Mojana, and W. R. Freeman, “Correlation between spectral-domain optical coherence tomography and fundus autofluorescence at the margins of geographic atrophy,” Am. J. Ophthalmol. 148(3), 439–444, 444.e1 (2009).
20. S. Schmitz-Valckenberg, M. Fleckenstein, H. M. Helb, P. Charbel Issa, H. P. Scholl, and F. G. Holz, “In vivo imaging of foveal sparing in geographic atrophy secondary to age-related macular degeneration,” Invest. Ophthalmol. Vis. Sci. 50(8), 3915–3921 (2009).
21. W. R. Green and S. N. Key 3rd, “Senile macular degeneration: a histopathologic study,” Trans. Am. Ophthalmol. Soc. 75, 180–254 (1977).
22. R. G. Sayegh, C. Simader, U. Scheschy, A. Montuoro, C. Kiss, S. Sacu, D. P. Kreil, C. Prünte, and U. Schmidt-Erfurth, “A systematic comparison of spectral-domain optical coherence tomography and fundus autofluorescence in patients with geographic atrophy,” Ophthalmology 118(9), 1844–1851 (2011).
23. S. J. Chiu, J. A. Izatt, R. V. O’Connell, K. P. Winter, C. A. Toth, and S. Farsiu, “Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images,” Invest. Ophthalmol. Vis. Sci. 53(1), 53–61 (2012).
24. Z. Yehoshua, C. A. A. Garcia Filho, F. M. Penha, G. Gregori, P. F. Stetson, W. J. Feuer, and P. J. Rosenfeld, “Comparison of geographic atrophy measurements from the OCT fundus image and the sub-RPE slab image,” Ophthalmic Surg. Lasers Imaging Retina 44(2), 127–132 (2013).
25. C. Schütze, C. Ahlers, S. Sacu, G. Mylonas, R. Sayegh, I. Golbaz, G. Matt, G. Stock, and U. Schmidt-Erfurth, “Performance of OCT segmentation procedures to assess morphology and extension in geographic atrophy,” Acta Ophthalmol. (Copenh.) 89(3), 235–240 (2011).
26. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18(18), 19413–19428 (2010).
27. S. Lu, C. Y. L. Cheung, J. Liu, J. H. Lim, C. K. S. Leung, and T. Y. Wong, “Automated layer segmentation of optical coherence tomography images,” IEEE Trans. Biomed. Eng. 57(10), 2605–2608 (2010).
28. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009).
29. Q. Chen, T. Leng, L. L. Zheng, L. Kutzscher, J. Ma, L. de Sisternes, and D. L. Rubin, “Automated drusen segmentation and quantification in SD-OCT images,” Med. Image Anal. 17(8), 1058–1072 (2013).
30. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proceedings of the International Conference on Computer Vision (ICCV) (1998), pp. 839–846.


31. S. Jiao, R. Knighton, X. Huang, G. Gregori, and C. Puliafito, “Simultaneous acquisition of sectional and fundus ophthalmic images with spectral-domain optical coherence tomography,” Opt. Express 13(2), 444–452 (2005).
32. V. Caselles, F. Catte, T. Coll, and F. Dibos, “A geometric model for active contours in image processing,” Numer. Math. 66, 1–31 (1993).
33. R. Malladi, J. A. Sethian, and B. C. Vemuri, “Shape modeling with front propagation: a level set approach,” IEEE Trans. Pattern Anal. Mach. Intell. 17(2), 158–175 (1995).
34. C. Li, C. Xu, C. Gui, and M. D. Fox, “Level set evolution without re-initialization: a new variational formulation,” in Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2005), 1, pp. 430–436.
35. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979).
36. D. Adalsteinsson and J. A. Sethian, “A fast level set method for propagating interfaces,” J. Comput. Phys. 118(2), 269–277 (1995).
37. S. Farsiu, S. J. Chiu, R. V. O’Connell, F. A. Folgar, E. Yuan, J. A. Izatt, and C. A. Toth, “Quantitative classification of eyes with and without intermediate age-related macular degeneration using optical coherence tomography,” Ophthalmology, in press.
38. R. P. Nunes, G. Gregori, Z. Yehoshua, P. F. Stetson, W. Feuer, A. A. Moshfeghi, and P. J. Rosenfeld, “Predicting the progression of geographic atrophy in age-related macular degeneration with SD-OCT en face imaging of the outer retina,” Ophthalmic Surg. Lasers Imaging Retina 44(4), 344–359 (2013).

1. Introduction

Age-related macular degeneration (AMD) is a leading cause of irreversible vision loss among the elderly. The advanced form of AMD is associated with severe vision loss, and it is characterized by the development of two major abnormalities: (1) macular neovascularization and/or (2) geographic atrophy (GA) [1–3]. The natural course of GA, unlike the neovascular form of AMD, progresses slowly and usually involves visual loss, primarily due to RPE degeneration and neurosensory retinal atrophy resulting in an absolute or relative scotoma affecting the central visual field [4–7]. Identification, measurement, and evaluation of the extent of GA over time through imaging are therefore important, as progression of GA is often related to visual loss. In the United States, GA is present in 3.5% of people aged ≥75 years, and its prevalence rises to ≥22% in those >90 years old [8–10].

Several semiautomatic and automatic GA segmentation methods [11–16] have been proposed for fundus autofluorescence (FAF) images. A region-growing method was proposed by Deckert et al. [13], where separate GA regions needed to be manually seeded to be included in the segmentation. N. Lee et al. adopted a level set model [14], and proposed a hybrid approach that distinguishes hypo-fluorescent GA regions from other interfering vessel structures in the FAF images [15], as well as an interactive segmentation approach using the watershed transform algorithm [16]. Their method produced accurate segmentation results when evaluated with pixel-by-pixel mean sensitivity and specificity, 98.3% and 97.7%, respectively.

SD-OCT is a high-resolution imaging modality for assessing the retina and can be useful for visualizing GA. Several studies have compared FAF and SD-OCT with a specific focus on morphologic features [17–20].
Since the aberration and loss of neurosensory elements in the inner retina can be related to the visual prognosis in dry AMD [21], longitudinal SD-OCT imaging in patients with GA is important if these changes are to be followed and evaluated over time [7]. Sayegh et al. [22] evaluated SD-OCT for grading GA compared with FAF images, and concluded that SD-OCT is an appropriate imaging modality for evaluating the extent of GA lesions. To our knowledge, very few methods have been reported in the literature focusing on the segmentation of GA in SD-OCT. Most of the prior work focuses on detecting thinning of the RPE, which is one (but not the only) feature of GA in SD-OCT images. Chiu et al. [23] used graph theory and dynamic programming to segment retinal layers in eyes with GA and drusen (small deposits between the retinal pigment epithelium (RPE) and Bruch’s membrane). Their method performed well in computing retinal layer thickness for eyes containing GA; however, detecting and segmenting the extent of GA was not the focus of this work. Other
available methods for automated GA segmentation are proprietary in commercial systems [24], and the details of these methods and results of formal evaluations are not publicly available. An interesting analysis by Schütze et al. [25] suggests that currently available automated segmentation methods are limited in their ability to accurately assess retinal layer thickness and are thus not accurate in detecting GA. They compared retinal layer thickness deduced from automated segmentation in common commercially available SD-OCT devices and proprietary software with GA areas identified by experts who evaluated GA in fundus and FAF images. Their results showed that while segmentation of retinal layers was fairly successful, automated retinal thickness measurement in SD-OCT does not seem to be good at recognizing GA and defining its extent; the thickness differences observed between eyes presenting GA and control eyes were not statistically significant. They concluded that current automated SD-OCT methods do not identify the extent of RPE atrophy accurately and are limited in their ability to differentiate discrete zones of RPE atrophy in early disease or complicated patterns. At present, if quantitative assessment of GA in SD-OCT images is desired, it needs to be performed by an expert who manually circumscribes the GA lesions in the B-scan images (the primary output from an SD-OCT device, comprising 2D contiguous slices through a volumetric cube of the retina) and subsequently projects the segmentations onto an en face image to show the extent of GA across the retinal surface, a view similar to that seen in FAF images. Each SD-OCT volumetric image data set generally contains 128 or 200 B-scan images (for Cirrus OCT (Carl Zeiss Meditec, Inc., Dublin, CA)). Since this manual circumscription of GA lesions in the B-scans is very time-consuming, it is not routinely performed in clinical practice.
This paper presents a novel semi-automated GA segmentation algorithm for SD-OCT images, together with an evaluation of the accuracy of the method. Our approach starts with the computation of a sub-volume of the retina from the three-dimensional (3D) SD-OCT data set which enhances detection of GA (rather than simply segmenting and evaluating just the RPE). In addition, we generate a two-dimensional (2D) en face projection of the retina from that sub-volume, similar in appearance to FAF images, in order to visualize the extent of GA. Finally, we segment the en face projection to quantify the extent of GA.

2. Methods

2.1 Overview of the method

Figure 1 shows the flowchart of the algorithm, which comprises three steps: (1) For each B-scan, the RPE layer is segmented automatically, and from this, a sub-volume of the retina in the SD-OCT cube is extracted which facilitates generating a projection image with minimal noise where possible GA lesions reside. This sub-volume is restricted to a region beneath the RPE layer containing the choroid, which is the site where abnormally high reflections due to the presence of GA and RPE thinning can be observed in OCT images. (2) An en face GA projection image is generated from the SD-OCT image sub-volume. (3) A geometric active contour model is adopted to segment GA detected in this projection image, and this contour is used to calculate the area (extent) of GA lesions.

Fig. 1. Flowchart of the proposed algorithm

To illustrate the intuition behind our method, Fig. 2 shows a single B-scan obtained from an SD-OCT image volume in a GA patient, displaying a cross section of the retinal layers in the macula. In GA, the RPE atrophies (and may or may not show thinning). In regions of RPE
atrophy, the RPE absorbs less light, thus transmitting more light beyond it (into the choroid layer beneath the RPE). The increased transmission of light into the choroid is manifest in SD-OCT images as regions of increased reflectivity (bright areas). Thus, in areas of GA on B-scans, there are regions of bright pixels beneath the RPE (Fig. 2). Our method detects these regions of increased reflectivity in the choroid layer by generating and analyzing a sub-volume that includes it. We also propose an active contour model as a GA segmentation tool on en face projection images with enhanced GA visualization, which are generated from the three-dimensional SD-OCT sub-volume data. These planar images are constructed by projecting, in the axial direction along each A-scan, the voxels contained in a restricted volume within the choroid region (where bright pixels manifest the presence of GA).

Fig. 2. B-scan from an SD-OCT volume scan of the retina. The RPE layer and GA region are marked with red and blue lines, respectively. The presence of GA appears as bright pixels in the choroid (the region underneath the RPE layer) due to the loss of the RPE layer and subsequent increased reflections from the underlying choroid.

2.2 RPE layer segmentation

Several automatic segmentation methods for SD-OCT retinal layers have been presented in previous literature [26–28]. These methods use knowledge of normal retinal layers and do not consider the possible presence of GA. Thus, they are not ideal for the segmentation of RPE layers containing GA and tend to fail when GA is present. We adopted a simplified RPE segmentation method [29] that takes into account the possible presence of GA. As a first step, the SD-OCT retinal images are smoothed with bilateral filtering [30]. Next, the location of the retinal nerve fiber layer (RNFL, shown in Fig. 2) is estimated by detecting the upper vitreous region. The purpose of detecting the RNFL layer is to facilitate segmenting the RPE layer. The reflectivity of the vitreous region is usually similar throughout the B-scans in an OCT cube (Fig. 2), and thus a constant threshold can be used to extract this background region, which helps identify the contour of the surface of the RNFL. The bottom boundary of the vitreous region is taken as the location of the inner limiting membrane and the inner boundary of the RNFL layer, as shown in Fig. 2. The RPE layer can be identified in SD-OCT retinal images by its bright pixel values, as shown in Fig. 2 and Fig. 3. Thus, intensity-based methods can be useful to extract it. In
addition, the healthy RPE has an approximately constant thickness (about 20 μm). Based on this information and the histogram statistics of the image pixels underneath the segmented RNFL (outside the vitreous region), a threshold can be determined that separates the bright RPE region from the darker background, producing a binary image that forms an initial RPE estimation. A narrow band with a radius of 20 μm (determined by the approximate mean RPE thickness) is generated, and the regions in the initial estimation not connected with this band are then removed. The RPE layer segmentation is further refined by removing small selected regions (regions containing fewer than 150 pixels). To ensure that the RPE is a continuous linear structure, missing pixels between selected regions are also interpolated. Finally, the middle axis of the resulting RPE segmentation is computed for each A-scan (i.e., the individual axial lines forming a B-scan), which produces the final RPE segmentation. Further details are given in [29].

2.3 Generation of GA projection image

A common method for creating a 2D projection from SD-OCT volumetric data sets is the summed-voxel projection (SVP) [31], in which all the voxel values in the 3D data are summed along the axial A-scan lines in the B-scans, producing an image showing the retinal surface en face, similar to color fundus photographs (CFPs) and FAF images. However, the en face SVP fundus image is not ideal for GA visualization due to the confounding influence of highly reflective retinal layers above and below GA lesions in the retina (in particular, the RNFL and RPE layers), which obscure GA lesions. The commercial software on the Cirrus HD-OCT (version 6.0) provides a sub-RPE slab function [24]. The sub-RPE slab is formed by axially projecting only the OCT image data from a region below the contour of the RPE fit.
Our projection method is also derived by restricting the sum of the voxel values to the sub-volume beneath the segmented RPE layer, where the choroid resides and where the high reflections indicating GA will be seen; it improves the traditional SVP image in terms of GA visualization. The lower boundary of the sub-volume is parallel to the top boundary (the RPE layer), where the parallel distance is equal to the minimum distance between the bottom of the cube and the segmented RPE layer. Then, the average intensity of the sub-volume in the axial direction is taken as the intensity value of the GA projection image. We call this the restricted summed-voxel projection (RSVP). Figure 3 shows an example, comparing the traditional SVP projection [31] and the RSVP projection in a patient with GA. Figure 3 shows that the contrast of GA in the RSVP image is higher than in the SVP image, which can improve the performance of a computerized GA segmentation method. On the other hand, this process can introduce aberrant bright signals (e.g., the bright spots near the upper blood vessels in Fig. 3), caused by an inaccurate RPE layer segmentation. In practice, this did not negatively impact our results (see the evaluation of our GA segmentation method below).
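As an illustration, the RPE estimation of Sec. 2.2 and the RSVP of Sec. 2.3 can be sketched in a few lines. This is a Python sketch rather than the paper's Matlab implementation; the function names and the crude per-column thresholding are ours, and the bilateral filtering, RNFL detection, and narrow-band refinement of the full method are omitted:

```python
import numpy as np

def estimate_rpe(cube, thresh):
    """Crude per-A-scan RPE estimate: for each (B-scan, A-scan) column,
    take the middle (mean) row index of the pixels brighter than `thresh`.
    A simplified stand-in for the full method of Sec. 2.2."""
    n_b, depth, width = cube.shape
    rpe = np.zeros((n_b, width), dtype=int)
    for b in range(n_b):
        for x in range(width):
            rows = np.flatnonzero(cube[b, :, x] >= thresh)
            # arbitrary fallback when a column has no bright pixel
            rpe[b, x] = int(rows.mean()) if rows.size else depth // 2
    return rpe

def rsvp(cube, rpe):
    """Restricted summed-voxel projection: average intensity of the
    sub-volume beneath the RPE, down to a lower boundary parallel to it.
    The band thickness is the minimum RPE-to-cube-bottom distance, as
    described in the text (assumes the RPE sits above the cube bottom)."""
    n_b, depth, width = cube.shape
    band = int((depth - 1 - rpe).min())
    proj = np.zeros((n_b, width))
    for b in range(n_b):
        for x in range(width):
            top = rpe[b, x] + 1          # start just below the RPE
            proj[b, x] = cube[b, top:top + band, x].mean()
    return proj
```

For a cube of shape (B-scans, depth, A-scans), `rsvp(cube, estimate_rpe(cube, thresh))` yields a 2D en face image in which choroidal hyper-reflectivity under atrophic RPE appears bright.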


Fig. 3. (a) and (c): SVP and RSVP projections for visualizing GA lesions. The red lines correspond to the cross section of the retina visualized in the B-scan shown in (b). The top and lower boundaries of the projection sub-volume are marked with two parallel dashed yellow lines in (b).

2.4 GA segmentation based on a geometric active contour model

To derive the shape and size of GA lesions, we employed a geometric active contour model for the segmentation of the GA lesions on the RSVP images. These images were denoised using bilateral filtering [30] as a preliminary step to reduce the influence of noise on the segmentations. Geometric active contour (GAC) models were simultaneously proposed by Caselles et al. [32] and by Malladi et al. [33], introduced as an alternative to parametric deformable models and as a way to overcome their limitations. GAC models are based on the theory of curve evolution and geometric flows, and are implemented using level-set numerical algorithms. The basic idea of these models is to represent the movement of a planar curve through the movement of a three-dimensional surface, which has the advantage of handling changes of topology easily. During the evolution of traditional level set methods, re-initialization is necessary to keep the evolving level set function close to a signed distance function. In order to eliminate the need for this costly re-initialization procedure, Li et al. [34] presented a new formulation that forces the level set function to be close to a signed distance function. The proposed formulation is

$$E(\phi) = \mu P(\phi) + E_m(\phi), \quad (1)$$

where

$$P(\phi) = \frac{1}{2}\int_\Omega \left(|\nabla\phi| - 1\right)^2 dx\,dy. \quad (2)$$
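As a quick numerical check of Eq. (2), the internal energy can be evaluated on a discrete grid: a true signed distance function (|∇φ| = 1) gives a value near zero, while any rescaling of it is penalized. A minimal sketch (Python rather than the paper's Matlab; the function name is ours):

```python
import numpy as np

def internal_energy(phi, h=1.0):
    """Numerically evaluate P(phi) = (1/2) * integral of (|grad phi| - 1)^2
    over the grid, using central differences for the gradient."""
    gy, gx = np.gradient(phi, h)
    grad_norm = np.sqrt(gx ** 2 + gy ** 2)
    return 0.5 * np.sum((grad_norm - 1.0) ** 2) * h * h
```

A signed distance function to a circle yields a small value (nonzero only from discretization near the center), while a scaled copy, whose gradient magnitude deviates from 1 everywhere, is heavily penalized.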

The internal energy P(φ) characterizes how close a level set function φ is to a signed distance function in the image domain Ω ⊂ ℝ², penalizing the deviation of φ from a signed distance function during its evolution. μ > 0 is a parameter controlling the effect of the internal energy. The external energy E_m(φ) drives the zero level set toward the object boundaries, and is defined as

$$E_m(\phi) = \lambda L_g(\phi) + \nu A_g(\phi) = \lambda \int_\Omega g(|\nabla I|)\,\delta(\phi)\,|\nabla\phi|\,dx\,dy + \nu \int_\Omega g(|\nabla I|)\,H(-\phi)\,dx\,dy, \quad (3)$$

where λ > 0 and ν are constants, and δ(x) and H(x) are the univariate Dirac function and the Heaviside function, respectively. I(x, y) denotes the image pixel values, and the edge indicator function g is defined by

$$g(|\nabla I|) = \frac{1}{1 + |\nabla G_\sigma * I|^2},$$

where G_σ is a Gaussian kernel with standard deviation σ. Solving the minimization problem by gradient descent results in the evolution equation

$$\frac{\partial\phi}{\partial t} = \mu\left[\Delta\phi - \mathrm{div}\left(\frac{\nabla\phi}{|\nabla\phi|}\right)\right] + \lambda\,\delta(\phi)\,\mathrm{div}\left(g(|\nabla I|)\frac{\nabla\phi}{|\nabla\phi|}\right) + \nu\, g(|\nabla I|)\,\delta(\phi). \quad (4)$$
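A minimal explicit sketch of one step of Eq. (4) follows (Python rather than the paper's Matlab implementation; the binomial smoothing standing in for G_σ, the parameter values, and the wrap-around boundary handling are our assumptions, not the paper's):

```python
import numpy as np

def edge_indicator(I):
    """g = 1 / (1 + |grad(G * I)|^2). A [1, 2, 1]/4 binomial pass along
    each axis stands in for the Gaussian kernel G_sigma (assumption)."""
    s = I.astype(float)
    for ax in (0, 1):
        s = (np.roll(s, 1, ax) + 2.0 * s + np.roll(s, -1, ax)) / 4.0
    gy, gx = np.gradient(s)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)

def evolve_step(phi, g, mu=0.04, lam=5.0, nu=1.5, dt=5.0, eps=1.5):
    """One explicit step of Eq. (4): distance regularization, weighted
    length (curvature) term, and weighted area (balloon) term."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-10     # avoid division by zero
    nx, ny = gx / norm, gy / norm
    curv = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)  # div(grad phi/|grad phi|)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
           + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)  # Laplacian
    dirac = (eps / np.pi) / (eps ** 2 + phi ** 2)  # regularized delta, Eq. (5)
    edge_div = np.gradient(g * nx, axis=1) + np.gradient(g * ny, axis=0)
    return phi + dt * (mu * (lap - curv) + lam * dirac * edge_div + nu * g * dirac)
```

Here ν is a plain constant, as in the original formulation of Eq. (4); the adaptive sign introduced later in this section would replace the scalar `nu` by a per-pixel matrix.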

In the numerical implementation, the regularized Dirac function δ_ε(x) is defined by

$$\delta_\varepsilon(x) = \frac{1}{\pi}\cdot\frac{\varepsilon}{\varepsilon^2 + x^2}, \qquad \lim_{\varepsilon\to 0}\delta_\varepsilon(x) = \delta(x), \quad (5)$$

where ε is a constant, selected as ε = 1.5 in this work. The initial function φ is defined as

$$\phi(0, x, y) = \begin{cases} -\rho, & (x, y) \in \omega \\ 0, & (x, y) \in \partial\omega \\ \rho, & (x, y) \in \Omega \setminus \omega, \end{cases} \quad (6)$$

where ω is a subset of the image domain Ω, ∂ω is the set of points on the boundary of ω, and ρ = 4 is a constant in this paper. The geometric active contour of Eq. (4) is an edge-based model, in which the edge indicator function depends on image gradients. In addition, the edge-based geometric active contour can only evolve in one direction (inward or outward), which requires the initial curve to fully enclose, or be fully enclosed by, the objects to be tracked. To reduce the number of iterations and allow the curve to evolve in any direction, two modifications were made to the traditional formulation, as follows: (1) The results obtained from the global binarization method [35] were taken as the initial function φ. Since the GA regions are generally brighter than background regions, as shown in Fig. 3(c), the initial object boundaries provided by this binarization are generally close to the real GA boundaries, which reduces the evolution time. An alternative initialization may be used, in order to speed up the algorithm, for those cases in which the results from the global binarization are far away from the real GA boundaries. This alternative initialization consists of interactively specifying a polygonal region roughly around the GA lesions and then binarizing only within that polygonal region; the result is taken as the initial function. (2) The geometric active contour was modified to allow the curve to evolve in any direction. The geometric active contour is traditionally defined to evolve either inward or outward from the initial curve, which makes it unsuitable in our case, since the initial object boundaries may lie inside or outside the GA contours. In order to make the curve evolve in any direction, we propose that the sign of the coefficient ν of A_g be adaptively determined according to the image gradient. This adaptive coefficient is then defined as

$$\nu(\nabla I, \phi) = \phi - \phi_I, \quad (7)$$

$$\phi_I(x, y) = \phi(x + rx, y + ry), \quad rx, ry \in \{-1, 0, 1\}, \quad (8)$$

where φI is a changed signed distance function of φ , and rx and ry denote the translations in the horizontal and vertical directions, respectively. The translations ( rx and ry ) depend on the image gradient ∇I . To further reduce the computational complexity, the evolution curve is restricted in a narrow band based on the narrow band level set method [36]. For the points outside this narrow band, φI equals to φ . For each point inside this narrow band, we search the maximum image gradient within its eight-neighbor directions. Figure 4 illustrates the construction of the adaptive coefficient. Figure 4(a) is a signed distance function φ generated with the curve shown in yellow. The sign of φ is negative inside and positive outside the region specified by the curve in blue, which represents the evolving curve. Figure 4(b) shows one pixel ‘O’ (shown in red) and the pixels in its eight-neighbor directions {U i }i =1 (shown in 8

{

}

gray). Let max ∇I io = max ∇I ( x, y ) ( x, y ) ∈U i , i = 18 , be the maximum image gradient

of each neighbor direction for pixel ‘O’, and max ∇I o = max {max ∇I io }

8 i =1

be the maximum

value of the eight-neighbor directions. Then, the translations for pixel ‘O’ can be determined by using the following rule:  rx = 0, ry = 0   rx = 0, ry = 1  rx = −1, ry = 1   rx = −1, ry = 0   rx = −1, ry = −1  rx = 0, ry = −1   rx = 1, ry = −1   rx = 1, ry = 0  rx = 1, ry = 1

max ∇I o ≥ ∇I o max ∇I o = max ∇I1o max ∇I o = max ∇I 2o max ∇I o = max ∇I 3o max ∇I o = max ∇I 4o , max ∇I o = max ∇I 5o max ∇I o = max ∇I 6o

(9)

max ∇I o = max ∇I 7o max ∇I o = max ∇I 8o

where ∇I^o denotes the image gradient of pixel ‘O’. The relationship between the translations and the eight-neighbor directions is shown in Fig. 4(b). The radius considered for the gradient computation in each direction, represented by r_U, is an arbitrary value within the interval [1, r_n], where r_n represents the radius of the narrow band. In Fig. 4(b), this radius was set to r_U = 3. The choice of this radius value determines the range over which possible local evolution of the curve is considered. When the initial object boundaries are very close to the real GA boundaries, r_U should be set to a small value. If the initial object boundaries are far away from the GA boundaries, r_U should be set to a larger value to prevent the evolving curve from settling into a local optimum. In the experiments presented in this paper this radius value was set to 1. By using the adaptive coefficient, the active contour evolves in the direction of the maximum image gradient.
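The adaptive coefficient of Eqs. (7)-(9) can be sketched as follows (Python rather than the paper's Matlab; `grad_I` is assumed to be a precomputed gradient-magnitude image, and for simplicity the sketch scans the whole image instead of only the narrow band, with radius 1 as in the paper's experiments):

```python
import numpy as np

# Neighbor offsets (rx, ry) in the order i = 1..8 of Eq. (9).
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def adaptive_nu(phi, grad_I):
    """Adaptive coefficient nu(grad I, phi) = phi - phi_I of Eqs. (7)-(8):
    phi_I samples phi at the 8-neighbor with the largest image gradient,
    or at the pixel itself when its own gradient dominates."""
    h, w = phi.shape
    phi_shift = phi.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best, rx_b, ry_b = grad_I[y, x], 0, 0   # own gradient wins ties
            for rx, ry in OFFSETS:
                if grad_I[y + ry, x + rx] > best:
                    best, rx_b, ry_b = grad_I[y + ry, x + rx], rx, ry
            phi_shift[y, x] = phi[y + ry_b, x + rx_b]
    return phi - phi_shift
```

The sign of the result is what matters: pixels next to a strong edge receive a coefficient that pushes the zero level set toward that edge, while pixels whose own gradient dominates keep ν = 0.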

#195911 - $15.00 USD

Received 16 Aug 2013; revised 17 Oct 2013; accepted 19 Oct 2013; published 1 Nov 2013

(C) 2013 OSA 1 December 2013 | Vol. 4, No. 12 | DOI:10.1364/BOE.4.002729 | BIOMEDICAL OPTICS EXPRESS 2737

Fig. 4. Construction of the adaptive coefficient. (a) A signed distance function generated with the initial object contour (yellow curve). The blue curve represents the current evolving curve. (b) The evolving direction of each point in the narrow band depends on the maximum image gradient in its eight-neighbor directions (gray pixels) and itself (red pixel).

Based on the modifications above, the modified formulation of the adaptive geometric active contour employed in the segmentation is

E(φ) = (μ/2) ∫_Ω (|∇φ| − 1)² dxdy + λ ∫_Ω g(|∇I|) δ(φ) |∇φ| dxdy + ∫_Ω ν(∇I, φ) g(|∇I|) H(−φ) dxdy,   (10)

and the evolution equation of Eq. (10) is

∂φ/∂t = μ [Δφ − div(∇φ/|∇φ|)] + λ δ(φ) div( g(|∇I|) ∇φ/|∇φ| ) + ν(∇I, φ) g(|∇I|) δ(φ).   (11)

The φ in the coefficient ν is the signed distance function from the previous iteration, not the current one. Thus, the coefficient ν is a constant matrix within each evolution step of Eq. (11).

Our algorithm was implemented in Matlab. To discretize the evolution Eq. (11), we used a finite difference scheme. Let h be the space step and Δt the time step. The spatial partial derivatives ∂φ/∂x and ∂φ/∂y are approximated by central differences, and the temporal partial derivative ∂φ/∂t by the forward difference:

φ_x = ∂φ/∂x = (φ_{i+1,j} − φ_{i−1,j}) / (2h),   φ_y = ∂φ/∂y = (φ_{i,j+1} − φ_{i,j−1}) / (2h),   φ_t = ∂φ/∂t = (φ^{n+1}_{i,j} − φ^n_{i,j}) / Δt,

φ_xx = ∂²φ/∂x² = (φ_{i+1,j} + φ_{i−1,j} − 2φ_{i,j}) / h²,   φ_yy = ∂²φ/∂y² = (φ_{i,j+1} + φ_{i,j−1} − 2φ_{i,j}) / h².

The gradient operator is then ∇φ = (∂φ/∂x, ∂φ/∂y). The curvature κ is computed on the level sets of φ:


 ∇φ  ∇φ

κ = div 

 φxxφ y2 − 2φxφ yφxy + φ yyφx2 ,  = 3 2 2 2  φ φ + ( x y)

 ∇φ  ∂  φ x  ∂  φ y  div  g ( ∇I ) + g   = g ∇φ  ∂x  ∇φ  ∂y  ∇φ  

φx

= gx

φ +φ 2 x

2 y

+ gy

φy φ + φ y2 2 x

+ gκ .

Laplace operator is Δφ = φxx + φ yy = φi +1, j + φi −1, j + φi , j +1 + φi , j −1 − 4φi , j . Then, the approximation of evolution Eq. (11) by the above difference scheme can be simply written as

φin, +j 1 − φin, j Δt

= Ent (φin, j ) ,

(12)

where Ent (φin, j ) is the approximation of the right hand side in (11) by the above spatial difference scheme. The difference Eq. (12) can be written as:

φin, +j 1 = φin, j + Δt ⋅ E (φin, j ) .

(13)

0.1 , λ = 5, h = 1, ε = 1.5 . The Δt stopping criteria of the algorithm is the number of iterations, which is 20 in this paper. These values were derived heuristically, being derived by visual inspection of the results produced by the algorithm, tuning them until we obtained satisfactory qualitative results.

In our experiments, we have used the parameters Δt = 2, μ =
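The discretized update of Eqs. (12)-(13) can be sketched as follows. This is an illustrative Python sketch, not the authors' Matlab code: it keeps the distance-regularization and weighted-length terms of Eq. (11), omits the adaptive balloon coefficient ν for brevity, and the particular smoothed Dirac delta of width ε is an assumption (one commonly used with such models).

```python
import numpy as np

def derivatives(phi, h=1.0):
    """Central differences for first/second derivatives, with replicated
    (edge-padded) boundaries."""
    p = np.pad(phi, 1, mode='edge')
    phi_x = (p[1:-1, 2:] - p[1:-1, :-2]) / (2 * h)
    phi_y = (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * h)
    phi_xx = (p[1:-1, 2:] + p[1:-1, :-2] - 2 * phi) / h**2
    phi_yy = (p[2:, 1:-1] + p[:-2, 1:-1] - 2 * phi) / h**2
    phi_xy = (p[2:, 2:] + p[:-2, :-2] - p[2:, :-2] - p[:-2, 2:]) / (4 * h**2)
    return phi_x, phi_y, phi_xx, phi_yy, phi_xy

def curvature(phi, h=1.0):
    """kappa = div(grad(phi)/|grad(phi)|), via the closed form in the text."""
    fx, fy, fxx, fyy, fxy = derivatives(phi, h)
    return (fxx * fy**2 - 2 * fx * fy * fxy + fyy * fx**2) / (fx**2 + fy**2 + 1e-8)**1.5

def evolve_step(phi, g, dt=2.0, mu=0.05, lam=5.0, eps=1.5, h=1.0):
    """One explicit step phi^{n+1} = phi^n + dt * E(phi^n); mu = 0.1/dt
    as in the paper's parameter choice. The adaptive balloon term nu of
    Eq. (11) is omitted here."""
    fx, fy, fxx, fyy, fxy = derivatives(phi, h)
    kappa = curvature(phi, h)
    norm = np.sqrt(fx**2 + fy**2 + 1e-8)
    # Smoothed Dirac delta of width eps (an assumed regularization).
    delta = (eps / np.pi) / (eps**2 + phi**2)
    gx, gy, *_ = derivatives(g, h)
    laplacian = fxx + fyy
    # div(g * grad(phi)/|grad(phi)|) expanded as g_x*phi_x/|.| + g_y*phi_y/|.| + g*kappa
    edge_term = gx * fx / norm + gy * fy / norm + g * kappa
    return phi + dt * (mu * (laplacian - kappa) + lam * delta * edge_term)
```

For a signed distance function of a circle of radius r, the computed κ is approximately 1/r away from the center, which provides a quick sanity check of the discretization.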

2.5 Experimental evaluation studies and results

We used two different data sets to evaluate our algorithm; all cases presented with advanced non-neovascular AMD with extensive GA. The first data set consisted of 55 longitudinal SD-OCT cube scans from twelve eyes in eight patients with GA (acquired with the Cirrus OCT device, Carl Zeiss Meditec, Inc., Dublin, CA). Each cube consisted of 512 × 128 × 1024 voxels, corresponding to a 6 × 6 × 2 mm³ volume (in the lateral, azimuthal and axial directions, respectively) centered at the macular region of the retina. The SD-OCT scans were segmented using our algorithm. The RSVP images from those SD-OCT scans were also segmented by hand twice by two different graders, in separate reading sessions several weeks apart to minimize recall of cases. We then performed a qualitative and quantitative comparison between the semi-automated segmentation results produced by our algorithm and the manual segmentations drawn by the graders. We also established the variability observed in the manual segmentations drawn by different graders (inter-grader agreement) and by the same grader at different times (intra-grader agreement) in the RSVP images.

The second data set consisted of 56 SD-OCT cube scans from 56 eyes in 56 patients with GA (acquired with the Cirrus OCT device). Each cube consisted of 200 × 200 × 1024 voxels, corresponding to a 6 × 6 × 2 mm³ volume (in the lateral, azimuthal and axial directions, respectively) centered at the macular region of the retina. These scans were segmented using our algorithm and also using the Carl Zeiss Meditec proprietary software (Cirrus version 6.0). For this second data set we also had corresponding FAF images (acquired with Spectralis, Heidelberg Engineering; all covering an 8.56 × 8.56 mm area of the retina, but with image sizes of either 496 × 496, 768 × 768 or 1536 × 1536 pixels due to post-processing).
These FAF images were segmented by hand by one expert grader, and their outlines were compared to those produced by our segmentation method as well as those produced by the Cirrus software.


The segmentation results from our method and from the Cirrus software were also compared with each other. In order to compare the GA segmentations drawn in FAF and in SD-OCT RSVP images, the RSVP images were computationally transformed into the FAF coordinate system by an affine transformation determined through manual registration of the retinal blood vessels observed in the image pairs. An example of this correspondence can be observed in Fig. 12. We assumed that the agreement between GA lesions segmented in SD-OCT images and in FAF images may not be perfect, since these are two very different imaging techniques, but it provides an idea of how well the segmentation results correlate with the method used for GA identification in current clinical practice.

For the evaluation in the first data set, multiple SD-OCT images had been acquired in the eight patients over time, enabling us to visually assess the ability of our method to identify temporal changes in GA lesions and to indicate its potential utility to ophthalmologists in evaluating changes in GA by visual inspection. We reviewed the images of the GA lesions in the first data set, noting change in the total GA area to detect progression of GA in each patient. We computed the total area of all GA lesions present in an eye at each time point for two of these patients and generated plots of GA area vs. time to demonstrate the potential of our method to provide a quantitative imaging biomarker for following GA disease progression.

For a quantitative evaluation, we employed four different metrics to assess the differences between pairs of segmentation methods (our method vs. each of the expert graders, each grader vs. the other, and our method vs. the commercial software): absolute area difference (AAD), overlap ratio (OR), correlation coefficient (cc) and the Mann-Whitney U-test. The AAD measures the absolute difference between the GA area in two different segmentations of the same scan.
The mean and standard deviation (std) of the AAD are computed across the K scans available in the data set:

AAD(X; Y) = (1/K) Σ_{k=1}^{K} |Area(X_k) − Area(Y_k)|,   (14)

std(AAD)(X; Y) = sqrt( (1/K) Σ_{k=1}^{K} ( |Area(X_k) − Area(Y_k)| − AAD(X; Y) )² ),   (15)

where X_k and Y_k indicate the regions inside the segmented GA contour of scan k, produced by methods (or graders) X and Y, respectively. The OR is defined as the percentage of area in which both segmentation methods agree with respect to the presence of GA over the total area in which at least one of the methods detects GA. The mean OR and its standard deviation are computed across scans in the data set in the same way as for the AAD:

OR(X; Y) = (1/K) Σ_{k=1}^{K} Area(X_k ∩ Y_k) / Area(X_k ∪ Y_k),   (16)

std(OR)(X; Y) = sqrt( (1/K) Σ_{k=1}^{K} ( Area(X_k ∩ Y_k)/Area(X_k ∪ Y_k) − OR(X; Y) )² ),   (17)

where the operator ∪ indicates union and ∩ indicates intersection. The correlation coefficients were computed between the areas of GA segmented by our segmentation method and segmentations drawn by hand by expert graders (as well as between different graders and sessions), measuring the linear dependence using each scan as an observation. We used the Mann-Whitney U-test to measure the possible statistical differences in the area measured between our segmentations and those drawn by hand.
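The AAD and OR metrics of Eqs. (14)-(17) can be sketched as follows in Python. Function names are illustrative; the authors' implementation is not described at this level of detail.

```python
import numpy as np

def aad(areas_x, areas_y):
    """Absolute area difference, Eqs. (14)-(15): mean and (population) std
    of |Area(X_k) - Area(Y_k)| across the K scans."""
    d = np.abs(np.asarray(areas_x, float) - np.asarray(areas_y, float))
    return d.mean(), d.std()

def overlap_ratio(masks_x, masks_y):
    """Overlap ratio, Eqs. (16)-(17): mean and std of
    Area(X_k ∩ Y_k) / Area(X_k ∪ Y_k) over boolean segmentation masks."""
    ors = []
    for x, y in zip(masks_x, masks_y):
        x, y = np.asarray(x, bool), np.asarray(y, bool)
        union = np.logical_or(x, y).sum()
        ors.append(np.logical_and(x, y).sum() / union if union else 1.0)
    ors = np.asarray(ors)
    return ors.mean(), ors.std()
```

Note that the OR is the intersection-over-union (Jaccard index) expressed as a percentage in the tables below, computed per scan and then averaged.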


3. Results

3.1 Qualitative evaluation

Figures 5 and 6 show the GA segmentation results for two patients from the first data set who had multiple SD-OCT studies over the course of their disease. The red lines in Figs. 7(a) and 7(d) plot the corresponding total GA lesion areas (for the segmented GA lesions in Fig. 5 and Fig. 6, respectively). The GA segmentation results (Figs. 5 and 6) appeared satisfactory by visual inspection. The properties of the geometric active contour model ensure that the boundaries of the segmentation results are smooth and that multiple separated GA regions can be segmented simultaneously. Figure 7 shows the GA area changes for the first data set, with the results of the two experts and our method. From Fig. 7, it can be seen that the trend in GA area over time for each eye is almost identical between our results and those of the two experts, although the GA areas from our method are generally smaller than those of the experts. For one patient (Fig. 7(j)), only one imaging time point was available.

Fig. 5. GA segmentation results on RSVP images for the right eye of an 88-year-old female patient. The imaging dates of (a)-(f) are 3/14/2008, 9/26/2008, 4/3/2009, 2/17/2010, 6/16/2010, 12/8/2010, respectively.


Fig. 6. GA segmentation results on RSVP images for the right eye of a 76-year-old female patient. The imaging dates of (a)-(j) are 8/21/2008, 1/6/2010, 4/7/2010, 7/13/2010, 8/17/2010, 9/14/2010, 10/12/2010, 11/15/2010, 12/20/2010, 1/24/2011, respectively.


Fig. 7. GA area vs. time for segmentation of GA lesions obtained from twelve eyes in eight patients. The results of two experts (blue and magenta lines) and our method (red line) are marked as shown in (a). Four patients have results for both the right and left eyes, namely (b)/(c), (e)/(f), (h)/(i) and (k)/(l). The other four patients have results for only one eye, namely (a), (d), (g) and (j).

Figure 8 shows the worst segmentation result we observed (from the patient shown in Fig. 5). Comparison with the segmentations drawn by the experts (Figs. 8(a) and 8(b)) shows that our method missed some small GA regions (such as the yellow dashed region in Fig. 8(c)) and included some small background regions (such as the white dashed region in Fig. 8(c)). These segmentation errors were caused by the smoothing effect of the image denoising and by the external energy of the geometric active contour model.


Fig. 8. The worst segmentation result of patient #1. (a) Expert 1. (b) Expert 2. (c) Our segmentation algorithm.

Figure 9 shows another example of a poor segmentation result (from the patient shown in Fig. 6). The main segmentation errors are at the right boundary and the center part of the GA region, marked with yellow and white dashed ovals in Fig. 9(c), respectively. These errors were produced by an inaccurate RPE layer estimation, resulting in the coarse right GA boundary. Figure 10 shows the B-scan corresponding to the red dashed line in Fig. 9. It can be seen from Fig. 10 that most of the RPE layer has disappeared and the retinal layer structure is greatly altered by the GA, which made the RPE estimation difficult and inaccurate in this case and led to segmentation errors.

Fig. 9. The worst segmentation result of patient #2. (a) Expert 1. (b) Expert 2. (c) Our segmentation algorithm.


Fig. 10. B-scan corresponding to the red dashed line in Fig. 9.

3.2 Quantitative evaluation: our method vs. expert graders

Since there is no absolute ground truth for GA segmentations obtained in vivo, we evaluated the variability observed in the GA segmentations by different graders (inter-observer agreement) and by the same grader at different sessions (intra-observer agreement) in the SD-OCT RSVP images using the first data set. Figure 3 indicates that the contrast for the visualization of GA in RSVP images is better than that in SVP images. The results from this evaluation are summarized in Table 1, where A1 represents the segmentations of the first grader in the first session, A2 the first grader in the second session, B1 the second grader in the first session and B2 the second grader in the second session. For the assessment of the differences between the segmentations done by different graders, the union of both sessions was considered for each grader: A1&2 and B1&2 represent the first and second grader, respectively.

The correlations observed in measured areas were very high, both for segmentations drawn by different graders and by the same grader at different times. This was also the case for the overlap ratios in GA areas. The areas of the segmentations created by one grader (at both sessions) were slightly higher than those of the other grader. The differences in measured areas were also small, with a mean value under 5% difference within and between graders (percentages are with respect to the mean value of both methods). The high p-values in the U-test indicate that no statistically significant differences were found in the distribution of areas. These results indicate that measurement and quantification of GA in the RSVP images is robust and repeatable across different graders and across sessions of the same grader.

Table 1. Within-expert and between-expert correlation coefficients (cc), paired U-test p-values, absolute GA area differences and overlap ratio evaluation between the manual segmentations

Methods compared          | Number of eyes / cubes | cc    | p-value (U-test) | AAD [mm²] (mean, std) | AAD [%] (mean, std) | OR [%] (mean, std)
Expert A1 - Expert A2     | 8 / 55                 | 0.998 | 0.658            | 0.239 ± 0.210         | 3.70 ± 2.97         | 93.29 ± 3.02
Expert B1 - Expert B2     | 8 / 55                 | 0.996 | 0.756            | 0.243 ± 0.412         | 3.34 ± 5.37         | 93.06 ± 5.79
Expert A1&2 - Expert B1&2 | 8 / 110                | 0.995 | 0.522            | 0.314 ± 0.466         | 4.68 ± 5.70         | 91.28 ± 6.04

We compared the segmentations produced by our algorithm with those drawn by the two graders in this first data set by creating an "average expert segmentation." This was generated by averaging the four segmentations drawn by the two experts at the two separate sessions, computed as a majority (two or more) vote. Table 2 summarizes the results of this comparison. The correlations between the areas computed by our method and those produced by the average expert were high. The measured overlap ratios were lower in this case than when comparing segmentations drawn by individual graders, but still showed good agreement (around 72.6% with the average expert segmentation). Similarly, the measured area differences were higher when evaluating our segmentation algorithm against expert segmentations (Table 2; percentages are with respect to the ground truth area, taken as the grader segmentation) than when evaluating manual segmentations against each other (Table 1). Nevertheless, visual inspection of the segmentation results (Fig. 5 and Fig. 6) showed that our segmentations appeared satisfactory.

Table 2. Correlation coefficients (cc), paired U-test p-values, absolute GA area differences and overlap ratio between our segmentation method (Our Seg.) and expert segmentations

Methods compared       | Number of eyes / cubes | cc    | p-value (U-test) | AAD [mm²] (mean, std) | AAD [%] (mean, std) | OR [%] (mean, std)
Our Seg. - Avg. Expert | 8 / 55                 | 0.970 | 0.026            | 1.438 ± 1.26          | 27.17 ± 22.06       | 72.60 ± 15.35
Our Seg. - Expert A1   | 8 / 55                 | 0.967 | 0.047            | 1.308 ± 1.28          | 25.23 ± 22.71       | 73.26 ± 15.61
Our Seg. - Expert A2   | 8 / 55                 | 0.964 | 0.024            | 1.404 ± 1.31          | 26.14 ± 21.48       | 73.12 ± 15.15
Our Seg. - Expert B1   | 8 / 55                 | 0.968 | 0.017            | 1.597 ± 1.33          | 29.21 ± 22.17       | 71.16 ± 15.42
Our Seg. - Expert B2   | 8 / 55                 | 0.977 | 0.022            | 1.465 ± 1.14          | 27.62 ± 20.57       | 72.09 ± 14.82

3.3 Quantitative evaluation: our method vs. commercial method and FAF manual segmentations

We used the second data set to compare the performance of our method with a commercial GA segmentation software system and with expert segmentations drawn in FAF images, which are commonly used for GA assessment in clinical practice. Since SD-OCT and FAF are two very different techniques producing images of different sizes and extents, the pixel sizes and structures found in them needed to be adjusted and registered in order to establish a meaningful comparison. As described in the Methods section, the structural correspondence between these techniques was found by an affine transformation defined by manual registration of retinal blood vessels between the two imaging modalities on a case-by-case basis. Examples of this correspondence for five representative patients are displayed in Fig. 11, where the RSVP image obtained from each SD-OCT scan is superimposed on the corresponding FAF image; the manual segmentations drawn in the FAF images are shown in red, the results obtained from our segmentation algorithm in green, and the results obtained from the commercial software in blue.
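The vessel-based registration described above amounts to estimating a 2D affine transform from manually identified point correspondences. A minimal least-squares sketch in Python follows; the function names and the landmark-based workflow are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2D affine transform mapping src_pts -> dst_pts.
    Requires at least 3 non-collinear correspondences (e.g. manually
    clicked vessel bifurcations in the RSVP and FAF images).
    Returns a 2x3 matrix [A | t] such that dst ≈ A @ src + t."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    # Homogeneous design matrix: one row [x, y, 1] per landmark.
    M = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)  # shape (3, 2)
    return params.T  # shape (2, 3)

def apply_affine(T, pts):
    """Map points through the 2x3 affine matrix returned by fit_affine."""
    pts = np.asarray(pts, float)
    return pts @ T[:, :2].T + T[:, 2]
```

With more than three landmark pairs the system is overdetermined and the least-squares fit averages out small manual clicking errors, which is the usual motivation for marking several vessel bifurcations per image pair.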


Fig. 11. FAF and RSVP composite images displaying the overlapping segmentations drawn by hand by an expert in the FAF images (red), by our proposed algorithm (green) and by the commercial software (blue). The composite image was obtained by manual registration to align retinal blood vessels. (a) Patient 1. (b) Patient 2. (c) Patient 3. (d) Patient 4. (e) Patient 5.

Visual inspection of the 56 different eyes segmented indicated that our method appeared to produce better quality GA segmentations overall than the commercial software, and it even performed satisfactorily in some of the cases where the commercial software seemed to fail. For example, in Figs. 11(c)-11(e) the commercial software clearly overestimated the area of GA, segmenting independent areas where no sign of GA can be observed. In Fig. 11(e), this overestimation involved the optic disk, as it was partly present in the OCT scan. We observed such large overestimation errors in the segmentations produced by the commercial software in 19 out of the 56 cases, whereas our method did not produce such large overestimations in any of the cases.

A quantitative comparison of the segmentations produced by our algorithm, the commercial software, and the expert segmentations in the FAF images is summarized in Table 3. The correlation coefficient between areas measured using our algorithm and the expert segmentations in FAF images was very high (0.955), much higher than that observed between the commercial software segmentation results and expert segmentations (0.807) and between the two segmentation algorithms (0.820). All correlation coefficients were statistically significant (p < 0.05). The mean overlap ratio (Table 3) was higher between our method and the manual segmentations in FAF images than between Cirrus and the manual segmentations. The differences between the measured overlap ratios in Table 3 were large, although not statistically significant (analysis of variance (ANOVA) resulted in p = 0.062). The overlap ratio between our method and the manual segmentations was lower than in the previous data set (Table 2), most probably due to the intrinsic differences between SD-OCT and FAF images and possible bias introduced by the registration process.

Table 3. Correlation coefficients (cc), paired Mann-Whitney U-test p-values, absolute differences and overlap ratio in areas of GA between our segmentation method (Our Seg.), commercial software segmentation (Com. Sw. Seg.), and expert segmentations in FAF images

Methods compared         | Number of eyes / cubes | cc    | p-value (U-test) | AAD [mm²] (mean, std) | AAD [%] (mean, std) | OR [%] (mean, std)
Our Seg. - FAF           | 56 / 56                | 0.955 | 0.524            | 0.951 ± 1.28          | 19.68 ± 22.75       | 65.88 ± 18.38
Com. Sw. Seg. - FAF      | 56 / 56                | 0.807 | 0.140            | 1.796 ± 2.51          | 34.13 ± 38.62       | 59.33 ± 22.48
Our Seg. - Com. Sw. Seg. | 56 / 56                | 0.820 | 0.448            | 1.746 ± 2.34          | 32.85 ± 42.41       | 59.33 ± 22.48

4. Discussion

Our work provides an improved en face visualization technique for GA in SD-OCT images and a new semi-automated GA segmentation method. Our techniques demonstrated good overall performance, both in qualitative visual assessment and when compared with manual segmentations created by experienced graders.

One advantage of developing GA segmentation and quantification techniques for SD-OCT images over other modalities, such as FAF, is the capability for depth-resolved imaging. While multiple GA segmentation methods have been described for FAF, FAF images are the result of depth integration, superimposing different retinal structures, which may mask pathologies or retinal features. A substantial challenge in GA quantification is that, even when segmentations are drawn by hand, there is variability in the results due to this masking effect. Our proposed methods take advantage of the depth information available in OCT images by producing an en face image derived from the projection of a sub-volume containing the choroid region (the RSVP image), which is where abnormally high reflectivity due to the presence of GA and RPE thinning can be observed in OCT images. This results in images in which the contrast of the GA visualization is enhanced. The low inter-grader and intra-grader variability in RSVP images shown in Table 1 demonstrates the utility of these images for GA visualization.

However, our methods also have some limitations. An erroneous segmentation of the RPE layer may produce artifacts in the RSVP images, like those observed in Fig. 2(c). Though we observed that such artifacts had little or no effect on GA visualization and GA segmentation, they are nonetheless challenges that could have clinical impact and should be addressed in future work. Another potential limitation is that our segmentation technique uses a two-dimensional segmentation of the RPE layer, that is, B-scan by B-scan.
Numerous recent publications have pointed out the improvements in retinal layer segmentation obtained by taking a three-dimensional approach, considering information from adjacent B-scans. In this work, our focus was on the use of a geometric active contour model for segmentation of GA in SD-OCT images. The precision of the RPE segmentation certainly influences the quality of the RSVP images, and a better and more precise segmentation method could be adopted in further development of our methods, though not using three-dimensional segmentation does not diminish the significance of our method and its results in segmenting GA. Nevertheless, our current work includes a novel three-dimensional multiple layer segmentation method and the evaluation of our GA segmentation technique using these layer segmentations, which will be presented in the future.

Another limitation of our method is that the segmentation technique relies on numerous thresholds determined from the histogram of values of each B-scan. Although such thresholds are determined automatically and directly from the sample to be segmented, and they produced appropriate results in the cases included in our analysis without our needing to vary any parameters between cases, they may not be optimal for OCT cubes obtained from different commercial systems and vendors. In the future, we will test our methods on a larger range of SD-OCT cubes obtained from different clinical systems to evaluate the extensibility of our approach. In addition, we will expand our current automated RPE layer segmentation to a three-dimensional algorithm using information from adjacent B-scans, which may further improve the resulting RSVP images and GA segmentation.

Apart from the mentioned artifacts in the RSVP images, our method produced satisfactory GA segmentations when compared to manual segmentations drawn by expert graders. One of the difficulties in performing a meaningful quantitative evaluation of our algorithm is the lack of detailed performance evaluations of similar methods in SD-OCT against which to compare. While automated GA segmentation in OCT is a very active area of research, most of the methods developed to date reside in commercial systems, the details of their methods have not been published, and it is difficult to reproduce or compare those methods in comparative evaluation studies.
To our knowledge, the only published evaluation of several GA segmentation methods in OCT based its evaluation on the differences observed among areas of retinal thinning [25]. The results were not optimal due to the lack of correlation between those segmented areas of retinal thinning and the actual GA size as identified in scanning laser ophthalmoscopy and fundus photography, and also showed substantial limitations in reliably identifying zones of GA when using the automatic segmentation procedures in current SD-OCT devices [25]. Nevertheless, recent articles have provided evidence of differences in retinal thickness between eyes presenting intermediate AMD stages, including eyes presenting GA, and healthy controls. A recent article by Farsiu et al. [37] defined possible areas of GA as those with an RPE complex thickness more than 3 standard deviations below the mean observed in control subjects using SD-OCT. Although the performance of determining areas of GA with this method was not evaluated in that study, the definition seemed to improve the differentiation between AMD and control subjects when used together with other characteristics evaluated via SD-OCT, such as abnormal thickness areas in the RPE complex due to drusen and total retinal and RPE complex volume. An approach more similar to the work presented here in terms of defining the extent of GA was described in a recent article by Nunes et al. [38]. In that article, areas of GA were defined by manually outlining "brighter" regions observed when projecting a thin slab of the SD-OCT cube onto an en face image. The slab was formed by the image data from a region extending 65 µm to 400 µm below the RPE.
Additional observation of en face images produced from another thin slab around the IS/OS junction (a 20-µm thick slab representing the region located between 20 µm and 40 µm above the RPE) in a longitudinal study suggested that areas presenting thinning in the IS/OS region may predict the areas where GA is likely to progress within one year. That study indicates that although loss of outer retinal thickness appears to be a predictor of future GA appearance, it does not perfectly correspond to the current extent of GA. Moreover, the pattern of outer retinal disruption extending beyond the borders of GA accurately predicted one-year progression in 13 of 30 eyes (43.3%), while in the remaining eyes the disrupted area was much larger than the observed progression. We undertook a performance evaluation of our method as well as a comparison with a known commercially available GA segmentation software program (Cirrus), and compared both results with hand-drawn segmentations. It is interesting to note that among the evaluated


segmentation methods in the previously mentioned publication [25], the Cirrus software seemed to perform the best, so we selected it as the commercial software to compare with our method. These results are summarized in Tables 2 and 3. The segmentations produced by our method presented high area correlation and overlap ratio, as well as relatively low area differences, with respect to the manual segmentations. Our method also produced segmentation results that were more similar to the manual segmentations drawn by an expert grader than those produced by the commercial software we evaluated. We hope that our quantitative results and measurements can also serve as benchmarks for comparing the performance of future GA segmentation methods in SD-OCT.

Quantifying GA over time is extremely important, as disease progression is directly related to objective changes in GA, such as total area. We believe that our RSVP technique for enhancing the visualization of GA and our segmentation method can provide a better estimation of GA extent, as well as reduce the time burden and labor costs of manual segmentation. The results shown in Fig. 7 suggest that there is variability in the GA area estimated by our method (since GA extent is not expected to alternately grow and shrink over time), probably due to noise; however, this variance in area is small. Further investigation of the robustness and reproducibility of the area estimations of our method, as well as of other segmentation methods, will be pursued in the future. Ultimately, our methods could enable quantitative evaluation of a longitudinal series of GA lesions, which could be useful for clinical evaluation of the disease.

5. Conclusions

This paper presents a semi-automated segmentation algorithm for GA in SD-OCT images. A projection image, constructed from a sub-volume of the retina beneath the RPE in which the GA abnormalities show most clearly, appears to improve the visualization of GA lesions. A study of the variability in segmentations between experts and within the same expert at different sessions suggests that these projection images provide a robust visualization of GA. An edge-based geometric active contour model was adopted to segment GA in the resulting RSVP projection images. Qualitative and quantitative experimental results indicate that the algorithm shows promising results when compared to expert segmentations in the patient data sets studied, and that the method may be effective for GA segmentation in SD-OCT images. The segmentation algorithm can also be used to extract and assess quantitative GA features in longitudinal OCT studies, such as the area and extent of GA. A performance comparison of our algorithm with a commercially available GA segmentation software program suggests that our algorithm provides more accurate GA segmentations than the commercial software.

Acknowledgments

The authors wish to thank Dr. Mary Durbin from Carl Zeiss Meditec as well as Dr. Gordon, Dr. Pearlman, Dr. Lit and Dr. Boyer for their help preparing and providing sample cases and results obtained from FAF images and the Cirrus software. This work was supported by a grant from the Bio-X Interdisciplinary Initiatives Program of Stanford University, a grant from the National Cancer Institute, National Institutes of Health, grant No. U01-CA-142555, and grants from the Programme of Introducing Talents of Discipline to Universities under Grant No. B13022 and Qing Lan Project.

