MICROSCOPY RESEARCH AND TECHNIQUE 77:697–713 (2014)

An Automated System for Whole Microscopic Image Acquisition and Analysis

GLORIA BUENO,* OSCAR DENIZ, MARIA DEL MILAGRO FERNANDEZ-CARROBLES, NOELIA VALLEZ, AND JESUS SALIDO
VISILAB Research Group, E.T.S. Ingenieros Industriales, University of Castilla-La Mancha, Av. Camilo Jose Cela s/n, Ciudad Real 13071, Spain

KEY WORDS

whole slide image; virtual microscopy; microscopic image acquisition; pathology

ABSTRACT The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples, and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional digital monochromatic camera together with the default color camera and LED transmitted illumination (RGB). Monochrome cameras are the preferred method of acquisition for fluorescence microscopy. The system is able to correctly digitize and compose large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast, and focus. It has been evaluated on 150 tissue samples of brain autopsies, prostate biopsies, and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article focuses on the hardware set-up and the acquisition software, although results of the implemented image processing techniques included in the software and applied to the different tissue samples are also presented. Microsc. Res. Tech. 77:697–713, 2014. © 2014 Wiley Periodicals, Inc.

INTRODUCTION

Novel digital imaging modalities, called Virtual Microscopy (VM) and Whole Slide Imaging (WSI), have enabled storage and fast dissemination of image data in Pathology. These technologies encompass the high-resolution scanning of tissue slides and derived technologies, including automatic digitization and computational processing of whole microscopic slides (Punys et al., 2009; Kayser et al., 2006; Lundin et al., 2009). Weinstein et al. (2009) define VM and WSI as enabling technologies for telepathology and distinguish between four generations of telepathology systems, including the so-called virtual slide telepathology systems. The first generation of telepathology systems consisted of acquiring the microscope image and sending it to the pathologist. The acquisition of the digital image may be in real-time mode (dynamic) or in store-and-forward mode (static). The pathologist can remotely manipulate the microscope to select the slide region to visualize, requiring a new acquisition whenever this region changes. The second generation of these systems, which appeared in 1989–2000, introduces the concept of the virtual slide. These systems incorporate the merging of multiple digital images (tiles) to create a complete virtual slide. That is, all the tiles have to be merged into a final mosaic image, which is known as a "virtual slide" if the digitized area matches the full slide. There are two


methods which may be adopted to merge the tiles: tiling or stitching. The completed virtual slide can be produced automatically or interactively by second generation telepathology systems. In the automatic mode, the microscope stage is programmed to scan an entire slide and capture images of all microscopic fields or tiles without operator intervention, whereas in the interactive mode a pathologist remotely controls the order in which images are acquired and stitched. The third generation of telepathology systems, developed in 2000–2001, is based on rapid and hybrid virtual slide processors. They may combine automatic and interactive digitization modes or operate exclusively in the automatic scanning mode. Finally, the fourth generation, developed in 2001, is based on ultrarapid slide processors (Weinstein et al., 2001).

*Correspondence to: VISILAB research group, E.T.S. Ingenieros Industriales, University of Castilla-La Mancha, Av. Camilo Jose Cela s/n, Ciudad Real 13071, Spain. E-mail: [email protected]
Received 23 October 2013; accepted in revised form 30 May 2014
The authors declare that there is no conflict of interests associated with the tools and datasets used in this article.
REVIEW EDITOR: Dr. Peter Saggau
DOI 10.1002/jemt.22391
Published online 10 June 2014 in Wiley Online Library (wileyonlinelibrary.com).


The additional enabling technologies developed within the third generation of telepathology systems may be defined as follows. WSI is a technique with two components: (i) the creation of digital images of the entire area of a glass histopathology or cytopathology slide, and (ii) the viewing of such a large digital slide image using a virtual slide viewer. VM, in turn, is the technology that emulates a light microscope using virtual slides manipulated on a computer screen by means of microscope emulator software (Weinstein et al., 2009).

The term "Virtual Microscope" was first used by the Computing Department of the University of Maryland and the Pathology Department of Johns Hopkins Hospital in Baltimore (Ferreira et al., 1997) one decade ago. They presented a software system employing a client/server architecture to provide a realistic emulation of a light microscope. The client software runs on a user's PC or workstation, while the database software for storing, retrieving, and processing the microscope image data runs on a high performance parallel computer at a remote site. Catalyürek et al. (2003) present a similar work with further details of the implementation and a performance validation. To solve the main problem of providing a system that stores and processes very large quantities of slide image data, they presented two versions of the VM server software: (i) coupled parallel computers with a disk farm and (ii) distributed computing environments providing access to archival storage systems. No description of the digital image acquisition is made.

Since the work in 1997 by Ferreira et al., different VM systems have been published. All of them discuss several techniques related to providing the performance necessary to achieve rapid response times, mainly in dealing with the enormous amounts of data (1–10 GB per virtual slide). Thus, one of the challenges of WSI is the analysis and processing of the digital images due to their size. As an example, our system produces 30,000 × 30,000 pixel images from a 1.25 cm² area of a slide when using 20× magnification (resolution 0.37 μm/pixel). This image, stored as RGB with a bit depth of 8 bits per channel, uses approximately 2.5 GB of disk space. The same region, digitized with 40× magnification (resolution 0.185 μm/pixel), produces a 10 GB image. It is impossible for most computers to load and visualize these images at once, not to mention to process them, so special techniques need to be used.

The development of the first automated, high-resolution WSI system was done in 1999 and patented in 2004 by Wetzel and Gilbertson (Wetzel et al., 2004). The system is based on a microscope with a motorized stage, moving along the X and Y axes, a pulsed light illumination system, and a stage position detector. The motorized stage moves the slide while an image of the slide is captured with a microscopic camera. The system also uses a macroscopic camera to capture a thumbnail image of the slide. The strobe illumination is used to produce aligned image tiles. Since the development of the system by Wetzel et al. (2004), interest in using WSI for different applications in pathology has grown significantly (Ghaznavi et al., 2013). A good review of existing digital slide systems, divided into robotized microscopes, slide scanners, and array microscopes (i.e., third and fourth generation telepathology systems), was made in García-Rojo et al. (2006).

Most of the research works reported in the literature are focused on the implementation of VM systems and their evaluation for use in pathology. Demichelis et al. (2002) described a VM system to manage external robotic microscopes and image acquisition for the construction of virtual slides by means of Dynamic Linked Libraries. The virtual slides are created from lower magnification images as a patchwork and saved at 20:1 JPEG compression. They can then be viewed on a computer by means of a user-friendly interface that allows the user to select different regions and to examine them at different magnifications. They demonstrate the system with a Leica DM RXA robotic microscope equipped with 10×, 20×, and 40× objectives; a 3CCD video camera; and a Matrox Meteor PCI frame grabber. They use the Leica Logical and Autofocus DLL-APIs without any modifications of these algorithms. Therefore, the work does not cover those aspects related to digital image quality such as focusing, illumination corrections, etc. The system is slow, since for a histological tissue sample of 1 cm² at 40× the construction of the digital image takes approximately 180 minutes. This system was improved by Della Mea et al. (2005, 2009). They present the eSlide system, which includes an autofocus algorithm and can be run on Windows as well as Mac OS X. The digital image is stored together with metadata which include patient data and technical information related to the image features. The visualization module can also be used as a web applet. The acquisition time is not improved and the virtual slides are still composed of patched lower magnification images.

Molnar et al. (2003) present a VM system applied to an Axioplan 2 MOT (Carl Zeiss) microscope. The microscope functions (objectives, stage, focus, illumination, and filters) are not controlled by the VM system but from an application program of the optical microscope (OM). During the scanning process, autofocusing was done using Brenner's algorithm only at each third to fifth field of view. All the images were compressed in JPEG format and stored in the corresponding position. To reduce storage space, a threshold filter was used to store only those frames with image content. The problem with this method is that there may be some loss of regions of interest (ROI). Moreover, no software mosaic alignment was used, and this can produce misalignment errors of about 0.5 mm. The scanning time is reduced more than four times with respect to the previous system, that is, it takes about 40 minutes to scan a tissue sample of 1 cm² at 40×. They evaluated the use of digital slides and the VM system against OM on routine gastrointestinal biopsy specimens. They found an average of 95.5% diagnostic agreement between VM and OM. The study was carried out with only two pathologists. Molnar et al. (2009) demonstrate the same VM system with a similar study but using a Mirax Scan digital slide scanner. The scanning time was up to 10 times faster and the diagnostic agreement between VM and OM was also improved by up to 1%. The reasons for discordance were image quality, interpretation differences, and insufficient clinical information. The problem with misalignment and possible loss of ROIs is still not solved.

Another VM system, called ReplaySuite, is presented by Costello et al. (2003) and Johnston et al.

TABLE 1. Main characteristics of whole slide scanning systems

                                            Scanners      Microscopes
CCD RGB                                     Yes           Yes
CCD monochromatic                           No            Yes
Resolution (μm/pixel), 20× objective        0.47          0.37
Resolution (μm/pixel), 40× objective        0.23          0.185
2.5×, 10× objective lens                    No            Yes
20×, 40× objective lens                     Yes           Yes
60×–63× objective lens                      No            Yes
Bright field digitization                   Yes           Yes
Fluorescence field digitization             No            Yes
Digitization speed at 20× (*) (mm²/s)       1.16          0.36
Digitization speed at 40× (*) (mm²/s)       0.28          0.09
JPEG compression method                     Yes           Yes
JPEG2000 compression method                 No            Yes
Compressed file size at 40×                 1.5–2 GB      1.5–2 GB

(*) = Average calculated on three motorized microscopes and three scanners. Microscopes: Olympus SIS .Slide, Applied Imaging Ariol, and Lifespan Alias. Scanners: Hamamatsu C9600 NanoZoomer, Aperio ScanScope T2, and Zeiss Mirax Scan.

(2005). The ReplaySuite is an online software tool that presents archived virtual slide examinations to pathologists in an accessible, video-like format. Delivered through a customized web browser, it utilizes PHP to interact with a remote database and retrieve data describing virtual slide examinations. Images were saved using the JPEG format at 10% compression. They do not describe any aspect of the digitization process nor the virtual slide composition.

The system described in this article is based on a motorized microscope with WSI technology. This system may be included within the category of third generation telepathology systems with a rapid and hybrid virtual slide processor. The speed with a 20× objective is 0.31 mm²/s, which is on the order of the above mentioned WSI systems. The system controls all microscope functions involved in the image digitization and copes with misalignment problems by means of a stitching algorithm as well as geometric corrections. The digital slide is not compressed, to avoid loss of information and to allow further processing.

Motorized microscopes are made of a set of lenses which, when properly aligned, magnify the light coming from the sample onto the eye or another light detector. For focusing in the proper focal plane, the slide is often held by a motorized platform controlled via software. Besides motorized microscope devices, another solution for whole slide scanning is provided by scanners. Scanners capture the light coming from the glass slide holding the sample through a co-aligned array of photodetectors. Table 1 summarizes the main features of these devices. Motorized microscopes are more flexible and customizable than scanners. They have the same functionality as traditional microscopes, but feature motorized components. Thus they have a wide range of objective lenses, that is, 2.5×, 5×, 10×, 20×, 40×, and 60×, and they are able to digitize in both bright and fluorescence fields. This versatility may help different diagnostic processes. A quantitative quality assessment of WSI provided by a motorized microscope and a scanner was undertaken by the authors in Redondo et al. (2012a). The study concluded that the image quality of both devices is suitable for clinical, educational, and research


purposes. Moreover, motorized microscopes are more suitable for applications where objective lenses other than 20× and 40× are required, as well as for fluorescence field applications. Since motorized microscopes are more flexible and customizable than scanners, it is possible to improve focusing and contrast, and therefore image quality.

It is considered that a fully automated system should have at least these three components:

1. Motorized stage: The part where the slide is placed. Motors move the stage along the three spatial axes, translating the camera's field of view (X and Y axes) or the focus (Z axis). The most important features of a stage are its accuracy and repeatability, as well as its speed. The first two have a deep impact on the digitization, whereas the third one simply makes the digitization process faster or slower.

2. Illumination system: This system depends on the tissue types that will be digitized. When using brightfield samples, halogen bulbs have traditionally been used, although LED systems are becoming increasingly popular. LED systems have a great advantage: they can change light intensity or wavelength by simply adjusting their components (i.e., an RGB LED lamp may produce red light when switching off the green and blue LEDs, or white light when switching all of them on). LED systems also need less energy to work, thus emitting less heat than halogen bulbs. When using fluorescence samples, the light source should be used together with a filter cube. A filter cube is a fluorescence filter set containing three essential filters: excitation filter, dichroic mirror, and barrier or emission filter (Bueno et al., 2011). The illumination system is more complicated in this case, since it may require an additional lamp and an optic fiber to bring the light to the filter cube.

3. Digital camera: Required for acquiring the tiles that will compose the final image. This is arguably the most variable component of the system, and it is also very dependent on the tissue type that will be digitized. The most important features are the camera sensor (whether it is sensitive to color or not) and its resolution, which will also determine the scale of the image (typically in μm/pixel). Analysis of fluorescence images requires increasing the exposure time, therefore a cooling system is desirable, since the images may be noisy if the sensor is hot. It is worth mentioning that monochrome cameras are the preferred method of acquisition for fluorescence microscopy, since color cameras limit the spectral range of the emitted light. This is because, when a color camera is used to acquire fluorescence images, the emitted light must pass through two sets of dichroic mirrors and filters (filter cube and camera), Weber and Menko (2005). Moreover, monochrome cameras can achieve higher spatial resolution than color cameras and have increased sensitivity.

A WSI system is composed of two different though equally important parts: hardware and software. The hardware determines many of the critical features of the system, such as accuracy, repeatability, or image resolution. The software is in charge of controlling the


hardware and providing the functionality to the user. Moreover, it may also include some advanced tools, such as autofocus or image processing algorithms. It is important to ensure both hardware and software quality: the best hardware might be a waste of money if the software does not take advantage of it, and, in the same fashion, low-quality hardware will produce poor results no matter how well developed and tested the software is.

To date, however, the tools for processing and analyzing digital microscopic images are still poorly developed and validated in clinical environments. Therefore, the adoption of WSI systems in pathology has been limited due to significant challenges, including the management of microscopic image data. It is necessary to investigate high-performance computational infrastructures, as well as tools to efficiently process these extremely high-resolution images together with the amount of clinical data associated with them (Bueno et al., 2009, 2012; Donovan et al., 2009; García-Rojo et al., 2012; Lezoray et al., 2011).

Prompted by the need to solve the above mentioned problems in motorized microscopes, and making use of their capability to integrate hardware and software, a whole slide microscopy system has been designed and implemented. The system is able to provide whole slide digital images both in brightfield and fluorescence. It is possible to keep control of the digitization process at all times. To this end, different hardware components have been integrated and software able to solve different problems of the digitization process has been implemented. The software can cope with illumination and geometric distortion problems during the slide scanning, as well as with tile stitching. This article presents the implemented system, the quality assessment of the digital images using metrics based on sharpness, contrast, and focus, and the integrated image processing algorithms for different applications, such as cancer research by means of prostate biopsies and lung cytologies, as well as neurodegenerative diseases by means of brain autopsies. We hope that our experiences and insights can be useful to researchers, developers, and integrators in this field. Thus, section "Hardware and Software Setup" describes different aspects of the hardware and software of the system. Section "Results" shows the implemented functions and the results obtained when processing the above mentioned samples. Finally, the main conclusions are outlined in section "Conclusions."

HARDWARE AND SOFTWARE SETUP

We have built a system based on a DM6000B Leica microscope along with a Köhler transmitted light source, a Leica EL6000 incident light source for fluorescence, a filter cube with DAPI/Texas Red/FITC dichroic, excitation and emission filters, a Leica DFC 300FX digital camera (1.3 megapixels, color, 24 bpp), a Märzhäuser motorized stage, and six objective lenses of 2.5×, 5×, 10×, 20×, 40×, and 63× magnification. The mechanical accuracy of the motorized scanning stage for the X/Y and Z directions is 0.5 μm. Additionally, we have attached the following components:

Fig. 1. Motorized microscope. Whole slide imaging system hardware based on a DM6000B Leica microscope, motorized stage, six objective lenses (2.5×, 5×, 10×, 20×, 40×, and 63×), incident light source and filter cube for fluorescence, and a color Leica DFC 300FX digital camera. Additionally, a monochrome Retiga SRV digital camera, LED transmitted illumination, and an active stabilization table have been added to the hardware. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

• LED transmitted illumination (RGB).
• Retiga SRV digital camera (1.3 megapixels, monochromatic, 12 bpp).
• Active stabilization table (Stable TS-150).

The system has also been covered with a polymethylmethacrylate (PMMA) box with black stained walls to avoid distortions due to external illumination. The hardware is connected to a computer equipped with an OASIS Blue controller card, which is responsible for managing the movement of the motorized stage. As a result of the integration of components from different manufacturers, the system (see Figure 1) is heterogeneous, and the original software solution provided by the manufacturer of the base system cannot use the other components. For this reason, it has been necessary to develop specific software that exploits the full capabilities of this new system and also considers the possibility that in the future any of the current components may be replaced.

To control this system, we have developed software which takes advantage of all the hardware features. This software has been designed using software engineering patterns, such as Singleton and Observer (Pressman, 2002). The Singleton scheme is used in


Fig. 2. Four-layer structure. The layer design consists of four parts: Hardware, Logic components, Scanner, and Graphical User Interface (GUI).

order to centralize multiple connections between objects, preventing more than one instance of an object from running at the same time. The Observer scheme is used in order to notify the user of every hardware change. Apart from patterns, the application has been designed using layer abstraction, so that any new improvement can be easily added, improving scalability. This layer design consists of four parts, as shown in Figure 2:

• Hardware: physical components of the system (stage, camera, LED, etc.).
• Logic components: they represent a software abstraction of the hardware. They use the manufacturers' Software Development Kits (SDKs) in order to communicate with the hardware.
• Scanner: coordinates all the logic components in order to perform tasks that involve more than one component (e.g., autofocus requires the stage to move along the Z axis while the camera acquires images that are evaluated). It is implemented using the Singleton pattern (see the sketch after this list).
• GUI: allows the user to communicate with the application, and vice versa.
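To make the design concrete, the sketch below shows how the Singleton and Observer patterns fit together for the Scanner layer. The class names follow the diagram in Figure 3, but the method names and bodies are hypothetical simplifications, not the actual implementation, which is written against the manufacturers' SDKs.

```python
# Illustrative sketch of the Singleton/Observer structure described above.
# Class names follow Figure 3; method names and bodies are hypothetical.

class Observer:
    """Interface for objects that want to be notified of hardware changes."""
    def update(self, event: str, data=None):
        raise NotImplementedError


class Scanner:
    """Centralizes access to the hardware (Singleton): only one instance
    may talk to the stage, cameras and LED controller at a time."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._observers = []
        return cls._instance

    def attach(self, observer: Observer):
        self._observers.append(observer)

    def _notify(self, event: str, data=None):
        # Observer pattern: push every hardware change to the GUI and
        # any other registered listener.
        for obs in self._observers:
            obs.update(event, data)

    def move_stage(self, x_um: float, y_um: float):
        # ... the stage controller SDK would be called here ...
        self._notify("stage_moved", (x_um, y_um))


class StatusBar(Observer):
    """Toy GUI element that displays the last reported stage position."""
    def update(self, event, data=None):
        if event == "stage_moved":
            print(f"Stage position: x={data[0]} um, y={data[1]} um")


# Usage: both names refer to the same Scanner instance.
scanner = Scanner()
scanner.attach(StatusBar())
assert scanner is Scanner()
scanner.move_stage(1200.0, 450.0)
```

The Singleton guarantees that all hardware access is funneled through one object, while the observer list lets the GUI react to stage or camera events without the hardware layer knowing anything about the interface.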

Apart from layer abstraction, we have also designed the system with camera abstraction. As mentioned before, our system features two different cameras, and we wanted them to be usable interchangeably, so we have designed a common interface (a sketch of this interface is shown after the class list below). This camera abstraction also increases system scalability, because if any of the cameras is changed in the future, the new one will just need to implement this interface in order to be fully functional. Furthermore, the Leica DFC camera is accessed through a TWAIN driver, TWAIN Group (2000), so the logic object that controls it could potentially manage any other camera that uses this standard. Figure 3 shows the class diagram of the system. Here is a brief explanation of the role of each class:

• SadaimDlg: Main GUI window.
• AcquireDlg: GUI dialog to fill in digitization details.
• CameraDlg: GUI dialog for camera swapping.
• Observer: Implementation of the Observer design pattern to notify both the GUI and the Singleton object of hardware changes.
• Scanner: Centralizes the access to the hardware and performs tasks that involve the use of more than one hardware component at the same time.


Fig. 3. Class diagram composed of: main GUI window, GUI dialogs for digitization and for camera exchange, Observer, Scanner, logic abstraction for all hardware, software control, and interface to both cameras.

• MicControl: Logic abstraction of all the microscope hardware (stage, objective turret, and filter turret).
• LedControl: Logic abstraction of the LED illumination control.
• ImControl: Controls all the operations related to image storage, display, and processing.
• Camera: Common interface to both cameras.
• TwainCamera: Logic abstraction of the Leica color camera.
• RetigaCamera: Logic abstraction of the Retiga monochrome camera.

Finally, it is worth describing the design of the GUI. We have opted for providing the user with a clean and useful interface, where most of the window area is used to display the images captured by the camera. The most relevant information, such as stage position, camera in use, or exposure time, is displayed on the status bar. The most used functions (stage movement, objective/filter change, autofocus, and digitization) are placed on the sidebar. Some other functions that are not so frequently used (such as rotation calibration or single image export) are accessible through the main menu. Figure 4 shows the main window of the application.
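As an illustration of the camera abstraction, the following sketch defines a common interface with two stub implementations. The class names match the diagram above; the method signatures, the 1376 × 1024 frame size, and the stub bodies are illustrative assumptions, since the real classes wrap the TWAIN driver and the Retiga SDK.

```python
# Sketch of the common camera interface; TwainCamera and RetigaCamera are
# stubs standing in for the real SDK-backed implementations.
from abc import ABC, abstractmethod
import numpy as np


class Camera(ABC):
    """Common interface so the rest of the software can use either camera."""

    @abstractmethod
    def set_exposure(self, milliseconds: float) -> None: ...

    @abstractmethod
    def acquire(self) -> np.ndarray:
        """Return one frame (H x W x 3 for color, H x W for monochrome)."""


class TwainCamera(Camera):
    def set_exposure(self, milliseconds: float) -> None:
        pass  # forwarded to the TWAIN driver in the real system

    def acquire(self) -> np.ndarray:
        # Placeholder: a real call would return a 24 bpp color frame.
        return np.zeros((1024, 1376, 3), dtype=np.uint8)


class RetigaCamera(Camera):
    def set_exposure(self, milliseconds: float) -> None:
        pass  # forwarded to the Retiga SDK in the real system

    def acquire(self) -> np.ndarray:
        # Placeholder: a real call would return a 12 bpp monochrome frame.
        return np.zeros((1024, 1376), dtype=np.uint16)


def digitize_tile(camera: Camera) -> np.ndarray:
    """Any code written against Camera works with both devices."""
    camera.set_exposure(20.0)
    return camera.acquire()
```

Any acquisition or digitization routine written against Camera then works unchanged with either device, which is what allows the cameras to be swapped from the CameraDlg dialog.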

The quality of the images acquired by the different cameras was analyzed using quantitative metrics based on physical parameters and requiring no reference image. The metrics have been applied to 150 images, that is, 10 images per tissue type (brain autopsies, prostate biopsies, and lung cytologies) at five different magnifications: 2.5×, 10×, 20×, 40×, and 63×. Figure 5 compares the images taken with both cameras for the different samples. The measurements used for image quality assessment were sharpness, contrast, and focus. The metrics used are briefly described below and the results obtained are illustrated in section "Hardware Integration."

• Sharpness (CPBD): Sharpness is measured by means of the Cumulative Probability of Blur Detection (CPBD). The CPBD allows objective assessment of sharpness without a reference image. It is based on the cumulative probability of blur detection (Ferzli and Karam, 2009; Narvekar and Karam, 2011) and corresponds to the percentage of edges at which blur cannot be detected. The CPBD takes smaller values as the blur of an image increases, and therefore it increases when the sharpness of the image increases.


Fig. 4. Graphical User Interface of the WSI system. The GUI is a virtual slide viewer that allows viewing of the large digital slide images. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

• Contrast (FContrast): Nanda and Cutler (2001) proposed a measure of focus from the evaluation of image contrast. This is calculated as the difference in absolute value of each image pixel with respect to its eight neighboring pixels, Lorenzo et al. (2007). The final value is the sum of the values obtained for each image pixel, according to the following equation:

F_{Contrast} = \sum_x \sum_y C(x, y)    (1)

where the contrast C(x, y) for each pixel of the gray level image I(x, y) is obtained by:

C(x, y) = \sum_{i=x-1}^{x+1} \sum_{j=y-1}^{y+1} |I(x, y) - I(i, j)|    (2)

• Focus (FSML): Focus is measured by the sum of modified Laplacian, F_{SML}. This metric is based on the Laplacian linear differential operator, \nabla^2 I(x, y), by which the sharpness of the image I(x, y) is assessed, Nayar and Nakagawa (1994). The metric F_{SML} sums the absolute values of the image convolution with the Laplacian operator, according to the following equations:

\nabla^2 I(x, y) = \frac{\partial^2 I}{\partial x^2} + \frac{\partial^2 I}{\partial y^2}    (3)

F_{SML} = \sum_x \sum_y \left( |L_x(x, y)| + |L_y(x, y)| \right)    (4)
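As a reference for how these measures can be computed, below is a minimal NumPy sketch of the contrast measure of Eqs. (1)–(2) and of a discrete sum of modified Laplacian in the spirit of Eqs. (3)–(4), using the standard modified-Laplacian differences of Nayar and Nakagawa. It is illustrative only: border handling is simplified, the CPBD sharpness metric is not reproduced, and no normalization to a [0, 1] range (as used when comparing images) is applied.

```python
# Minimal sketches of the contrast (Eqs. 1-2) and sum-of-modified-Laplacian
# (Eqs. 3-4) focus measures for a grayscale image.
import numpy as np


def f_contrast(img: np.ndarray) -> float:
    """Sum over all pixels of the absolute differences with the 8 neighbors."""
    I = img.astype(np.float64)
    c = np.zeros_like(I)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            c += np.abs(I - np.roll(I, (di, dj), axis=(0, 1)))
    return float(c[1:-1, 1:-1].sum())   # drop the wrapped-around border


def f_sml(img: np.ndarray) -> float:
    """Sum of modified Laplacian: |Lx| + |Ly| accumulated over the image."""
    I = img.astype(np.float64)
    lx = np.abs(2 * I[1:-1, 1:-1] - I[1:-1, :-2] - I[1:-1, 2:])
    ly = np.abs(2 * I[1:-1, 1:-1] - I[:-2, 1:-1] - I[2:, 1:-1])
    return float((lx + ly).sum())
```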

The SML metric uses second-order derivatives and therefore works like a high-pass filter in the frequency domain. This method has very high accuracy but is very sensitive to noise. As with the previous metrics, values close to 1 indicate a better quality image, Batten (2000); Ferzli and Karam (2009).

RESULTS

Hardware Integration

As mentioned above, our system features two different digital cameras, as well as LED illumination. Our aim is to work with either camera in any situation, so that pathologists do not have to care about particular camera details, as they will simply get 24 bpp color images (uncompressed RAW format) when digitizing a slide. These 24 bpp images are obtained directly from the Leica camera, simply by using white light as the source illumination.


Fig. 5. Samples of WSI from brain autopsy (first row), prostate biopsy (second row) and lung cytology (third row) scanned with Leica (first column) and Retiga (second column) cameras. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

When using the monochrome Retiga SRV camera, it is necessary to acquire three different images, using one light channel (red, green, or blue) at a time. Then, these 12 bpp monochrome images are shifted to 8 bpp and finally combined into the final 24 bpp image. The RGB LED illumination has been used to acquire each color channel in brightfield, whereas the filter cube is used for fluorescence. The color camera is on average 1.18 times faster, but the images composed from the monochrome camera have better quality, so the decision to use one or the other depends on the pathologist's needs.
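The channel composition step is simple enough to show as a short sketch. Here `acquire_with_led` is a hypothetical callable standing in for "switch the RGB LED to one channel and grab a 12 bpp frame"; the bit shift and channel stacking are the operations described above.

```python
# Compose a 24 bpp RGB tile from three single-channel monochrome captures.
import numpy as np


def compose_rgb_tile(acquire_with_led) -> np.ndarray:
    channels = []
    for color in ("red", "green", "blue"):
        raw12 = acquire_with_led(color)        # 12 bpp monochrome frame (uint16)
        raw8 = (raw12 >> 4).astype(np.uint8)   # shift 12 bpp down to 8 bpp
        channels.append(raw8)
    return np.dstack(channels)                 # H x W x 3, 24 bpp RGB
```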

Figure 6 shows a comparison of the image quality obtained with the three metrics (sharpness, contrast, and focus) applied to the 10 whole slide digital images acquired from each of the three tissue types (brain autopsy, prostate biopsy, and lung cytology). In this study the cytology images were captured from lung cytologies stained with Papanicolaou and calretinin, the latter being a weaker staining. Lung cytologies are mainly liquid samples acquired with fine needle aspiration, and they are the thinnest case among the studied ones. They may have overlapping cells, cells mixed with blood cells, or even an absence of cells. This means that some images will have low information content and be prone to focusing errors and, therefore, low sharpness and contrast. The other analyzed samples were prostate biopsies and brain autopsies, whose density is similar to that of biopsy images. They are thicker than cytologic slides and generally have well-defined structures.

Complementary metrics may be applied to measure image quality in terms of color and stain. These properties require objective metrics based on perceptual features, together with subjective psychophysical tests, since there is no unanimous consensus on quality metrics, Redondo et al. (2012a).


Fig. 6. Image quality provided by the Retiga and Leica cameras for different tissue types digitized at 20×. Image quality is given by three metrics: sharpness (first row), contrast (second row), and focus (third row), applied to autopsy (first column), biopsy (second column), and cytology (third column) digital image samples. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

The results shown in Figure 6 reveal that the performance of the Retiga camera is superior to that of the Leica camera for all metrics. This allows us to conclude that, in terms of sharpness and contrast, the images acquired by the Retiga camera have a higher quality than those acquired by the Leica camera.

Software Integration

1. Autofocus: To ensure proper image quality, the acquired tiles must be focused. When focusing the image, it is necessary to have some criteria to move the stage along the Z axis and to evaluate the focus of the image (Kayser et al., 2008; Xie et al., 2007). The premise behind most of these criteria is that an unfocused image results from the convolution of the image with a certain point-spread function (PSF), which usually produces a decrease in the high frequencies of the image. This result can also be seen under the assumption that well-focused images contain more information and detail (edges) than unfocused images, Yeo et al. (1993).

Sixteen focusing algorithms were analyzed for biopsy and cytology microscopy images by the authors, Redondo et al. (2012b). The algorithms were evaluated in terms of accuracy, computational cost, and focusing curve shape. According to the results, most of the methods exhibit a low accuracy error. However, in terms of computation time the fastest algorithm was the threshold pixel count (TH). TH is based on measurements of peak height and valley depth, Groen et al. (1985). The methods based on correlation measures and image contrast, particularly Vollath's F5 (VOL5), Vollath (1988), and the variance (VAR), Yeo et al. (1993), were the most accurate. Thus, our autofocus algorithm, called Smart Focus, applies the fastest algorithm, TH, as a coarse search, and then the VOL5 method is used to perform a finer search. VOL5 works well for both biopsy and cytology microscopy images.

In order to find the stage position where the image has the highest focus measure, an efficient search algorithm should be chosen. We apply a search algorithm based on a straightforward search along the whole Z-axis range. The algorithm performs as follows. It starts by moving the stage to an initial position (the first focus position) and establishing lower and upper limits around it. The larger the magnification, the smaller the limit interval. After that, an image is acquired and evaluated. Then, the stage moves upward and another image is taken and evaluated. If the autofocus function


Fig. 7. Results of the autofocus algorithm on a biopsy sample, using 10× magnification and the Leica camera. The time (in seconds) is indicated in the upper-left corner of each image. The algorithm was able to perform autofocus within 7.24 seconds. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

value obtained for the second image is higher than the value for the previous image, then the focus position is set to the latter. The process continues until the algorithm obtains smaller evaluation values two consecutive times, or until one of the limits is reached. When one of these conditions is satisfied, the stage goes back to the start position and repeats the process moving in the opposite direction. When the positions have been evaluated in both directions, the process is repeated in a narrower area around the best focus position, moving the stage in smaller steps, that is, 1/20 of the interval length. The algorithm provides fairly good results with a reasonable speed. It is important that the focus position lies between the established movement limits; otherwise it will never be reached. This methodology has been chosen because it is the one that most reliably gives the best focus position, since all candidate focus positions are evaluated. However, there are faster methods based on: (i) iterative subdivisions of the position space, or (ii) prediction, which tries to mathematically approximate the focus position by polynomial interpolation. Some examples are dichotomous search and the Fibonacci sequence, Della Mea et al. (2005).

The largest autofocus error is produced in tissue with liquid parts, such as lung cytologies. The autofocus has been analyzed with the 150 samples at 10×, 20×, 40×, and 63× magnifications. The algorithm was able to perform autofocus with an average time of 11.89 seconds for images at 63×. The maximum time was 15.11 seconds and the minimum was 10.78 seconds for tissue images at this magnification. The autofocus time usually decreased for lower magnifications. The average times for images digitized at 40×, 20×, and 10× were 8.97, 7.12, and 6.19 seconds, respectively. The autofocus algorithm gave false results in only five cases: three lung cytologies with a weak stain and two cases with a high amount of blood within the pleural liquid. The position of the stage for this test was the middle position of the maximum displacement along the Z axis; this was considered the (X, Y, 0) position. In order to reduce computational time when digitizing a WSI, the focus is computed every 2 mm², that is, every 12 tiles at 20×. However, when the slide has an irregular surface, focus is computed every 1 mm². Choosing a start position near the focus position makes the algorithm faster. This is the case when digitizing a WSI: the first tile takes longer than the rest of the tiles, for which the starting point is the previous focus position. The autofocusing time is similar for both cameras, because the autofocus with the Retiga camera is done with monochrome images, so the acquisition time is nearly the same. Figure 7 illustrates the autofocus process and the result of the algorithm for the Leica camera.
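The search just described can be summarized in the following hedged sketch. `move_z`, `grab`, `coarse`, and `fine` are hypothetical callables (stage movement, image capture, and focus measures such as TH and VOL5); step sizes, limits, and the stopping rule are simplified, so this illustrates the coarse-to-fine strategy rather than the exact implementation.

```python
# Hedged sketch of the coarse-to-fine Z search used for autofocus.

def search_direction(z0, step, limit, move_z, grab, focus_measure):
    """Walk from z0 in one direction until the focus value drops twice."""
    best_z, best_val, drops = z0, -float("inf"), 0
    z = z0
    while (step > 0 and z <= limit) or (step < 0 and z >= limit):
        move_z(z)
        val = focus_measure(grab())
        if val > best_val:
            best_z, best_val, drops = z, val, 0
        else:
            drops += 1
            if drops == 2:          # two consecutive decreases: stop
                break
        z += step
    return best_z, best_val


def autofocus(z_start, half_range, step, move_z, grab, coarse, fine):
    lo, hi = z_start - half_range, z_start + half_range
    # Coarse pass with a cheap measure (e.g., TH), scanning upward and then
    # downward from the start position.
    up = search_direction(z_start, +step, hi, move_z, grab, coarse)
    down = search_direction(z_start, -step, lo, move_z, grab, coarse)
    z_best = max(up, down, key=lambda t: t[1])[0]
    # Fine pass with a more accurate measure (e.g., VOL5) in a narrower
    # interval, moving the stage in steps of 1/20 of the interval length.
    fine_step = (2 * half_range) / 20.0
    up = search_direction(z_best, +fine_step, z_best + half_range / 2,
                          move_z, grab, fine)
    down = search_direction(z_best, -fine_step, z_best - half_range / 2,
                            move_z, grab, fine)
    z_best = max(up, down, key=lambda t: t[1])[0]
    move_z(z_best)
    return z_best
```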


2. Tile Stitching: As mentioned above in the Introduction, prior to any other task, the tiles have to be merged to create the final or mosaic image. There are two methods which may be adopted:

• Tiling: It consists of simply joining all the tiles, considering nothing but the position of each tile. This technique is fast, but its results are only optimal when the acquisition process is perfectly aligned and the field of view (FOV) of the camera is calibrated so that it is exactly the same as the movement of the stage. However, there is always an overlapping region given by the stage displacement.

• Stitching: It consists of processing the overlapping region between tiles. It is slower than tiling, because some calculations need to be done, but it is also much more robust, since any error in the stage movement can be corrected by modifying the overlap between each pair of images, adjusting it to produce a perfectly aligned final image.

We have developed the stitching method by means of a rigid registration algorithm (Aguilar et al., 2011). Results of tiling and of the developed stitching algorithm are shown in Figure 8, where it is possible to compare the union of tiles by means of both methods. It is possible to see how tiling produces duplicated regions due to the overlapping sections and how this problem is corrected with the stitching algorithm. Thus, once all tiles have been acquired, they should be stitched and combined into a single image. Depending on the magnification of the objective used in the digitization, and also on the size of the region digitized, the whole image size may be huge (several GB). The developed stitching algorithm is able to compose images of any size, regardless of the physical memory of the computer on which it is running. This algorithm uses the overlapping part of each pair of tiles to compute the best matching position both in rows and columns (a generic sketch of this kind of overlap-based matching is shown after Figure 8). Finally, it builds the merged image one line at a time, without memory problems when stitching the tiles. Every image in this article (except those which only represent one tile) has been merged using this algorithm.

3. Geometric Corrections: After several focus tests, both manual and automatic, it was observed that sometimes the digitization produced a small geometric distortion, including rotation errors. This is mainly due to: (i) stage misalignment, that is, the objectives' central axes and the stage are not perfectly perpendicular, and (ii) the cameras attached to the microscope not being perfectly fixed. Since any vibration could move the cameras (and the mirror system that directs light to one camera or the other is operated manually), a fast calibration system is needed that can calculate the camera misalignment and correct it when it appears. The correction algorithm that we have developed is fast and simple, although it requires some user interaction. To calibrate the rotation, the user has to choose a representative point of the image (x_1, y_1). Then, the stage is moved along the X axis, and the same point is clicked again by the user (since the stage has moved, that point will now have coordinates (x_2, y_2)). If the two points are perfectly aligned, then y_1 = y_2, since the stage was only moved along the X axis. However, if the points are not horizontally aligned, it is easy to compute the rotation angle \alpha using the coordinates of the reference points:

\alpha = \arctan\left( \frac{y_2 - y_1}{x_2 - x_1} \right)    (5)

Fig. 8. Comparison of tiling (top) and stitching (bottom). The tiling shows overlapping regions which are corrected in the stitching process by means of an image rigid registration technique. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]
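Referring to the tiling/stitching comparison in Figure 8, the sketch below illustrates the general idea of estimating the offset between two neighboring tiles from their nominal overlap strips. It is a generic normalized cross-correlation search over a small range of vertical shifts on grayscale tiles (the horizontal offset can be handled analogously); the system itself uses the rigid registration algorithm of Aguilar et al. (2011), which is not reproduced here.

```python
# Generic overlap-based alignment sketch for two neighboring grayscale tiles.
# `left` and `right` are 2D arrays; `overlap_px` is the nominal overlap width.
import numpy as np


def vertical_offset(left: np.ndarray, right: np.ndarray,
                    overlap_px: int, search: int = 10) -> int:
    """Best vertical shift (in pixels) of `right` relative to `left`,
    estimated by normalized cross-correlation of the overlap strips."""
    a = left[:, -overlap_px:].astype(np.float64)   # right edge of the left tile
    b = right[:, :overlap_px].astype(np.float64)   # left edge of the right tile
    h = a.shape[0]
    best_dy, best_score = 0, -np.inf
    for dy in range(-search, search + 1):
        # Rows of the two strips that overlap for this candidate shift.
        a_rows = a[max(0, dy):h + min(0, dy)]
        b_rows = b[max(0, -dy):h + min(0, -dy)]
        a0, b0 = a_rows - a_rows.mean(), b_rows - b_rows.mean()
        denom = np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())
        score = (a0 * b0).sum() / denom if denom > 0 else 0.0
        if score > best_score:
            best_score, best_dy = score, dy
    return best_dy
```

The estimated offset is then used to adjust the overlap of each pair of tiles before the mosaic is written out line by line.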

Once this rotation problem was solved, we started to work on correcting the error due to possible stage inclination. This geometric distortion is a problem when the tiles are to be merged, since structures in one of the images are larger than in the other, and also slightly displaced from what would be their correct position. The displacement was neither regular nor proportional to the position of the camera, so a translation/rotation problem was discarded at this point. Rather, we noticed that the tiles seemed to be almost perfectly merged around the middle of the images, but as we moved toward the corners of the tiles, the quality of the mosaic dropped drastically. We observed that structures in one of the tiles tended to lie below their expected position in the upper area of the image, whereas they tended to lie above it in the lower area of the image. That is what drove us to realize that what we had to correct was a perspective problem.

To correct these images, we use warp mapping. Considering that the central point of the slide is the one whose position is well aligned with the camera, we deform the rectangular tile to make it trapezium-shaped. This trapezium will look larger than the original tile on one of the sides of the image and smaller on the opposite one, whereas the size in the middle remains the same. To perform this perspective correction, we use a warp matrix, Eq. (6), with the coefficients estimated for our setup given in Eq. (7), and then we map each pixel in the destination image (x_d, y_d) to a pixel in the source image (x_s, y_s) through Eqs. (8) and (9). Since the mapping requires sub-pixel precision, we use bilinear interpolation to compute each pixel value.

\begin{bmatrix} x \\ y \\ w \end{bmatrix} =
\begin{bmatrix} a_0 & a_1 & a_2 \\ b_0 & b_1 & b_2 \\ c_0 & c_1 & c_2 \end{bmatrix}
\cdot
\begin{bmatrix} x_d \\ y_d \\ 1 \end{bmatrix}    (6)


Fig. 9. Impact of geometric corrections in the merge area of two images. Detailed example with (a) No correction, (b) Rotation correction only, and (c) Rotation and perspective corrections. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

with

\begin{bmatrix} a_0 & a_1 & a_2 \\ b_0 & b_1 & b_2 \\ c_0 & c_1 & c_2 \end{bmatrix} =
\begin{bmatrix} 0.998077 & 0 & 0 \\ 0.000718 & 0.999038 & 0.500481 \\ 0.000001 & 0 & 1 \end{bmatrix}    (7)

x_s = \frac{x}{w} = \frac{a_0 x_d + a_1 y_d + a_2}{c_0 x_d + c_1 y_d + c_2}    (8)

y_s = \frac{y}{w} = \frac{b_0 x_d + b_1 y_d + b_2}{c_0 x_d + c_1 y_d + c_2}    (9)
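The inverse mapping of Eqs. (6)–(9) with bilinear interpolation can be written compactly; below is a minimal NumPy sketch for a single-channel tile. It is an illustration, not the production code: out-of-range source coordinates are simply clipped, and color tiles would be warped channel by channel.

```python
# Inverse perspective warp (Eqs. 6-9) with bilinear interpolation.
import numpy as np


def warp_perspective(src: np.ndarray, W: np.ndarray) -> np.ndarray:
    h, w = src.shape
    yd, xd = np.mgrid[0:h, 0:w]                            # destination coords
    denom = W[2, 0] * xd + W[2, 1] * yd + W[2, 2]
    xs = (W[0, 0] * xd + W[0, 1] * yd + W[0, 2]) / denom   # Eq. (8)
    ys = (W[1, 0] * xd + W[1, 1] * yd + W[1, 2]) / denom   # Eq. (9)
    # Bilinear interpolation at the (non-integer) source positions.
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx, fy = np.clip(xs - x0, 0, 1), np.clip(ys - y0, 0, 1)
    I = src.astype(np.float64)
    return (I[y0, x0] * (1 - fx) * (1 - fy) + I[y0, x0 + 1] * fx * (1 - fy) +
            I[y0 + 1, x0] * (1 - fx) * fy + I[y0 + 1, x0 + 1] * fx * fy)


# Example warp matrix taken from Eq. (7):
W = np.array([[0.998077, 0.0,      0.0],
              [0.000718, 0.999038, 0.500481],
              [0.000001, 0.0,      1.0]])
```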

After testing the corrections with several images, we observed that the tiles could be effectively merged without a noticeable edge in the area where the images meet. Figure 9 illustrates the results of the correction process.

An additional source of problems in histological images is the mechanical process of cutting the tissue sections. The section thickness may vary slightly and the cover might not be parallel to the glass slide (Della Mea et al., 2005). One possible solution to this problem is to conduct the autofocus on each tile, though this may be computationally expensive for the complete WSI.

4. Illumination Corrections: Apart from the geometric corrections, it is important to carry out illumination correction. The illumination should be uniform across the acquired images, so that the background has the same color all over the image. It is very common in microscopes that the illumination is not uniform, depending on the type of light and the acquisition conditions. This non-uniform illumination has a great impact on the acquired images, since color information may be altered. In our case, we have LED illumination and a black, opaque case that covers the entire microscope, blocking all external light. We tried other illumination sources, such as transmitted light based on a mercury metal halide bulb with a liquid light guide and with Köhler light management, and the best results were obtained with the LED. We have also developed a background division method to make the illumination uniform.

Our illumination correction system consists of a background arithmetic division. First, we acquire a pattern image in a region of the slide where no tissue

is present. Then we divide each acquired image by the pattern image, rescaling the result of the division afterward. The result of that operation is an image whose color information has not been affected by the non-uniform illumination. To ensure better results, it is important to take clean pattern images. Our approach to that has been to use accumulative patterns. We take sixteen pattern images at slightly different positions of the slide, summing them up into another image. The result of the sum is another image with a bit depth four bits greater than that of the original images. Moreover, by using accumulation patterns we minimize the noise present on the slide, such as dust or scratches, and keep only the illumination information. Figures 10 and 11 show the illumination patterns used at 5× magnification with the Retiga and Leica cameras, respectively. Since we only move the stage while acquiring the accumulative pattern, the non-sensor-related noise present in the camera is present in all the sub-patterns, and so it is present in the accumulation pattern too. Rather than being a problem, this is helpful, because the noise that comes from the camera will also be removed when performing the illumination correction. Figure 12 shows the result of applying this illumination correction. Finally, it is worth mentioning that it is convenient to take the pattern image using the same slide that is to be digitized when high color fidelity is required, because even the thickness of the protective paraffin sheet may have some minor impact on the light that the camera receives, and thus on the color of the corrected image.
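A minimal sketch of this background-division correction is shown below for single-channel tiles. The pattern accumulation and the division/rescaling follow the description above; normalizing by the pattern mean and clipping to 8 bits are implementation choices of the sketch, not necessarily those of the system.

```python
# Flat-field (background division) illumination correction sketch.
import numpy as np


def build_pattern(pattern_images) -> np.ndarray:
    """Accumulate the sub-patterns (bit depth grows by 4 bits for 16 frames)."""
    acc = np.zeros(pattern_images[0].shape, dtype=np.float64)
    for img in pattern_images:
        acc += img
    return acc


def correct_illumination(tile: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    """Divide the tile by the (normalized) pattern and rescale to 8 bits."""
    flat = pattern / pattern.mean()                  # unit-mean background
    corrected = tile.astype(np.float64) / np.maximum(flat, 1e-6)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```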


Fig. 10. Illumination patterns used with Retiga camera acquisitions (5×).

Fig. 11. Illumination patterns used with Leica camera acquisitions (5×).

Fluorescence

Our system is capable of acquiring fluorescence images. To accomplish this task, it is necessary to use an incident light source, as well as filter cubes. Fluorescence slides contain tissue that has been stained with a fluorescent biomarker, which is only visible when illuminated with light of a specific wavelength. Depending on the tissue type, the biomarker, and the filter cube used, different exposure times are necessary, and in any case these times are usually much longer than brightfield ones.

Sometimes, tissue is stained with more than one biomarker. In these cases, it is necessary to use more than one filter cube, and therefore to acquire more than one image. Our approach here is to combine the images and merge them into an RGB image, loading one image into each channel, so that pathologists can distinguish between different structures using color information rather than looking at different images. Currently the system is being tested with samples stained with FISH fluorochromes. In this case, a DAPI filter is used to localize the nuclei and a double filter is used to localize the green and red signals which characterize the gene. Figure 13 shows the results after digitizing a region with nuclei using the DAPI filter. It is important to mention that fluorescence images do not require any illumination correction, though the

same geometric corrections as in brightfield are needed in order to compose the mosaic image correctly.

1. Image Processing: Finally, once the mosaic is finished, the images are ready to be examined by the pathologist. It is very helpful for pathologists to have some processing algorithms that assist them in their job. We have developed a few algorithms that work with different tissues and applications. These algorithms are focused on the automatic search of ROIs and are based on blob analysis. The algorithms may be divided into two categories:

• ROI detection at magnifications higher than 10×: Focused on biomarker characterization and


Fig. 12. Result of applying illumination correction to autopsy images. The example shows a mosaic comprising four tiles without illumination correction (a) and with illumination correction (b). [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

stained structure quantification. This is the case of the brain autopsies, where the objective is to detect and quantify brown stained structures. Thus, the image processing algorithm uses information from the hue and the ROI size.

• ROI detection at low magnification: Focused on those WSI where the tissue area is substantially smaller than the slide. Therefore, to avoid memory problems, it is preferred to scan the sample first at low magnification to detect the ROIs and then scan them at higher magnification. This is the case of cytologies and tissue microarrays (TMA).

Fig. 13. Image digitization in fluorescence using a DAPI filter. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

Fig. 14. ROI detection in brain autopsy WSI (3 mm² area, 0.74 μm/pixel, 2 × 2 tiles, 10×). The tissue example on the right is the original image and the tissue sample on the left is the processed image with ROI detection. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]


In brain autopsy images, we look for dark structures corresponding to the biomarker's response. The samples were stained using alpha-synuclein immunohistochemistry. The dark structures have a brown hue which is significantly different from all other areas. There are other dark structures which are purple, although they are usually smaller and rounder than the ones of interest. Thus the segmentation is made using color and size information. Accuracy increases with the magnification, varying from 70% to 80% of accurate detections and 15% to 20% of false positives at 5× magnification, to 95% of accurate detections and less than 3% false positives at 40× magnification. Figure 14 shows an example of the image processing carried out in autopsies.

The first step for ROI detection in cytology and TMA tissue slides consists of detecting the main tissues, that is, one or several tissue cores. In the case of cytology samples, when a dark stain is used, such as Papanicolaou, the tissue core can be easily distinguished from the background using color and shape information. By thresholding the image with color information, the core is segmented with fairly good accuracy. Furthermore, morphologic operations have also been used in order to improve blob compactness in cases where the tissue is not as colored as might be expected. This operation consists of a first dilation of the blobs to make them more compact, followed by an erosion in order to remove noise. Finally, a new dilation is performed to reconstruct the full blob. Figure 15 shows the results of the ROI detection algorithm applied to dark-stain cytology.
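As an illustration of this thresholding plus dilation–erosion–dilation sequence, the sketch below segments strongly stained regions of an RGB tile using a crude saturation threshold and SciPy's binary morphology. The color criterion, the threshold value, and the 9 × 9 structuring element are illustrative assumptions, not the parameters used in the system.

```python
# Color-threshold + morphology sketch for tissue core detection.
import numpy as np
from scipy import ndimage


def detect_core_mask(rgb: np.ndarray, sat_thresh: float = 0.25) -> np.ndarray:
    rgb = rgb.astype(np.float64) / 255.0
    # Crude saturation estimate: strongly stained tissue is far from gray.
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    mask = saturation > sat_thresh                             # color threshold

    selem = np.ones((9, 9), dtype=bool)
    mask = ndimage.binary_dilation(mask, structure=selem)      # make blobs compact
    mask = ndimage.binary_erosion(mask, structure=selem)       # remove small noise
    mask = ndimage.binary_dilation(mask, structure=selem)      # reconstruct blobs
    return mask
```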


When a weak stain is used, such as TSA (trichostatin A on bright field), the detection of the core is far more difficult, because usually it cannot be distinguished from the background. In fact, most automatic systems do not try to detect these tissue cores, Bueno et al. (2008). Using color information to threshold the image just produces separate small blobs. Considering that

Fig. 15. ROI detection in dark stain cytology (132 mm² area, 0.74 μm/pixel, 6 × 8 tiles, 10×, Retiga camera). [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

Fig. 16. ROI detection in weak stain cytology (132 mm² area, 1.48 μm/pixel, 6 × 8 tiles, 5×, Retiga camera). [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]


the expected shape is an almost perfect circle can be helpful, although we need to know where the boundaries are located. The approach we have used here is the same as with the dark stain, although using more iterations in the morphologic operations. The results are similar to those for the dark-stained cytology, except that we obtain digitized ROIs slightly larger than the actual tissue core. However, this is not a problem since, as mentioned before, it is preferable to have a larger digitized area than to lose any tissue information. Figure 16 shows the results of the ROI detection algorithm applied to weak stain cytology.

The procedure applied to TMA slides is similar to that used for dark-stained cytology samples. The stain is usually strong, but the regions where the tissue cores are located are smaller. The shape of the tissue cores is also more variable: most of them are round, although there are also many stripe-shaped ones due to broken tissue cores. The main characteristic of these slides is that the tissue cores are aligned in a two-dimensional array. We have used the same procedure as described above. A first segmentation is made based on color, and then morphologic operations are applied. The morphologic operations are constrained by the size and shape of the cores to avoid merging two of them. Blobs that are not fully present in the image (cut by the edges of the image) may also be detected if they have a

Fig. 17. ROI detection in TMA (132 mm² area, 1.48 μm/pixel, 6 × 8 tiles, 5×, Retiga camera). [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

TABLE 2. Average time (seconds) required to digitize, merge, and process the histological image samples digitized at 10×

Tissue type        Camera    Digitization (s)   Stitching (s)   Processing (s)   Speed (mm²/s)
Brain autopsy      Retiga    121.2              15.2            25.6             0.31
Brain autopsy      Leica     97.0               15.2            20.0             0.38
Lung cytology      Retiga    127.0              13.5            2.5              0.29
Lung cytology      Leica     117.0              13.0            2.0              0.32
Prostate biopsy    Retiga    130.0              12.0            9.0              0.29
Prostate biopsy    Leica     107.0              13.0            8.0              0.35

The speed in mm²/s for the tested 37 mm² area is shown in the last column.



large size. Figure 17 shows the results of the ROI detection algorithm applied to a TMA WSI sample.

Table 2 summarizes the average times required to digitize and merge 48 tiles at 10× (1,376 × 1,024 pixels each) of different tissue types with both cameras, representing 37 mm² of tissue with a resolution of 0.74 μm/pixel for tiles at 10×. The autofocus is computed every 2 mm², that is, every 12 tiles (four times for this test). The table shows that digitization is slower if the monochrome camera is used, since each tile has to be acquired three times (once per light channel). Stitching times are similar, because they simply depend on the already acquired tiles, and processing times are slightly larger when using the monochrome camera, although the difference is probably too small to be significant. The test was also done with tiles at 20× (resolution 0.37 μm/pixel) and 40× (resolution 0.185 μm/pixel) for the same area; the average digitization speed was 0.31 and 0.093 mm²/s for 20× and 40×, respectively.

CONCLUSIONS

This article presents a system based on a motorized microscope with WSI technology, developed to acquire and analyze whole slide images for anatomical pathology applications in both brightfield and fluorescence. We have shown the methods used to make the system as fast, robust, and flexible as possible when dealing with these high-resolution images (several GB). The developed software and automation control system have been explained and illustrated, together with the proposed solutions to common problems in microscopy, such as focus, illumination correction, geometrical correction, and tile stitching. The speed of the system with a 20× objective is 0.31 mm²/s, which is on the order of existing microscope WSI systems.

The system has been built with an additional digital monochrome camera alongside the default color camera and LED transmitted illumination (RGB). Monochrome cameras are the preferred acquisition method for fluorescence microscopy; they can achieve higher spatial resolution than color cameras and have increased sensitivity. The quality of the digital images has been quantified using three metrics based on sharpness, contrast, and focus. The system has been tested on 150 tissue samples of brain autopsies, prostate biopsies, and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×.

This application can serve as a starting point for the development of a complete microscope system applicable to different fields in biomedicine, particularly pathology. Such a system must include additional image processing algorithms that extend and complement the existing ones. The aim is for the system to be flexible enough to be used with a variety of tissue samples, from both brightfield and fluorescence, and to treat them in a consistent way, taking advantage of common features and using tissue-specific features when necessary. Moreover, the system is capable of simultaneously integrating different software tools for image analysis.

As future lines, we are currently working on two fronts: (i) incorporating fluorescence acquisition into the system, and (ii) developing algorithms for image processing. Fluorescence acquisition is complicated by the fragility of the samples; they may lose their fluorescence if the lighting is too strong or if they remain in the scanning process for too long. Moreover, the need for long exposure times makes these samples difficult to handle, because their focusing (manual or automatic) and visualization are slowed down considerably. Furthermore, classification and ROI quantification algorithms are being developed, particularly for breast TMA and biopsies.

ACKNOWLEDGMENTS

The authors acknowledge partial financial support from the Spanish Research Ministry through project DPI2008-06071 and the FP7 EC Marie Curie Actions through the AIDPATH project, contract 6154. The authors want to thank their collaborators: Marcial García-Rojo and the specialists at the Department of Pathology, Hospital de Jerez de la Frontera, and the MPhil researcher at VISILAB, Juan Vidal.

