Real-time interactive display for integral imaging microscopy

Ki-Chul Kwon,1 Ji-Seong Jeong,2 Munkh-Uchral Erdenebat,1 Young-Tae Lim,1 Kwan-Hee Yoo,2,4 and Nam Kim1,3

1School of Information and Communication Engineering, Chungbuk National University, 52 Naesudong-ro, Heungdeok-gu, Cheongju, Chungbuk 361-763, South Korea
2Department of Informatics and Convergence, Chungbuk National University, 52 Naesudong-ro, Heungdeok-gu, Cheongju, Chungbuk 361-763, South Korea
3e-mail: [email protected]
4e-mail: [email protected]

Received 26 March 2014; revised 16 May 2014; accepted 29 May 2014; posted 3 June 2014 (Doc. ID 208881); published 4 July 2014

A real-time interactive orthographic-view image display for integral imaging (II) microscopy that includes the generation of intermediate-view elemental images (IVEIs) for resolution enhancement is proposed. Unlike conventional II microscopes, the proposed system employs parallel processing on a graphics processing unit for real-time display, generating the IVEIs and interactive orthographic-view images at high speed according to the user's interactive input. The real-time directional-view display of a specimen, whose 3D information is acquired through II microscopy, is successfully demonstrated by using resolution-enhanced elemental image arrays. The user-interactive requirement is also satisfied in the proposed real-time interactive display for II microscopy. © 2014 Optical Society of America OCIS codes: (100.0100) Image processing; (100.6890) Three-dimensional image processing; (180.0180) Microscopy; (180.6900) Three-dimensional microscopy; (120.0120) Instrumentation, measurement, and metrology; (120.2040) Displays. http://dx.doi.org/10.1364/AO.53.004450

1. Introduction

The optical microscope is the most widely used microscope for observing specimens in various fields, but the information it captures in a single image is 2D rather than 3D. A 3D microscopic system based on the stereoscopic 3D technique has been proposed recently; however, stereoscopic microscopy suffers significant loss of depth and width information [1]. The integral imaging (II) technique, proposed by Lippmann in 1908, can capture 3D image information in one shot through an optical microscope [2]. II has been regarded as one of the most suitable techniques for next-generation 3D

APPLIED OPTICS / Vol. 53, No. 20 / 10 July 2014

display because it can provide full-parallax and continuous-viewing 3D images, but it attracted limited attention for a long time due to the lack of suitable devices. The method has received renewed interest in recent years thanks to high-resolution electronic devices [3–6]. The main difference between the usual stereoscopic imaging configuration and the II pickup system is that stereo matching utilizes several cameras to capture perspectives of the object, while the II pickup system incorporates a lens array and a single camera. Therefore, an II pickup system provides more perspectives than the conventional stereoscopic imaging system—as many as the number of elemental lenses used. Since all the perspectives are captured by a single camera, however, the resolution of each perspective is quite low, so it is necessary to enhance the image resolution in the II microscope

(IIM). Many studies to improve II resolution have been reported; for example, a 3D camera system employing a scanning microlens array (MLA) and a stationary pinhole array [7], a synchronously moving lens array [8], and 3D object sensing and recognition using time-multiplexed computational II [9]. An IIM system using an MLA was first introduced by Levoy et al., who also conducted studies on volume reconstruction of biological specimens, synthetic focusing via 3D deconvolution, and so on [10,11]. Recently, a resolution enhancement method using lens shifting in MLA-based IIM was proposed by Lim et al. [12]. However, these methods are difficult to apply to the real-time display of an IIM because of the accuracy and the time needed for mechanical movement of the MLA. The resolution of the IIM is fundamentally determined by the spatial density of the elemental lenses. Each elemental lens captures the angular distribution of the light rays at its principal point. In orthographic-view image reconstruction, the number of elemental lenses directly determines the pixel count of the reconstructed image; generally, the pixel count and the spatial sampling interval of the reconstructed image are not satisfactory. Real-time display based on light field microscopy has been demonstrated recently; however, that system only focused on real-time computation, and because a lens array is used in the display, pseudoscopic-to-orthoscopic correction is required, which affects the processing time [13]. In this study, we propose a complex real-time interpolation for enhanced resolution and a well-reconstructed interactive orthographic-view image display for the MLA-based IIM, which includes the generation of intermediate-view elemental images (IVEIs) between neighboring elemental images (EIs), with the orthographic-view image corresponding to view directions specified by the user via keyboard or mouse.
To provide a real-time display of the orthographic-view image according to the user's interactive input in the proposed IIM, all operations are performed under the open computing language (OpenCL) for graphics processing unit (GPU) parallel processing [14,15].

2. Optical Structure of the IIM and Directional-View Display

The basic structure of the IIM, which is based on an infinity-corrected optical system, consists of the objective lens, tube lens, MLA, and image sensor [10–12]. Figure 1 shows the optical structure of the IIM. When the microscope is focused, the distance from the specimen to the first principal plane of the objective lens and the distance from its second principal plane to the telecentric stop are both equal to the focal length f_OL of the objective lens, and the distance from the second principal plane of the tube lens to the EIs plane is f_TL, the focal length of the tube lens. This is the so-called infinite section of the microscope. The specimen is imaged by the objective lens

through a telecentric stop and a tube lens onto the EIs plane. The MLA forms multiple subimages on the sensor plane, which constitute the captured EI array (EIA) and include the depth and coordinate information of the specimen. As shown in Fig. 1(b), when EIs for multiple objects, or object points, are captured through the MLA, the gap between pixels changes (growing or shrinking) with the viewing direction, so orthographic views can be reconstructed according to the viewing directions.

3. Real-Time Interactive Display for IIM

A. System Configuration of the Real-Time IIM

Figure 2 shows a schematic configuration of the proposed prototype IIM system. The system is composed of the optical devices to acquire the EIAs, PC-based data processing for real-time user-interactive orthographic-view reconstruction, and a display device, for example, a liquid crystal display (LCD). The PC-based data processing performs realignment, sets the region of interest (ROI) for the acquired image, calculates the IVEIs, creates the resolution-enhanced EIAs (EEIAs) that include the IVEIs, and generates and displays the directional-view image according to the user interaction. The initial setting values include inputted information such as the number of elemental lenses (EL_w, EL_h) and the resolution of the entire EIA (EIA_w, EIA_h). The view-direction vectors for the orthographic-view image plane (V_x, V_y) are determined by interactive devices, such as a mouse or keyboard. As shown in Fig. 3, the preprocessing part performs the detection of rotation information in the initially acquired EIAs to generate precise orthographic-view images, the realignment, and the setting of the ROI. The detection of rotation information calculates line-segment information passing through each lens center by using the Hough transform [16]. The realignment is the process whereby the vertical and horizontal factors in the EIs are realigned by multiplying by the inverse of the rotation matrix, which is created from the inclination of the detected line segments. The software realignment is required because distortion can occur during acquisition, and this distortion becomes visible when the image is enlarged on the display device. The last step of the preprocessing is the setting of the ROI for an efficient multiview image in the realigned matrix; the proposed system can adjust the ROI by user keyboard or mouse input.
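The realignment step described above can be sketched as follows. This is an illustrative NumPy stand-in, not the paper's implementation: the function name, the nearest-neighbor sampling, and the coordinate convention are assumptions; the tilt angle is presumed to come from the Hough-transform line detection.

```python
import numpy as np

def realign_eia(eia, theta_deg):
    """Rotate an acquired EIA back to an axis-aligned orientation.

    theta_deg is the inclination (degrees) of the lens-center line
    segments detected via the Hough transform; sampling the source
    image through the inverse rotation undoes the tilt. Nearest-neighbor
    sampling keeps this sketch dependency-free.
    """
    h, w = eia.shape[:2]
    theta = np.deg2rad(theta_deg)
    # The inverse of a 2D rotation by theta is a rotation by -theta.
    inv_rot = np.array([[np.cos(theta), np.sin(theta)],
                        [-np.sin(theta), np.cos(theta)]])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys - cy, xs - cx])        # centered (row, col) grid
    src = np.tensordot(inv_rot, coords, axes=1)  # rotate coordinates back
    sy = np.clip(np.rint(src[0] + cy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(src[1] + cx).astype(int), 0, w - 1)
    return eia[sy, sx]
```

With a zero detected inclination the image passes through unchanged; a real system would refine this with subpixel interpolation.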
The preprocessing segment is implemented on the CPU, as it runs only once after the input part. When the realignment is completed through the preprocessing part, the IVEIs for resolution enhancement by interpolation and the orthographic-view images are generated via OpenCL GPU parallel processing for the real-time display of the user-requested directional-view image. In particular, the generation of


Fig. 1. (a) Basic optical structure of the IIM and (b) concept of the orthographic views.

the IVEIs and orthographic-view images is accomplished with GPU parallel processing to reduce the processing time, using OpenCL, similar to previous high-speed image-space parallel processing methods for computer-generated II systems [17–19].

B. Real-Time Computation Using OpenCL

OpenCL is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, digital signal processors, field-programmable gate arrays, and other processors [14,15]. It provides parallel computing using task-based and data-based parallelism. The generation of the IVEIs in the proposed system applies GPU parallel processing via OpenCL. A GPU has a suitable structure for image pixel-unit operations because of its many arithmetic units. Two kinds of GPU parallel processing using OpenCL are required in the proposed IIM system for the real-time interactive display: the first generates the IVEIs to enhance the resolution


of the displayed images, and the second generates the orthographic-view images. The IVEI generation method makes it possible to digitally synthesize as many IVEIs as required using only the optically picked-up EIs of a 3D object [20–22]. In the proposed prototype, an interpolation method is applied to generate the IVEIs instead of using disparity information, because the disparity values between neighboring EIs are very small and a disparity-based approach would require a long processing time due to the large amount of computation. Figure 4 shows an example of the IVEI generation process between two EIs by using the interpolation technique. In the proposed IIM system, first, threads are generated to match the number of elemental lenses, and the pixel information for every EI taken through each elemental lens is stored in the GPU memory. For each EI EI(i, j), the three IVEIs (horizontal, vertical, and diagonal directions) are generated by using Eq. (1), whose computation is done by the (i, j)th thread:

Fig. 2. Schematic diagram of the proposed real-time IIM display system: acquisition, calculation of the IVEIs and orthographic-view images, and display.

IVEI_h(i, j) = [EI(i, j) + EI(i + 1, j)] / 2,
IVEI_v(i, j) = [EI(i, j) + EI(i, j + 1)] / 2,                                  (1)
IVEI_d(i, j) = [EI(i, j) + EI(i + 1, j) + EI(i, j + 1) + EI(i + 1, j + 1)] / 4,

where EL_w > i ≥ 0 and EL_h > j ≥ 0. The acquired EIAs and the generated IVEIs are shown in Fig. 5. After the IVEIs are generated, the orthographic-view images are reconstructed using the viewing direction of the observer according to the view vector set by user interaction. The newly generated EEIAs include the information for all of the same view directions. For this reason, the pixel information of the acquired EIs is required to generate the orthographic-view image for each viewpoint; achieving a real-time orthographic-view display is therefore difficult unless the orthographic-view image generation is fast enough. Figure 6 shows the scheme of the orthographic-view image generation and display. For the previously generated EEIAs, threads are generated in the GPU for each pixel; a directional-view image is then generated simultaneously across the threads and displayed on the display device, as shown in Fig. 6(a). Here, all pixel information of the EEIA is first loaded into GPU memory for the creation of the threads.
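The per-thread averaging of Eq. (1) can be sketched in vectorized NumPy; this is a CPU stand-in for the OpenCL kernels, not the paper's code. The function name is hypothetical, and the edge padding (repeating the last row/column of lenses) is an assumption, since the paper does not spell out boundary handling.

```python
import numpy as np

def generate_iveis(ei):
    """Generate horizontal, vertical, and diagonal IVEIs per Eq. (1).

    ei has shape (EL_w, EL_h, h, w): one image per elemental lens.
    Each output has the same lens-grid shape as the input.
    """
    # Pad the lens grid so EI(i+1, j) and EI(i, j+1) exist at the borders.
    p = np.pad(ei, ((0, 1), (0, 1), (0, 0), (0, 0)), mode="edge").astype(float)
    ivei_h = (p[:-1, :-1] + p[1:, :-1]) / 2.0                            # horizontal
    ivei_v = (p[:-1, :-1] + p[:-1, 1:]) / 2.0                            # vertical
    ivei_d = (p[:-1, :-1] + p[1:, :-1] + p[:-1, 1:] + p[1:, 1:]) / 4.0   # diagonal
    return ivei_h, ivei_v, ivei_d
```

Each (i, j) slice here corresponds to the work of one GPU thread in the proposed system.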


Fig. 3. Examples of preprocessing (a) an acquired image before realignment, (b) the realigned EIs by using the Hough transform, and (c) the ROI setting and image extracted from the ROI.

Fig. 4. Example of the IVEI generation for (upper) captured images, and (lower) generated images.

Threads are generated inside the GPU to match the EEIA resolution for the parallel processing, and the generated threads correspond to the pixels of the EIs. The view-direction information is then detected through the user's interactive input devices. The number of view directions for each EI equals the resolution of the EIA, and the resolution of the generated orthographic-view image equals the

number of elemental lenses (2 EL_w − 1, 2 EL_h − 1) for the corresponding EEIAs. The view-direction vector (V_x, V_y) used to generate the orthographic-view images from the user's interactive input is given by Eq. (2):

V_x = (MP_x × EEI_w) / (2 EL_w − 1),
V_y = (MP_y × EEI_h) / (2 EL_h − 1),                                  (2)

Fig. 5. Schematic diagram to generate IVEIs.
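Equation (2) maps interactive input coordinates to a view direction; a minimal sketch follows. The function name is hypothetical, and the use of integer floor division to obtain pixel indices is an assumption, since the paper states only the formula.

```python
def view_vector(mp_x, mp_y, eei_w, eei_h, el_w, el_h):
    """Map input coordinates (MP_x, MP_y) to the view-direction
    vector (V_x, V_y) of Eq. (2).

    eei_w, eei_h: pixel size of each elemental image in the EEIA;
    el_w, el_h:   elemental-lens counts per axis, so the EEIA holds
                  2*EL - 1 elemental images per axis after IVEI insertion.
    """
    v_x = mp_x * eei_w // (2 * el_w - 1)
    v_y = mp_y * eei_h // (2 * el_h - 1)
    return v_x, v_y
```

With the prototype's values (EL = 60, EEI = 25 pixels), an input coordinate spanning the 119-lens axis maps back onto the 25-pixel range of a single elemental image.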

Fig. 6. Schematic diagram for generating and displaying the orthographic-view images. (a) For the previously generated EEIAs, data from every pixel are stored in the GPU memory, and the threads are created for each pixel of EEIA to generate the orthographic-view images. The generated directional-view images for the corresponding detected user view directions are displayed on the display device in real time. (b) The generating and displaying processes of the orthographic-view images are illustrated in detail, inside the GPU computation.
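The per-pixel gather that Fig. 6 describes — collecting the (V_x, V_y)th pixel of every elemental image into one directional view — can be sketched in NumPy. This is a hypothetical CPU stand-in for the GPU threads; the array layout (a row-major mosaic of equally sized elemental images) is an assumption.

```python
import numpy as np

def orthographic_view(eeia, v_x, v_y, eei_w, eei_h):
    """Assemble one directional-view image from an EEIA by taking the
    (v_x, v_y)th pixel of every elemental image.

    eeia is a 2D array of shape (n_lenses_y * eei_h, n_lenses_x * eei_w).
    Returns an (n_lenses_y, n_lenses_x) image: one pixel per lens.
    """
    n_y = eeia.shape[0] // eei_h
    n_x = eeia.shape[1] // eei_w
    # Reshape so the lens grid sits on the first two axes and the
    # pixels of each elemental image on the last two.
    lenses = eeia.reshape(n_y, eei_h, n_x, eei_w).transpose(0, 2, 1, 3)
    return lenses[:, :, v_y, v_x]
```

On the GPU, each output pixel of this gather is produced by its own thread; here the fancy indexing performs all gathers at once.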

Fig. 7. Experimental setup: (a) hardware setup and (b) image processing software.


Table 1. Specifications for the Proposed Prototype IIM System

Objective lens — magnification: 10×; focal length: 20 mm; numerical aperture: 0.28
Video microscope unit — tube lens magnification: 10×; illumination: telecentric reflective
Lens array — number of elemental lenses: 100 × 100 circular lenses (ROI 60 × 60); elemental lens diameter: 125 μm; focal length: 2.4 mm
Camera — sensor: CMOS, 5.5 μm; number of pixels: 2048 × 2048 RGB; maximum frame rate: 90 fps; interface: USB 3.0
Processor — OS: Windows 7 (64 bit), 3.0 GHz CPU, 4 GB RAM; OpenCL cores: 265 ea., 2 GB memory, core speed 950 MHz

where MP_x and MP_y are the coordinate information from the user interaction, such as a mouse or keyboard, EEI_w and EEI_h are the width and height of each EI, and V_x and V_y are the horizontal and vertical view-direction vectors from the user requirements, respectively. The GPU calculates the size of each EI using the size of the entire EIA and the size of the lens array. When the user request is inputted and the calculations for V_x and V_y are completed, all of the (V_x, V_y)th pixels are assembled into a single directional-view image, as shown in Fig. 6(b). Finally, the generated directional-view images are displayed on the display device according to the view-direction information.

4. Experimental Results

The experimental setup of the proposed prototype IIM consists of an objective lens with 10× magnification, a video microscope unit with in-line illumination, an MLA, and a 4 megapixel camera connected to a PC. The experimental setup and its schematics are shown in Fig. 7. The image on the LCD in Fig. 7(a) is a sample of the displayed directional-view image, which is realigned and resolution

Fig. 8. Experiment results: (a) a 3D real object on bonding area of the CMOS sensor chip, (b) the initially acquired EIA via the MLA, (c) the EEIA that included the generated IVEIs for the initially captured EIs, and (d) the reconstructed orthographic-view images. Each image has 119 × 119 pixels and the number of the images is 25 × 25.


Table 2. Processing Speed Measurements of the Proposed Method and the Conventional Method

Processing order — With OpenCL (ms) — Without OpenCL (ms)
Acquisition stage — 11 — 11
Preprocessing (executes once) — (99) — (99)
Resolution enhancing — 50 — 36825
Generating orthographic view — 8 — 24
Total processing time — 69 (15 fps) — 36860 (0.027 fps)
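The totals in Table 2 can be sanity-checked by plain arithmetic on the reported per-stage timings (this check is ours, not from the paper):

```python
# Per-frame cost; the one-time 99 ms preprocessing is excluded from the totals.
with_opencl = 11 + 50 + 8            # acquisition + enhancement + view generation
without_opencl = 11 + 36825 + 24

assert with_opencl == 69             # matches the reported total
assert without_opencl == 36860       # matches the reported total

frame_rate = 1000 / with_opencl      # ~14.5 fps, reported as ~15 fps
speedup = 36825 / 50                 # ~737x faster enhancement with OpenCL
```

The resolution-enhancing stage dominates both totals, which is why offloading it to the GPU yields the real-time frame rate.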

enhanced for the desired user view direction. Table 1 shows the specifications for the proposed prototype IIM system. In the proposed prototype IIM, the initial EIAs are acquired at 2048 × 2048 pixels and 90 frames per second (fps); they are formed by the MLA, relayed by a 1:1 macrolens, and captured by the camera. The reordered image size in the preprocessing part is 1500 × 1500 pixels, and the EEIA is 2975 × 2975 pixels. Figure 8(a)

shows a 2D image of the object, captured without the MLA, of complementary metal oxide semiconductor (CMOS) image sensor wafer wire (φ = 15 μm) bonding areas, and Fig. 8(b) shows an initially captured EIA that consists of 60 × 60 EIs, each with a 25 × 25 pixel count. Figure 8(c) is an EEIA with a 25 × 25 pixel count for 119 × 119 EIs. Figure 8(d) is a set of orthographic-view images reconstructed from the EEIA illustrated in Fig. 8(c). The processing speed was measured according to the processing order shown in Table 2. The display frame rate is more than 15 fps for 25 × 25 orthographic-view images, where the size of the selected region of the initially captured EIA is 1500 × 1500 pixels, the entire EEIA size after enhancement is 2975 × 2975 pixels, and the directional-view image size is 119 × 119 pixels. Thus real-time display according to user interaction is possible. Another example of the displayed orthographic-view images and videos is presented in Fig. 9 for two cases: using the EEIA and without resolution enhancement. Here, a dayfly used as the experimental object is shown in Fig. 9(a), and Fig. 9(b) shows an

Fig. 9. Real-time interactive orthographic-view image display for the proposed prototype IIM: (a) the given object, (b) the initial EIs and directly reconstructed 3D images from multiple viewing directions without resolution enhancement (Media 1), and (c) the EEIA with reconstructed images from multiple viewing directions (Media 2). The display refresh rate was 250 Hz, and 95 Hz in the case of (c).


example of the initially acquired EIA and the orthographic-view images generated directly from it. Finally, Fig. 9(c) shows the EEIA for the dayfly and the resolution-enhanced orthographic-view images. The right-hand images of Figs. 9(b) and 9(c) are samples of the displayed orthographic-view images from various viewing directions. The numbers marked on the sample reconstructed images indicate the order of the corresponding directional-view images.
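The image sizes quoted in the experiments are mutually consistent; a quick check (our arithmetic, with the 2·EL − 1 relation taken from Section 3.B):

```python
el = 60    # elemental lenses per axis inside the ROI
eei = 25   # pixels per elemental image per axis

assert el * eei == 1500             # ROI of the initially captured EIA
assert 2 * el - 1 == 119            # EIs per axis after IVEI insertion
assert (2 * el - 1) * eei == 2975   # EEIA size per axis
```

The directional-view image size of 119 × 119 pixels likewise follows from the 119 × 119 lens grid, since each lens contributes one pixel per view.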


5. Conclusion

A complex prototype real-time orthographic-view image display based on IIM, driven by user-specified view-direction coordinates, has been proposed. The proposed system is composed of optical devices for acquiring the EIA, PC-based data processing for real-time user-interactive orthographic-view reconstruction, and a display device. Applying GPU parallel processing with OpenCL to generate the IVEIs and reconstruct the orthographic-view images enables real-time computation for the display of view-direction images acquired through the IIM. The experimental results showed real-time display of directional-view images according to the user's interactive input. Here, the resolution of the entire orthographic-view image has been enhanced four times (two times in the vertical and two times in the horizontal direction), and real-time visualization for the IIM has been provided. Because the disparity between neighboring EIs is very small, occlusion is not a significant problem in the proposed real-time IIM display system. Also, since the display part does not require a lens array, the pseudoscopic conversion problem does not occur, and the optical mode of the MLA-based II technique is not important in the proposed IIM system. However, when the resolution is enhanced 16 times (four times in the vertical and four times in the horizontal direction), the quality of the generated orthographic-view images is degraded and a long processing time is required. Further study will focus on the improvement of the viewpoints by using disparity information.

Appendix A

EEIA — resolution-enhanced elemental image array
EEI_w, EEI_h — size of each elemental image of the EEIA
EI — elemental image
EIA — elemental image array
EL — elemental lens
II — integral imaging
IIM — integral imaging microscope
IVEI — intermediate-view elemental image
MLA — microlens array
MP_x, MP_y — mouse or keyboard coordinate information
ROI — region of interest
V_x, V_y — view-direction vectors for the orthographic view

This research was financially supported by the "Electronic medical equipment part & material industrialization foundation construction program" through the Ministry of Trade, Industry & Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT), and supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (NIPA-2014H0301-14-1022) supervised by the NIPA (National IT Industry Promotion Agency).


References

1. K.-C. Kwon, Y.-T. Lim, N. Kim, K.-H. Yoo, J.-M. Hong, and G.-C. Park, "High-definition 3D stereoscopic microscope display system for biomedical applications," EURASIP J. Image Video Process. 2010, 1–8 (2010).
2. G. Lippmann, "La photographie integrale," C. R. Acad. Sci. 146, 446–451 (1908).
3. J.-H. Park, K.-H. Hong, and B.-H. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48, H77–H94 (2009).
4. G. Li, K.-C. Kwon, G.-H. Shin, J.-S. Jeong, K.-H. Yoo, and N. Kim, "Simplified integral imaging pickup method for real objects using a depth camera," J. Opt. Soc. Korea 16, 381–385 (2012).
5. N. Kim, A.-H. Phan, M.-U. Erdenebat, A. M. Alam, K.-C. Kwon, M.-L. Piao, and J.-H. Lee, "3D display technology," Disp. Imag. Technol. 1, 73–95 (2013).
6. J. Kim, J.-H. Jung, C. Jang, and B. Lee, "Real-time capturing and 3D visualization method based on integral imaging," Opt. Express 21, 18742–18753 (2013).
7. L. Erdmann and K. J. Gabriel, "High-resolution digital integral photography by use of a scanning microlens array," Appl. Opt. 40, 5592–5599 (2001).
8. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324–326 (2002).
9. S. Kishk and B. Javidi, "Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging," Opt. Express 11, 3528–3541 (2003).
10. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, "Light field microscopy," ACM Trans. Graph. 25, 924–934 (2006).
11. M. Levoy, Z. Zhang, and I. McDowall, "Recording and controlling the 4D light field in a microscope using microlens arrays," J. Microsc. 235, 144–162 (2009).
12. Y.-T. Lim, J.-H. Park, K.-C. Kwon, and N. Kim, "Resolution-enhanced integral imaging microscopy that uses lens array shifting," Opt. Express 17, 19253–19263 (2009).
13. B. Lee and J. Kim, "Real-time 3D capturing-visualization conversion for light field microscopy," Proc. SPIE 8769, 876908 (2013).
14. NVIDIA, "OpenCL programming guide for the CUDA architecture," Version 2.3 (2009).
15. NVIDIA, "CUDA C programming guide," Version 3.1.1 (2010).
16. D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," Pattern Recogn. 13, 111–122 (1981).
17. K.-C. Kwon, C. Park, M.-U. Erdenebat, J.-S. Jeong, J.-H. Choi, N. Kim, J.-H. Park, Y.-T. Lim, and K.-H. Yoo, "High speed image space parallel processing for computer generated integral imaging system," Opt. Express 20, 732–740 (2012).
18. D.-H. Kim, M.-U. Erdenebat, K.-C. Kwon, J.-S. Jeong, J.-W. Lee, K.-A. Kim, N. Kim, and K.-H. Yoo, "Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array," Appl. Opt. 52, 8411–8418 (2013).
19. J.-S. Jeong, K.-C. Kwon, M.-U. Erdenebat, Y. Piao, N. Kim, and K.-H. Yoo, "Development of a real-time integral imaging display system based on graphics processing unit parallel processing using a depth camera," Opt. Eng. 53, 015103 (2014).
20. K.-H. Bae and E.-S. Kim, "New disparity estimation scheme based on adaptive matching windows for intermediate view reconstruction," Opt. Eng. 42, 1778–1786 (2003).
21. D.-C. Hwang, J.-S. Park, S.-C. Kim, D.-H. Shin, and E.-S. Kim, "Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique," Appl. Opt. 45, 4631–4637 (2006).
22. S.-C. Kim, C.-K. Kim, and E.-S. Kim, "Depth-of-focus and resolution-enhanced three-dimensional integral imaging with non-uniform lenslets and intermediate-view reconstruction technique," 3D Res. 2, 1–9 (2011).


