Nonunified integral imaging elemental image array generation method based on selective pixel sampling algorithm

Zhao-Long Xiong, Shu-Li Li, Jun Chen, Huan Deng, and Qiong-Hua Wang*

School of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
*Corresponding author: [email protected]

Received 3 December 2014; revised 10 February 2015; accepted 17 February 2015; posted 19 February 2015 (Doc. ID 228883); published 20 March 2015

We propose a method based on the selective pixel sampling algorithm to generate a nonunified integral imaging (II) elemental image array (EIA) with reduced moiré patterns, a low rendering cost, and a high three-dimensional (3D) resolution. In the proposed method, redundant 3D information is captured for the nonunified pixel arrangement of the elemental images, and the moiré patterns are constrained by a set of constraint equations. The information corresponding to the nonunified EIA is mapped from the captured 3D information by the selective pixel sampling algorithm. Experiments show that the proposed method improves the quality of the 3D images reconstructed by the II display and markedly reduces the rendering cost of generating an ultra-high-definition EIA. © 2015 Optical Society of America

OCIS codes: (230.0230) Optical devices; (110.0110) Imaging systems.
http://dx.doi.org/10.1364/AO.54.002532

1. Introduction

In recent years, integral imaging (II) has become one of the most attractive three-dimensional (3D) display techniques, because it can produce full-parallax, full-color, and auto-stereoscopic 3D images without special viewing devices [1–3]. In an II display, a two-dimensional (2D) display device is coupled with a microlens array composed of periodic square microlenses, hexagonal microlenses, or dual orthogonal microcylinder lens arrays [4–6]. Typically, the pitch of the periodic microlens is not an integer multiple of the pixel size of the 2D display device, and this mismatch causes moiré patterns that seriously degrade the quality of the reconstructed 3D images. The microlens array therefore requires a certain rotation [7,8]. In this case, the microlens array is tilted and the microlens edges intersect some pixels.

1559-128X/15/092532-05$15.00/0 © 2015 Optical Society of America 2532

APPLIED OPTICS / Vol. 54, No. 9 / 20 March 2015

With the rotation, the pixel arrangements of the elemental images (EIs) can be nonunified or unified [7]. Many researchers have focused on the generation of the nonunified elemental image array (EIA) for moiré-reduced II displays. Some use the multiple viewpoint rendering (MVR) method to generate the EIA for a slanted fly's eye lens [8]. However, this method ignores the different pixel arrangements at different positions in the tilted EIA. Another way to obtain the EIA is to transform the conventional EIA into the tilted EIA according to the optimal tilted angle, as we proposed previously [7]. In that method, different EIs must have a uniform effective pixel arrangement, and each EI is composed of border pixels and effective pixels. We rendered the conventional EI by the MVR method and then mapped the pixels from the conventional EI to the uniform EI. Owing to the uniformity of the EIs' pixel arrangement, different EIs can be calculated by the same algorithm. But the previously proposed pixel mapping may introduce distortions into the reconstructed 3D images, and a large volume of rendering is also needed, because each EI needs one rendering pass. Besides, if the tilted angle of the microlens array ensures that each EI has a unified pixel arrangement, the moiré-reduced angle may not be the optimum value, which can introduce some moiré patterns.

In this paper, we propose a nonunified EIA generation method for the II display based on the selective pixel sampling algorithm (SPSA). The proposed method considers both the 3D display quality and the rendering cost of the nonunified EIA. The optimum moiré-reduced angle is chosen, and all pixels of the EIs are effective pixels. In the proposed method, parallax images of a 3D scene are captured by a certain camera array, and the corresponding pixels are then sampled selectively to generate the nonunified EIA. We apply the method to the generation of an ultra-high-definition (UHD) EIA; the 3D resolution of the reconstructed images is increased and the rendering cost is reduced.


Fig. 2. Nonunified pixel arrangement of four adjacent tilted EIs.

2. Principle

In the proposed method, an EIA with a nonunified pixel arrangement is generated. As shown in Fig. 1, the architecture of the proposed method is composed of four processes: the input and analysis process, which includes the parameter input and the analysis of the parallax range; the pick-up process, which calculates the coordinates of the EIA's pixels and captures the parallax images; the pixel sampling process, which generates the nonunified EIA by the proposed SPSA; and the display process, which presents the nonunified EIA to the viewers.

A. Analysis of the Parallax Range and the Camera Array

In the proposed method, a 2D display panel is coupled with tilted dual orthogonal microcylinder lens arrays, and the microlens edges intersect some pixels, as shown in Fig. 2. For better display quality, we optimize the tilted angle of the microlens array to its optimum value, so the different EIs have nonunified pixel arrangements. In Fig. 2, four adjacent EIs are constrained by the tilted angles θ1 and θ2; the right part of the figure shows, for simplicity, the arrangement of the complete pixels in the nonunified EIA. To avoid moiré patterns, the mismatch between the pixel size and the pitch of the periodic microlens must be constrained: the tilted pitch of the microlens should be an integer multiple of the pixel size of the 2D display device. In other words, taking the microlens pitch p and the pixel size r into consideration, the tilted pitches p/cos θ1 and p/cos θ2 should be integer multiples of the pixel size r. So the optimum moiré-reduced tilted angles θ1 and θ2 in the two orthogonal directions are determined by the moiré-reduced constraint equations:

θ1 = arccos[p/(M · r)],   M = 1, 2, 3, …,   (1)

θ2 = −arccos[p/(N · r)],   N = 1, 2, 3, …,   (2)

tan θ1 · tan θ2 = −1 + γ,   (3)

where M and N are the numbers of pixels under each microlens, and γ is the constraint factor, which represents the relationship between the adjacent edges of the microlenses. If γ = 0, the microlenses are square; otherwise the microlenses may be rhombic or hexagonal. With the determined angles θ1 and θ2, the parallax range captured by the camera array can be analyzed, as shown in Fig. 3. In the proposed method, we set a virtual camera array to pick up the 3D information, and each camera has an orthographic geometry. The capture does not require a one-to-one correspondence between the EIs' pixels and the orthographic cameras, which differs from the viewpoint vector rendering (VVR) method [9,10]. The virtual camera array is arranged based on the coverage of the microlens and the moiré-reduced angles θ1 and θ2. The distance d between the adjacent cameras can be determined by


Fig. 1. Architecture of the proposed method.

Fig. 3. Schematic of the capture for parallax images in the proposed method.

d = D · r/g,   (4)

where D is the distance between the camera array and the central depth plane, and g is the gap between the EIA and the microlens array. The camera array includes M cameras in the θ1 direction and N cameras in the θ2 direction. The camera array thus obtains redundant information, and the nonunified EIs then sample their pixels from the orthographic projection images.
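As a numerical check of Eqs. (1)–(4), the following sketch evaluates the constraint equations in Python. The function names are ours; the parameter values are the ones used later in the experiments (p = 1.27 mm, r = 0.137 mm, M = N = 13, D = 230 mm, g = 3.38 mm).

```python
import math

def tilt_angles(p, r, M, N):
    """Optimum moire-reduced tilt angles from Eqs. (1) and (2)."""
    theta1 = math.acos(p / (M * r))     # Eq. (1)
    theta2 = -math.acos(p / (N * r))    # Eq. (2)
    # Constraint factor from Eq. (3): tan(theta1) * tan(theta2) = -1 + gamma
    gamma = math.tan(theta1) * math.tan(theta2) + 1.0
    return theta1, theta2, gamma

def camera_spacing(D, r, g):
    """Adjacent-camera distance from Eq. (4): d = D * r / g."""
    return D * r / g

# Parameters from the experiments section (all lengths in mm).
theta1, theta2, gamma = tilt_angles(p=1.27, r=0.137, M=13, N=13)
d = camera_spacing(D=230.0, r=0.137, g=3.38)
print(round(math.degrees(theta1), 1), round(gamma, 3), round(d, 2))
# theta1 comes out near 44.5 degrees and d near 9.32 mm, matching
# the figures reported in Section 3 (and gamma is small but nonzero,
# consistent with the rhombic microlens array used there).
```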

B. Pixel Coordinates Calculation and SPSA

In the proposed method, the camera array is no longer arranged according to the pixel coordinates in the EIA but according to the coverage of the microlens. So the pixel coordinates must be transformed and recalculated in the pixel sampling process. As shown in Fig. 4, a pixel in the EIA's x–y plane is denoted as I(x, y), and it belongs to a certain EI. In the EI's coordinate system xe–ye, the pixel can be denoted as I_{m,n}(i, j), where (m, n) represents the location of the EI in the EIA. The transformation from I(x, y) to I_{m,n}(i, j) can be deduced as

i = mod(round(x + y · tan θ1), M),   (5)

j = mod(round(y + x · tan θ2), N),   (6)

m = round[(x + y · tan θ1)/M],   (7)

n = round[(y + x · tan θ2)/N].   (8)
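The coordinate transformation of Eqs. (5)–(8) can be sketched as below. Two caveats: Eq. (8) is reconstructed symmetrically to Eq. (6), and Python's round-half-to-even may differ from the authors' rounding at exact half-integers.

```python
import math

def pixel_to_ei(x, y, theta1, theta2, M, N):
    """Map an EIA pixel I(x, y) to its EI-local index (i, j) and the
    location (m, n) of its EI in the EIA, following Eqs. (5)-(8)."""
    u = x + y * math.tan(theta1)   # tilted coordinate along theta1
    v = y + x * math.tan(theta2)   # tilted coordinate along theta2
    i = round(u) % M               # Eq. (5)
    j = round(v) % N               # Eq. (6)
    m = round(u / M)               # Eq. (7)
    n = round(v / N)               # Eq. (8)
    return i, j, m, n

theta1 = math.radians(44.5)
theta2 = math.radians(-44.5)
print(pixel_to_ei(0, 0, theta1, theta2, M=13, N=13))  # -> (0, 0, 0, 0)
```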

In the proposed method, the resolution of the EIA is W × H, and according to the Nyquist sampling theorem the resolution of each orthographic projection image must be at least Rw × Rh pixels, where

Rw = ceil(2 · W · cos θ1 · r/p),   (9)

Rh = ceil(2 · H · cos θ2 · r/p).   (10)
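Equations (9) and (10) translate directly to code; a minimal sketch (function name ours, angles in radians):

```python
import math

def projection_resolution(W, H, p, r, theta1, theta2):
    """Minimum orthographic projection image resolution, Eqs. (9)-(10):
    twice the microlens count along each tilted direction (Nyquist)."""
    Rw = math.ceil(2 * W * math.cos(theta1) * r / p)   # Eq. (9)
    Rh = math.ceil(2 * H * math.cos(theta2) * r / p)   # Eq. (10)
    return Rw, Rh

# Untilted sanity check with the UHD panel parameters from Section 3.
print(projection_resolution(3840, 2160, p=1.27, r=0.137,
                            theta1=0.0, theta2=0.0))  # -> (829, 467)
```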

Fig. 4. Schematic of the parallax image capture in SPSA.

The SPSA selectively samples the pixels in the orthographic projection images to generate the nonunified EIA. The pixel I′_{i′,j′}(m′, n′) in the orthographic projection images is sampled by

I(x, y) = I_{m,n}(i, j) = I′_{i′,j′}(m′, n′),   (11)

where m′, n′, i′, and j′ can be obtained by

m′ = round[(Rw · p/W) · m],   (12)

n′ = round[(Rh · p/H) · n],   (13)

i′ = M − i − 1,   (14)

j′ = N − j − 1,   (15)
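The sampling rules of Eqs. (12)–(15) can be sketched as follows; the function name, argument layout, and the illustrative parameter values (taken from Tables 1 and 2) are ours.

```python
def source_pixel(m, n, i, j, M, N, W, H, Rw, Rh, p):
    """For EIA pixel I_{m,n}(i, j), return the orthographic projection
    image index (i', j') and the pixel (m', n') inside that image."""
    m_p = round(Rw * p / W * m)   # Eq. (12)
    n_p = round(Rh * p / H * n)   # Eq. (13)
    i_p = M - i - 1               # Eq. (14)
    j_p = N - j - 1               # Eq. (15)
    return i_p, j_p, m_p, n_p

# The sampling identity of Eq. (11) then reads, informally:
#   EIA[x, y] = EI[m, n][i, j] = ortho_images[i_p, j_p][m_p, n_p]
print(source_pixel(m=0, n=0, i=0, j=0, M=13, N=13,
                   W=3840, H=2160, Rw=565, Rh=318, p=1.27))
# -> (12, 12, 0, 0)
```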

In Eqs. (11)–(15), (i′, j′) are the indices of the orthographic projection images, and (m′, n′) are the pixel coordinates within each image. In the proposed method, redundant information for the nonunified EIA is captured, and the optimal pixel is sampled to generate the nonunified EIA. In this EIA generation method, not only the complete pixels of the EIs but also the border pixels serve as effective pixels and contribute to the reconstruction of the 3D images. So the nonuniformity does not affect the information correctness of the II display, and the quality of the reconstructed 3D images is increased. Besides, compared with the previous method, the number of rendering passes is greatly reduced, because each orthographic projection image, rather than each EI, needs one rendering pass. These are the main differences between the unified EIA generation method and the nonunified EIA generation method based on the SPSA proposed in this paper.

3. Experiments

To show the effectiveness of the proposed method, we have carried out experiments comparing the quality of the reconstructed 3D images and the rendering costs of the previous method [7] and the method proposed in this paper. In the experiments, we build a 3D model of a real person by fusing information captured with a Kinect. Virtual models are then added to the scene, so the method can be applied to a real scene, as shown in Fig. 5. The virtual camera array is arranged in front of the 3D models, and the central depth plane is located at the center of the 3D scene. The distance D between the camera array and the central depth plane is 230 mm, and the adjacent camera distance is d = 9.32 mm.

Fig. 5. 3D models used in this experiment.

In our experiments, we use a UHD (3840 × 2160) liquid crystal display (LCD) panel to display the EIA. A rhombic-type microlens array is mounted on the LCD panel to reconstruct the 3D images, as shown in Fig. 6. The parameters of our experimental II display are given in Table 1. With the microlens pitch p = 1.27 mm and the pixel size r = 0.137 mm, the optimal moiré-reduced angles can be obtained from Eqs. (1)–(3): θ1 = 44.5°, θ2 = −44.5°, M = 13, and N = 13.

Fig. 6. Experimental setup for the 3D image reconstruction.

In our experiments, the nonunified EIA is generated by the proposed method and the unified EIA is generated by our previous method [7]. Table 2 shows the characteristics of the two methods. The proposed method uses the optimum moiré-reduced angles, whereas the previous method selects the tilted angles that yield a unified EIA. The rendering costs of the two methods also differ. In the proposed method, 169 cameras are needed to generate the 96,476 EIs, and all pixels with the corresponding information are mapped as effective pixels. In the previous method, 96,476 cameras are needed to generate the corresponding 96,476 EIs, and each EI has just 72 effective pixels. The rendering passes are reduced from 96,476 to 169, so the proposed method has a low rendering cost. Besides, in the proposed method, more pixels contribute to the reconstruction of the 3D image. The EIAs and the reconstructed 3D images generated by the two methods are shown in Fig. 7. We can observe that some details of the 3D images reconstructed by the previous method are blurred and some edges are distorted, whereas the 3D images reconstructed by the proposed method have clearer edges and no distortion. The reconstructed viewing images from different directions obtained by the proposed method are shown in Fig. 8.
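The rendering-cost comparison reduces to simple arithmetic; a quick check with the figures reported in Tables 1 and 2:

```python
M = N = 13                    # pixels under each microlens, Eqs. (1)-(2)
num_lenses = 96_476           # total number of microlenses (Table 1)

passes_proposed = M * N       # one pass per orthographic projection image
passes_previous = num_lenses  # one pass per elemental image
print(passes_proposed, passes_previous,
      round(passes_previous / passes_proposed))  # -> 169 96476 571
```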

Table 1. Configuration Parameters of the Proposed II Display

LCD panel:
  Resolution        3840 (H) × 2160 (V)
  Screen size       525.96 mm × 295.85 mm
  Pixel size        0.137 mm × 0.137 mm
Microlens array:
  Pitch             1.27 mm
  Focal length      3.38 mm
  Total number      96,476
3D display mode:
  Gap               3.38 mm
  3D resolution     414 (H) × 233 (V)

Table 2. Characteristics of Different EIA Generation Methods

Characteristic           Proposed Method      Previous Method
Pixel arrangement        Nonunified           Unified
Tilted angles            44.5°/−44.5°         27.6°/−61.4°
Number of cameras        169                  96,476
EI's effective pixels    85                   72
Camera resolution        565 × 318 pixels     10 × 10 pixels

Fig. 7. Generated EIAs and reconstructed 3D images: (a) EIA using the proposed method, (b) EIA using the previous method, (c) 3D image using the proposed method, and (d) 3D image using the previous method.

Fig. 8. Different views of the reconstructed 3D images using the proposed method: (a) top view, (b) left view, (c) middle view, (d) right view, and (e) bottom view.

4. Conclusions

A nonunified EIA generation method for the II display has been proposed. In the proposed method, the camera array arrangement is not affected by the nonuniformity of the pixel arrangement of the EIs. The corresponding information of each EI's pixels is sampled from the redundantly captured parallax images, and the border pixels also serve as effective pixels. Compared with our previous method, the rendering cost of UHD EIA generation has been greatly reduced, and the 3D display quality has been improved.

This work is supported by the "973" Program under Grant No. 2013CB328802, the NSFC under Grant Nos. 61225022 and 61320106015, and the "863" Program under Grant No. 2012AA011901.

References

1. A. Stern and B. Javidi, "Three dimensional image sensing, visualization, and processing using integral imaging," Proc. IEEE 94, 591–607 (2006).
2. J. H. Park, K. Hong, and B. Lee, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48, H77–H94 (2009).


3. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, "Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]," Appl. Opt. 52, 546–560 (2013).
4. Y. Kim, G. Park, S. W. Cho, J. H. Jung, B. Lee, Y. Choi, and M. G. Lee, "Integral imaging with reduced color moiré pattern by using a slanted lens array," Proc. SPIE 6803, 68031L (2008).
5. S. H. Jiao, X. G. Wang, M. C. Zhou, W. M. Li, T. Hong, D. K. Nam, J. H. Lee, E. H. Wu, H. T. Wang, and J. Y. Kim, "Multiple ray cluster rendering for interactive integral imaging system," Opt. Express 21, 10070–10086 (2013).
6. M. Okui, M. Kobayashi, J. Arai, and F. Okano, "Moiré fringe reduction by optical filters in integral three-dimensional imaging on a color flat-panel display," Appl. Opt. 44, 4475–4483 (2005).
7. C. C. Ji, C. G. Luo, H. Deng, D. H. Li, and Q. H. Wang, "Tilted elemental image array generation method for moiré-reduced computer generated integral imaging display," Opt. Express 21, 19816–19824 (2013).
8. K. Yanaka and K. Uehira, "Extended fractional view integral imaging using slanted fly's eye lens," SID Symp. Dig. Tech. Pap. 42, 1124–1127 (2011).
9. Z. L. Xiong, Q. H. Wang, S. L. Li, H. Deng, and C. C. Ji, "Partially-overlapped viewing zone based integral imaging system with super wide viewing angle," Opt. Express 22, 22268–22277 (2014).
10. K. S. Park, S. W. Min, and Y. Cho, "Viewpoint vector rendering for efficient elemental image generation," IEICE Trans. Inf. Syst. E90-D, 233–241 (2007).
