sensors Article

Shape Reconstruction Based on a New Blurring Model at the Micro/Nanometer Scale

Yangjie Wei 1,2,*, Chengdong Wu 3 and Wenxue Wang 2

1 College of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
2 State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110014, China; [email protected]
3 College of Information Science and Engineering, Northeastern University, Shenyang 110819, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-24-8368-7760; Fax: +86-24-2389-3138

Academic Editor: Ning Xi Received: 1 December 2015; Accepted: 18 February 2016; Published: 27 February 2016

Abstract: Real-time observation of three-dimensional (3D) information has great significance in nanotechnology. However, the usual nanometer scale observation techniques, including transmission electron microscopy (TEM) and scanning probe microscopy (SPM), have difficulty obtaining 3D information because they lack non-destructive, intuitive, and fast imaging ability under normal conditions, and optical methods have not been widely used in micro/nanometer shape reconstruction due to the practical requirements and the imaging limitations in micro/nano manipulation. In this paper, a high resolution shape reconstruction method based on a new optical blurring model is proposed. Firstly, the heat diffusion physics equation is analyzed and the optical diffraction model is modified to directly explain the basic principles of image blurring resulting from depth variation. Secondly, a blurring imaging model is proposed based on curve fitting of a 4th order polynomial curve. The heat diffusion equations are combined with the blurring imaging model, and their solution is transformed into a dynamic optimization problem. Finally, experiments with a standard nanogrid, an atomic force microscopy (AFM) cantilever, and a microlens were conducted. The experiments prove that the proposed method can reconstruct 3D shapes at the micro/nanometer scale, and the minimal reconstruction error is 3 nm.

Keywords: blurred imaging model; 3D shape; heat diffusion; micro/nanometer scale reconstruction

1. Introduction

Nowadays, micro/nanometer scale observation, as an enabling technology in nanotechnology, is important for researchers to understand the shapes, characteristics, and interactions between two objects during micro/nano manipulation [1–4]. Therefore, observation techniques at the micro/nanometer scale, especially three-dimensional (3D) and non-destructive techniques, are of great significance for exploring new manipulation methods, developing new system testing techniques, and modeling dynamics and kinematics [5].

The normal tools used in micro/nanometer observation include transmission electron microscopy (TEM), scanning electron microscopy (SEM), scanning probe microscopy (SPM), scanning tunneling microscopy (STM), and optical microscopy. SEM, as an electron microscope, uses a focused beam of electrons to scan a sample, and the resolution of its images is less than 1 nm. However, it has to work under vacuum conditions, and the size of its samples must fit in the specimen chamber. TEM is capable of 3D imaging and can observe objects with sub-nanometer resolution; it requires scanning imaging, which is characterized by high adaptability and low cost.


However, sample preparation for a TEM system is a complex procedure, because the thickness of the specimens has to be on the order of hundreds of nanometers. SPM is a microscopy technique that produces images of surfaces using a physical probe to scan the sample. The samples scanned by SPM can be in air at standard temperature and pressure or submerged in a liquid reaction environment. However, due to the scanning process, SPM is generally slow in acquiring images, as is STM. Compared with these instruments, the requirements of optical microscopes, including the manipulation environment and the equipment cost, are all lower [6]. Besides, benefitting from their real-time and non-destructive imaging ability, it is possible to use optical microscopes to achieve real-time vision feedback and to improve manipulation precision in micro/nano technology [7,8].

However, most recent optical microscopes cannot obtain a 3D image directly, and the 3D reconstruction methods based on optical microscopes at the macro scale, including depth from focus (DFF), depth from stereo (DFS), and depth from defocus (DFD), have some problems in reconstruction at the micro/nanometer scale [9]. For example: (1) the manipulation space in a nanomanipulation system is too limited to set up two optical microscopes for observing an object at the same time, while DFS and the traditional DFD estimate depth information using images of the same scene obtained with two or more cameras at different positions, with different orientations, or with different camera parameters [10–12]; (2) for an optical microscope, the depth of field is inversely proportional to its resolution. Therefore, the depth of field of a high resolution microscope is very small, and it is difficult to capture a sequence of images with different depth information, as required by DFF. Besides, capturing a sequence of images is time-consuming; therefore, DFF is difficult to use in real-time applications [13]; (3) the imaging property of a high resolution microscope is too complicated to be described by the macro-scale principles of light travel, and the traditional geometrical-optics-based techniques have not considered the distribution property of intensity in high resolution microscopy. However, the intensity distribution is the theoretical basis of optical imaging; lacking an understanding of the intensity distribution, it is difficult to describe the imaging process and to improve the resolution of reconstruction theoretically. Therefore, in order to reconstruct the 3D shape of an object at the micro/nanometer scale, a method that can relate the intensity distribution of a source point to the depth variation is needed in real applications.

In response to these issues, a shape reconstruction method based on optical techniques at the micro/nanometer scale is proposed in this paper. Our method provides a mathematical model between the optical intensity distribution and the depth information, and our contribution can be described as follows: first, the heat diffusion physics equation is analyzed and an optical diffraction model is modified to explain the basic principles of the blurred imaging process resulting from depth variation. Second, a new blurring imaging model is achieved based on curve fitting of a 4th order polynomial curve. The heat diffusion equations are introduced to solve the process of imaging blurring, taking into account global optimization.
Finally, a series of experiments are conducted and the results prove that our proposed method can reconstruct 3D shapes at the micro/nanometer scale.

2. Heat Diffusion and Blurring Imaging

2.1. Heat Diffusion in Physics

In physics, the heat diffusion of most fluids and of some homogeneous solid materials such as gels is the same in every direction; this is called isotropic heat diffusion and is characterized by a single diffusion coefficient ε. First, assume that the concentration is u and the flux is J. According to Fick's first law, the relationship between the flux and the concentration gradient can be denoted as:

J(x, y, t) = ε∇u(x, y, t)    (1)


where x and y are the horizontal and the vertical coordinates of a diffusion source, respectively; the intensity of diffusion is controlled by the nonnegative diffusion coefficient ε; t is the diffusion time spent in the caloric diffusion process; "∇" denotes the gradient operator:

∇ = [∂/∂x, ∂/∂y]ᵀ    (2)

Then, the continuity equation, which relates the time derivative of the concentration to the divergence of the flux, can be formulated as:

∂u(x, y, t)/∂t = ∇·J(x, y, t)    (3)

where "∇·" is the divergence operator, ∂/∂x + ∂/∂y. Putting Equations (1) and (3) together, the diffusion equation is denoted as:

∂u(x, y, t)/∂t = ε∇²u(x, y, t)    (4)

Therefore, the isotropic heat diffusion model can be denoted as:

∂u(x, y, t)/∂t = ε∇²u(x, y, t),  u(x, y, 0) = u₀(x, y)    (5)

where u₀(x, y) is the initial condition of the diffusion. If the diffusion coefficient varies spatially, the isotropic heat diffusion is transformed into inhomogeneous heat diffusion, and its model is given by the following equation:

∂u(x, y, t)/∂t = ∇·(ε(x, y)∇u(x, y, t)),  u(x, y, 0) = u₀(x, y)    (6)

The inhomogeneous heat diffusion first appeared in physics, and the diffusion intensity of every point in space is controlled by the heat diffusion coefficient. Therefore, as the inhomogeneous diffusion activity increases, a diffusion region whose density contrast is lower becomes much smoother, while one whose density contrast is higher does not change greatly. Benefitting from this property, the heat diffusion equation has recently been used in image processing.

2.2. Blurring Imaging Model

In geometrical optics, there is an assumption that light travels in straight lines, and image blurring results from the variation of camera parameters, such as the focal length. However, in most optical systems, the imaging beam can only travel through a round hole restricted by a diaphragm. If a small object is required to be imaged clearly, we need an optical system of high magnification in theory. When the size of the object, the size of some elements in the optical system, and the wavelength of imaging are of the same order of magnitude, the light obviously travels around the hole. That means that the direction of the light is changed, and the intensity distribution of an image point is a round light spot, rather than a point as expected [14–16]. This phenomenon is known as optical diffraction. Optical diffraction that appears in a normal optical imaging system is called Fresnel diffraction, and a diagrammatic sketch of the optical path in the optical system is shown in Figure 1, where OXYZ is the coordinate system, X is the optical axis, and O is the origin of the coordinate system.


The YZ plane is the imaging plane. As stated in [17], if we assume P is a random point on the imaging plane, its amplitude can be described by the equation:

Ẽ_P = (A/x₀) exp[ik(x₀ + ρ²/(2R))] · Σ_{n=1}^{∞} (−i)^n (d/ρ)^n J_n(2πdρ/(λR))    (7)

where A is the unit amplitude of P, d is the radius of the lens, and ρ is the distance between P and the X axis; J_n is a Bessel function of order n; λ is the wavelength of the incident light in the imaging system; k = 2π/λ; R is the distance between the lens and the imaging plane; x is the movement of the object plane from the ideal object place along the X axis; x₀ is the movement of the image plane from the ideal image place along the X axis.

Figure 1. The diagrammatic sketch of the optical light traveling in an optical system.

The axial magnification of the camera m is related to the object distance and the imaging distance, therefore x = mx₀, and we modify the description in [17] with the movement of the object plane x:

Ẽ_P = (A/x) exp[ik(x + ρ²/(2R))] · Σ_{n=1}^{∞} (−i)^n (d/ρ)^n J_n(2πdρ/(λR))    (8)

Based on the pinhole image analysis, the imaging properties on the XY plane and the XZ plane are almost the same. Due to this symmetry, the parameter ρ in Equation (8) can be replaced by the parameter y. Therefore, we can reduce Equation (8) to Equation (9) and start to study the intensity distribution of a source point along the Y axis:

Ẽ_P = exp[ik(x + y²/(2R))] · { J₁(2πλ⁻¹y sin u)/(πλ⁻¹y sin u) − [ix/(π sin²u)] Σ_{n=2}^{∞} n J_n(2πλ⁻¹y sin u)(−iy sin u)^(n−1) }    (9)

where sin u = d/R, and y is the distance between P and the Y axis.

Then, the normalized intensity distribution of a random point P can be denoted as:

I_P = Ẽ_P · Ẽ_P* = |B|²    (10)

where B = Ẽ_P exp(−iβ), with β = k(x + y²/(2R)), is the factor in braces in Equation (9).

From Equations (9) and (10), we can see that the intensity distribution of a point on the imaging plane varies with x and y. In order to analyze the pattern of the intensity distribution, we fix the other parameters, such as λ = 600 nm and sin u = 0.5, and the intensity distribution with different x can be calculated, as shown in Figure 2. From Figure 2, it can be seen that the intensity of P is maximal when the random point P coincides with the origin O, and this maximal value decreases when x increases. On both sides of the origin O, the intensity value decreases with increasing distance from O. Furthermore, even when x = 0, I_p(x, y) is distributed as a blurred round light spot, rather than as a point as expected. That means that even when the ideal focus-imaging condition of geometrical optics is satisfied, there is still an imaging blurring process.

Figure 2. The intensity distribution curves of a random point P.
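As a small numerical illustration (not part of the original paper), the leading term of Equation (9) at x = 0 reduces to the Airy-type amplitude 2J₁(v)/v with v = 2πλ⁻¹y sin u, so the on-focus curve in Figure 2 can be approximated with a few lines of Python; λ = 600 nm and sin u = 0.5 are the values quoted in the text, while the sampling grid is an arbitrary choice.

```python
# Sketch: evaluate the in-focus (x = 0) intensity profile implied by Equations (9) and (10),
# I_P(0, y) ~ [2 J_1(v) / v]^2 with v = 2*pi*y*sin(u)/lambda. Illustrative only.
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_n

wavelength = 600e-6            # 600 nm expressed in mm, as used for Figure 2
sin_u = 0.5                    # aperture parameter quoted in the text

y = np.linspace(-1.5e-3, 1.5e-3, 601)             # lateral offset on the image plane (mm)
v = 2.0 * np.pi * y * sin_u / wavelength          # dimensionless Bessel argument
with np.errstate(invalid="ignore", divide="ignore"):
    amplitude = np.where(np.abs(v) > 1e-9, 2.0 * jv(1, v) / v, 1.0)
intensity = amplitude ** 2                        # normalized intensity I_P

print("peak intensity:", float(intensity.max()))                    # 1.0 on the optical axis
print("intensity at y = 0.5e-3 mm:", float(np.interp(0.5e-3, y, intensity)))
```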

From we can can see see that that the the intensity intensity distribution distribution like a a Gaussian Gaussian function From Figure Figure 2, 2, we distribution IIIppp(x, (x, y) y) looks looks like function of y when x is fixed. Therefore, it is reasonable to fit each I p (x, y) with a Gaussian curve, and the fitted whenxxisisfixed. fixed.Therefore, Therefore, it reasonable is reasonable to each fit each y) with a Gaussian and the of yy when it is to fit Ip(x,Iy) with a Gaussian curve,curve, and the fitted p (x, result can be seen in Figure 3, where the theoretical calculation from Equation (10) is denoted with fitted result bein seen in Figure 3, where the theoretical calculation from Equation is denoted result can becan seen Figure 3, where the theoretical calculation from Equation (10) is(10) denoted with the frames, and fitted curves with Gaussian function are with solid withrectangular the rectangular frames, and its fitted a Gaussian function are drawn with the lines. solid the rectangular frames, and its its fitted curvescurves with aawith Gaussian function are drawn drawn with the the solid lines. Then, we can calculate the Gaussian kernel σ from each fitted Gaussian curve, and σ is called as lines. Then, wecalculate can calculate the Gaussian σ from Gaussian curve, is called Then, we can the Gaussian kernelkernel σ from eacheach fittedfitted Gaussian curve, and and σ isσcalled as blurring kernel which can evaluate the concentration of intensity distribution with respect as blurring kernel which canevaluate evaluatethe theconcentration concentrationofofintensity intensitydistribution distribution with with respect respect to blurring kernel which can to different different x. x. 1 1 0.9 0.9 0.8 0.8 0.7 0.7

Figure 3. The fitted Gaussian curves with different depth variations.
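The Gaussian fitting used for Figure 3 can be reproduced with an ordinary least-squares fit. The sketch below is illustrative: the sampled profile and the noise added to it are synthetic, and only the use of scipy.optimize.curve_fit to recover the blurring kernel σ reflects the step described in the text.

```python
# Sketch (synthetic profile): fit a Gaussian to one intensity curve I_p(x, y) and
# read off the blurring kernel sigma, as done for Figure 3.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amplitude, sigma):
    return amplitude * np.exp(-y ** 2 / (2.0 * sigma ** 2))

y = np.linspace(-1.5e-3, 1.5e-3, 301)             # lateral coordinate (mm)
true_sigma = 1.8e-4                               # made-up width for this demonstration
rng = np.random.default_rng(0)
profile = gaussian(y, 1.0, true_sigma) + 0.01 * rng.normal(size=y.size)

(amp_fit, sigma_fit), _ = curve_fit(gaussian, y, profile, p0=(1.0, 1.0e-4))
print(f"fitted blurring kernel sigma = {abs(sigma_fit):.3e} mm")
```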

Therefore, if we know the relationship between the blurring kernel and the depth variation from curve fitting, it is possible to reconstruct the 3D shape of a scene from its blurred image, as shown in Figure 4. This relationship can be called the blurring imaging model, and it is constructed from the analysis of the intensity distribution with respect to the variation of the object distance.

Figure 4. The blurring degree of different depth variations.

3. Depth Reconstruction with the Blurring Imaging Model

From the analysis in Section 2, we can find that a numerical imaging model between x and σ is required if we want to calculate the depth information from the degree of blurring of some blurred images, and the relationship between σ and x in Figure 4 can be fitted with a 4th order polynomial curve after the Gaussian fitting. Figure 5 shows the relationship between σ and x.

Figure 5. The blurring imaging model fitted by a 4th order polynomial curve.

A 4th order polynomial curve is used to fit:

σ = p₁x⁴ + p₂x³ + p₃x² + p₄x + p₅    (11)
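Fitting the 4th order model of Equation (11) to a set of (x, σ) pairs is a standard polynomial least-squares problem; the sketch below uses numpy.polyfit on synthetic data, so the coefficients it prints are illustrative rather than the calibration used in the paper.

```python
# Sketch (synthetic data): fit the blurring imaging model of Equation (11),
# sigma = p1*x^4 + p2*x^3 + p3*x^2 + p4*x + p5, from sampled (x, sigma) pairs.
import numpy as np

x = np.linspace(-1.5e-5, 1.5e-5, 41)                        # depth variations (mm), cf. Figure 5
true_p = np.array([2.0e9, 1.0e5, 3.0e2, 1.0e-2, 1.6e-4])    # made-up coefficients p1..p5
sigma = np.polyval(true_p, x)                               # synthetic sigma(x) samples

scale = 1e-5                                                # rescale x for a well-conditioned fit
q = np.polyfit(x / scale, sigma, deg=4)                     # coefficients in t = x / scale
p_fit = q / scale ** np.array([4, 3, 2, 1, 0])              # back to coefficients of x
print("fitted p1..p5:", p_fit)                              # recovers true_p
```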

The solution process is described as follows: first, normalize the coefficients of the fitting curve in Equation (11):

a = 1,  b = p₂/p₁,  c = p₃/p₁,  d = p₄/p₁,  e = (p₅ − σ)/p₁    (12)

A cubic equation can be obtained to solve the 4th order equation in Equation (11):

y_t³ + q y_t² + m y_t + n = 0    (13)

where q = −c, m = bd − 4e, and n = −d² − b²e + 4ce.


The solution of Equation (13) is:

y_t = [−(2q³/27 − qm/3 + n)/2 − z]^(1/3) + [−(2q³/27 − qm/3 + n)/2 + z]^(1/3) − q/3    (14)

where z = √| (2q³/27 − qm/3 + n)²/4 + ((m − q²/3)/3)³ |.

Finally, the depth variation can be denoted as:

x = [−(b/2 + s) ± √((b/2 + s)² − 4(y_t/2 + ŝ))]/2   or   x = [−(b/2 − s) ± √((b/2 − s)² − 4(y_t/2 − ŝ))]/2    (15)

where s = √| b²/4 − c + y_t | and ŝ = √| y_t²/4 − e |.

The final depth is equal to:

s = s₀ + x    (16)
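As a cross-check of the closed-form inversion in Equations (12)–(15), the same depth variation can be obtained numerically by finding the real roots of the quartic; the coefficients and the measured σ below are placeholders, not the calibration of the paper.

```python
# Sketch (placeholder numbers): invert Equation (11) for the depth variation x by
# solving p1*x^4 + p2*x^3 + p3*x^2 + p4*x + (p5 - sigma) = 0 numerically.
import numpy as np

p = np.array([2.0e9, 1.0e5, 3.0e2, 1.0e-2, 1.6e-4])   # hypothetical p1..p5
sigma_measured = 1.75e-4                               # hypothetical blurring kernel (mm)
s0 = 3.4                                               # ideal object distance (mm), from the text

coeffs = p.copy()
coeffs[-1] -= sigma_measured                           # move sigma into the constant term
x_candidates = np.roots(coeffs)                        # all four roots of the quartic
x_real = x_candidates[np.abs(x_candidates.imag) < 1e-9].real
print("candidate depth variations x (mm):", x_real)
print("candidate final depths s = s0 + x (mm):", s0 + x_real)   # Equation (16)
```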

From Equation (11), we can find that after the curve fitting it is easy to know p₁ to p₅ in Equation (11). Therefore, there is only one unknown parameter, the blurring kernel σ, apart from the desired depth information. In this paper, we analyze the basic principle of blurring imaging and then construct the formula of the blurring kernel σ. When a real aperture camera is analyzed, the blurred image E captured on the imaging plane can be approximated with the following equation:

E(y, z) = ∬ h(y, z, r) I(u, v) du dv    (17)

where r is the radius of the blurring round spot; y and z are the horizontal and the vertical coordinates of an image point, respectively; h is the point spread function (PSF) of the intensity distribution; E(y, z) is the blurred image; I(y, z) is the radiance image. If the scene we consider is made of an equifocal plane, which is parallel to the image plane, the depth map of the plane and the PSF h, which is shift-invariant, can be denoted as s(y, z) = s and h(y, z, r) = h(y − u, z − v, r), respectively. Therefore, the image formation model can be denoted by a simple convolution:

E(y, z) = h(y, z) * I(y, z)    (18)

where "*" denotes convolution in mathematics. From Figure 2, we can find that for most of the depth variation the PSF can be denoted as a Gaussian function:

h(y, z, y′, z′) = [1/(2πσ²)] exp(−[(y − y′)² + (z − z′)²]/(2σ²))    (19)

where σ is the concentration parameter of the Gaussian function, and it is called the Gaussian kernel; σ = ζr, where ζ is a calibration parameter.


Then, the image model in Equation (17) can be formulated with the heat equation in Equation (5):

∂u(y, z, t)/∂t = ε∆u(y, z, t),  ε ∈ [0, ∞), t ∈ (0, ∞),  u(y, z, 0) = r₀(y, z)    (20)

where r₀(y, z) is the radiance image without any blurring, the solution u at a time t is an image I(y, z) = u(y, z, τ), and a certain setting is related to a certain time τ. The variance σ is related to the diffusion coefficient ε by:

σ² = 2tε    (21)

When the PSF is shift varying, the equivalence with the isotropic heat equation cannot describe the complicated diffusion property, and the diffusion process can be formulated by the inhomogeneous diffusion equation in Equation (6):

∂u(y, z, t)/∂t = ∇·(ε(y, z)∇u(y, z, t)),  t ∈ (0, ∞),  u(y, z, 0) = r₀(y, z)    (22)
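The equivalence in Equation (21) between isotropic diffusion time and Gaussian blurring can be checked numerically: running the explicit heat equation for a time t with coefficient ε should closely match a Gaussian filter of standard deviation √(2tε). The grid, the test image, and the step sizes below are arbitrary choices made only for the check.

```python
# Sketch: verify Equation (21), sigma^2 = 2*t*eps, by comparing explicit isotropic
# diffusion of an image with a Gaussian filter of width sqrt(2*t*eps) (in pixels).
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

rng = np.random.default_rng(0)
u = gaussian_filter(rng.random((96, 96)), 2.0)        # smooth test image r0(y, z)

eps, dt, steps = 1.0, 0.1, 100                        # diffusion coefficient and time stepping
diffused = u.copy()
for _ in range(steps):
    diffused += dt * eps * laplace(diffused)          # forward Euler step of du/dt = eps * Laplacian(u)

t = dt * steps
blurred = gaussian_filter(u, sigma=np.sqrt(2.0 * t * eps))   # Equation (21)
print("max |diffusion - Gaussian blur| =", float(np.abs(diffused - blurred).max()))
```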

The relationship between the diffusion coefficient ε and the space-varying variance σ can be denoted as:

σ²(y, z) = 2tε(y, z)    (23)

Suppose there are two images E₁(y, z) and E₂(y, z) for two different blurring settings σ₁ and σ₂, with σ₁ < σ₂ (that is, E₁(y, z) is less blurred than E₂(y, z)); from Equation (18), E₂(y, z) can then be denoted:

E₂(y, z) = ∬ h(y, z, σ₂²) I(u, v) du dv
         = ∬ [1/(2πσ₂²)] exp(−[(y − u)² + (z − v)²]/(2σ₂²)) I(u, v) du dv
         = ∬ [1/(2π(σ₂² − σ₁²))] exp(−[(y − u)² + (z − v)²]/(2(σ₂² − σ₁²))) { ∬ [1/(2πσ₁²)] exp(−[(u − ỹ)² + (v − z̃)²]/(2σ₁²)) I(ỹ, z̃) dỹ dz̃ } du dv
         = ∬ [1/(2π∆σ²)] exp(−[(y − u)² + (z − v)²]/(2∆σ²)) E₁(u, v) du dv
         = h(y, z, ∆σ²) * E₁(y, z)    (24)

where ∆σ² = σ₂² − σ₁² is the relative blurring between the blurred images E₁(y, z) and E₂(y, z). Therefore, a blurred image can be formulated from another blurred image, taking into account the relative blurring, without the help of the radiance image. Suppose E₁(y, z), with the depth map s₁(y, z), is the first blurred image captured before a depth variation; E₂(y, z), with the depth map s₂(y, z), is another blurred image captured after the depth variation; ∆s(y, z) is the depth variation along the optical axis; s₀ is the ideal object distance. From the analysis above, we can find that if ∆s is known, it is reasonable to calculate the initial depth s₁(y, z) from the heat diffusion equations. First, the imaging process between two blurred images of the same scene can be denoted by Equation (24), and based on it the blurred imaging process between E₁(y, z) and E₂(y, z) can be denoted with the heat diffusion equations:

∂u(y, z, t)/∂t = ∇·(ε(y, z)∇u(y, z, t)),  t ∈ (0, ∞),  u(y, z, 0) = E₁(y, z),  u(y, z, ∆t) = E₂(y, z)    (25)
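Equation (24) states that the more blurred image can be produced from the less blurred one by a Gaussian kernel of variance ∆σ² = σ₂² − σ₁² (the Gaussian semigroup property). A quick numerical confirmation, with synthetic σ₁ and σ₂ values in pixel units, is sketched below.

```python
# Sketch: check the relative-blurring identity of Equation (24):
# blurring E1 (already blurred with sigma1) by sqrt(sigma2^2 - sigma1^2) reproduces E2.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
radiance = rng.random((96, 96))            # stand-in radiance image I(y, z)

sigma1, sigma2 = 1.5, 3.0                  # two blurring settings (pixels), sigma1 < sigma2
E1 = gaussian_filter(radiance, sigma1)
E2 = gaussian_filter(radiance, sigma2)

delta_sigma = np.sqrt(sigma2 ** 2 - sigma1 ** 2)
E2_from_E1 = gaussian_filter(E1, delta_sigma)          # h(y, z, delta_sigma^2) * E1
print("max |E2_from_E1 - E2| =", float(np.abs(E2_from_E1 - E2).max()))
```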


The relative blurring between them is:

∆σ² = σ₂² − σ₁² = (p₁x₂⁴ + p₂x₂³ + p₃x₂² + p₄x₂ + p₅)² − (p₁x₁⁴ + p₂x₁³ + p₃x₁² + p₄x₁ + p₅)²    (26)

Then, the time ∆t can be taken as the variable which describes the global amount of blurring, and the diffusion coefficient ε can be denoted as:

ε = ∆σ²/(2∆t)    (27)

Because our method is a global algorithm, an optimization problem is required to solve the diffusion equations globally, and the optimization problem can be transformed as:

ŝ = arg min_{s₂(y,z)} ∬ (u(y, z, ∆t) − E₂(y, z))² dy dz    (28)

However, the optimization process in Equation (28) is ill-posed in mathematics, which means that it is difficult to find the minimum solution from Equation (28). Therefore, we regularize the problem with a Tikhonov penalty:

ŝ = arg min_{s₂(y,z)} ∬ (u(y, z, ∆t) − E₂(y, z))² dy dz + α‖∇s₂(y, z)‖² + αυ‖s₂(y, z)‖²    (29)

where the additional terms impose a smoothness constraint on the depth map. In practice, we can define α > 0 and υ > 0 so small that they introduce no additional influence on the cost energy, denoted as:

D(s) = ∬ (u(y, z, ∆t) − E₂(y, z))² dy dz + α‖∇s‖² + αυ‖s‖²    (30)

Thus the solution process is transformed into:

ŝ = arg min_s D(s)    (31)
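Assuming u(y, z, ∆t) has already been obtained from the diffusion step, the regularized cost of Equation (30) can be evaluated directly; the arrays below are random stand-ins, and the gradient term is approximated with numpy's finite differences.

```python
# Sketch: evaluate the Tikhonov-regularized cost D(s) of Equation (30) for a candidate
# depth map s, given the diffused image u(y, z, delta_t) and the second image E2.
import numpy as np

def regularized_cost(u_delta_t, E2, s, alpha=1e-3, upsilon=1e-3):
    data = np.sum((u_delta_t - E2) ** 2)          # data-fidelity term
    gy, gz = np.gradient(s)
    smooth = np.sum(gy ** 2 + gz ** 2)            # ||grad s||^2, finite-difference approximation
    size = np.sum(s ** 2)                         # ||s||^2
    return data + alpha * smooth + alpha * upsilon * size

rng = np.random.default_rng(0)
u_dt = rng.random((64, 64))                       # stand-in for u(y, z, delta_t)
E2 = rng.random((64, 64))                         # stand-in for the second blurred image
s = np.full((64, 64), 3.4)                        # equifocal initial depth map (mm)
print("D(s) =", float(regularized_cost(u_dt, E2, s)))
```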

The flow-process diagram of our algorithm is shown in Figure 6. Our algorithm can be described by the following six steps:

(1) Give the camera parameters R, ζ, and s₀; two blurred images E₁ and E₂; a threshold τ; α, υ, and the step size β of each iteration;
(2) Initialize the depth map as an equifocal plane;
(3) Calculate Equation (26) and obtain the relative blurring;
(4) Solve the diffusion equations in Equation (25) and obtain the solution u(y, z, ∆t);
(5) Based on u(y, z, ∆t), calculate Equation (30). If the cost energy in Equation (30) is less than the threshold τ, the algorithm stops and the final depth map is obtained; otherwise, calculate the following equation with step β:

∂s/∂t = −D′(s)    (32)

(6) Calculate Equation (27), update the global depth information, and return to Step 3.
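The six steps above can be condensed into a short loop. The sketch below is only an illustration of the control flow under simplifying assumptions: the depth map is reduced to one global value, the forward diffusion of Equation (25) is replaced by the equivalent relative Gaussian blurring of Equation (24), the cost keeps only the data term, and D′(s) is approximated by a numerical derivative; the polynomial coefficients are placeholders, not the paper's calibration.

```python
# Sketch (simplified, single global depth): iterate Equations (26)-(32) by blurring E1
# with the relative blur predicted for the current depth, comparing with E2, and
# descending on the squared error. All numeric constants are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

P = (2.0e4, 1.0e3, 5.0e2, 1.0e1, 1.0)            # hypothetical p1..p5 (sigma in pixels)

def sigma_of_x(x):
    return np.polyval(P, x)                      # Equation (11)

def data_cost(depth, E1, E2, s0, delta_s):
    x1 = depth - s0
    x2 = x1 + delta_s
    delta_sigma2 = sigma_of_x(x2) ** 2 - sigma_of_x(x1) ** 2       # Equation (26)
    u_dt = gaussian_filter(E1, np.sqrt(max(delta_sigma2, 1e-12)))  # Equations (24)/(25)
    return np.sum((u_dt - E2) ** 2)              # data term of Equation (30)

def estimate_depth(E1, E2, s0=3.4, delta_s=1e-4, tau=1e-3, beta=1e-6, iters=100):
    s = s0                                       # step 2: start on the equifocal plane
    for _ in range(iters):                       # steps 3-6
        D = data_cost(s, E1, E2, s0, delta_s)
        if D < tau:                              # step 5: stop when the cost is small enough
            return s
        h = 1e-6                                 # numerical derivative standing in for D'(s)
        grad = (data_cost(s + h, E1, E2, s0, delta_s) - D) / h
        s -= beta * grad                         # Equation (32) with step size beta
    return s
```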

Figure 6. The process diagram of our method.

4. Experiment

We validate our algorithm with a nanogrid, an AFM cantilever, and a microlens from a microlens array. The reason we choose them as our samples is that their shapes have been structured or controlled precisely at the micro/nanometer scale. In the first experiment, a nanogrid, whose step height is 500 nm and whose pitch is 1 µm, is placed on a Physik Instrumente (PI) nanoplatform, and its shape is reconstructed with our algorithm. The connection of our experiment is shown in Figure 7.

Figure 7. The connection of our experiment.

In the second experiment, the PI nanoplatform is placed tightly up to the tip of the AFM cantilever, and the raised height of the platform is 100 nm. Then, we use our algorithm to reconstruct the shape when the AFM cantilever is bent to the expected height. In the third experiment, the geometrical shape of each microlens is a hemispheroid. Its average radius along the vertical direction is 2 µm, and its average radius along the horizontal direction is 6.5 µm. The depth variation ∆s is controlled by the PI platform where the microlens is placed. In order to evaluate the reconstructed shape of our algorithm, we use a Veeco Dimension 3100 AFM system to scan the microlens before the shape reconstruction. The scanning frequency of the AFM system is 1 Hz. In these experiments, the microscope that we use is a HIROX-7700 with an amplification factor of 7000. The remaining parameters of the microscope are a focal length f = 0.357 mm, s0 = 3.4 mm, and ∆s = 100 nm.

4.1. Nano Grid

First, we capture the first blurred image of the nanogrid before raising the PI nanoplatform; then the PI nanoplatform is raised by 100 nm, and we capture another blurred image of the grid. Finally, we reconstruct the 3D shape of the nanogrid with these two blurred images, and the results can be seen in Figures 8–10. Figure 8a is the blurred image before the PI platform is raised and Figure 8b is that after raising, and the intensity values in the area of the dotted rectangle are presented with a matrix; the reconstructed shape is shown in Figure 9. Figure 10 is a section comparison between the reconstructed shape and the true shape of the nanogrid. The unit of the depth axis in Figure 9 is mm.

Figure 8. The blurred images of a nanogrid (the intensity values in the dotted rectangles are presented with two matrixes). (a) The blurred image before raising the platform; (b) the blurred image after raising the platform.

Figure 9. The reconstructed shape of the grid.

Figure 10. The comparison of an arbitrary section.

From Figure 10, we can find that our reconstruction method can reconstruct the shape variation precisely compared to the ground truth, and the maximal height error of the reconstructed shape appears at the top of the second pitch, which is 0.03 µm.

4.2. AFM Cantilever

Second, the AFM cantilever is tested. First, the blurred image of the AFM cantilever before the PI platform is raised is obtained. After the PI nanoplatform rises by 100 nm, we capture another blurred image of the AFM cantilever. Then, the shape of the bent cantilever is reconstructed with our algorithm, and the results can be seen in Figures 11–13. Figure 11a is the blurred image before the PI platform rising and Figure 11b is the blurred image after rising; Figure 12 is the reconstructed shape of the AFM cantilever. Figure 13 is a random profile of the bent AFM cantilever. In Figure 13, in order to show the depth map clearly, a distance of 3.4 mm is subtracted from the vertical axis. The unit of the depth axis in Figure 13 is mm.

Figure 11. The blurred images of the tested AFM cantilever (the intensity values in the dotted rectangles are presented with two matrixes). (a) The blurred image before the platform is raised; (b) The blurred image after raising the platform.

Figure 12. The reconstructed 3D shape of the cantilever.

Figure 13. An arbitrary section of the reconstructed AFM cantilever.

From Figures 12 and 13, the following conclusions are obtained:

(1) The highest bend on the AFM cantilever is at the cantilever's end with the tip, and the bend height between the highest point and the lowest point is 97 nm (1.36 × 10⁻⁴ mm − 0.39 × 10⁻⁴ mm = 0.97 × 10⁻⁴ mm) using our method. Therefore, the error of the highest bent point is 3 nm.
(2) The reconstructed shape of the platform is planar compared to the bent cantilever, which coincides with the experimental fact.
(3) On both sides of the bent tip, we can see distinct troughs next to the spike. The reason for this result is that in this paper a global optimization method, which requires a secular change until an optimization value is obtained, is used to solve the diffusion equations. Therefore, one of our future tasks will be to modify the solution method during the shape reconstruction process to solve this problem, and the research direction is to design an adaptive scheme to find an optimal threshold and an interactive step for different surfaces.

4.3. Microlens

In this experiment, we capture the first blurred image of the microlens with our microscope and control the PI platform to rise by 100 nm. Then, we capture the second blurred image and reconstruct the shape of the tested microlens with our algorithm and with the traditional SFD without optical diffraction presented in reference [6], respectively. The experimental results are shown in Figures 14–19. Figure 14 is the 3D image of the microlens after AFM scanning, and Figure 15, which shows that the horizontal radius of the microlens is 6.5 µm and its height is 2 µm, is the profile of an arbitrary section from the AFM scanning. Figure 16a is the blurred image before the platform is raised and Figure 16b is the blurred image after raising it; the reconstructed 3D shapes shown in Figures 17 and 18 are the results of our new SFD and the traditional SFD, respectively.

Figure 14. The image of the microlens scanned by AFM.

Figure 15. The profile of a section scanned by AFM.

Figure 16. The blurred images of the microlens (the intensity values in the dotted rectangles are presented with two matrixes). (a) The blurred image before the platform is raised; (b) The blurred image after raising the platform.

Figure 17. The reconstructed shape of the microlens with our method.

From Figure 17, it can be seen that the entire shape of the microlens and the planar substrate are reconstructed at the same time. The radius of the hemisphere along the vertical direction is approximately 2 µm, and its horizontal radius is approximately 6.5 µm. In Figure 18 it is difficult to detect the boundary of the microlens from the reconstructed shape of the traditional SFD, and the planar substrate is not reconstructed as expected. Moreover, in order to compare the reconstruction result with the AFM scanning, we cut a section of our reconstructed shape randomly and compare it with the result of the AFM scanning, as shown in Figure 19, where we can see that our reconstructed profile is close to the scanning profile of the AFM.

Figure 18. The reconstructed shape of the traditional SFD [6].

Figure 19. The profiles of the microlens with AFM and our method.


Table 1 is a comparison between the AFM scanning, our 3D reconstruction method, and the traditional SFD. From Table 1, we can find that compared to the AFM scanning, the vertical error of our method is smaller than the horizontal error; the minimum error at the highest point is 3 nm, and the average error of our new algorithm is 53 nm, while the complete running time of our method is only 11% of the AFM scanning time. For the traditional SFD, the vertical error and the horizontal error are both higher than those of our method; the minimum error at the highest point is 20 nm, and the total working time is close to that of our method.

Table 1. Performance of our 3D reconstruction method.

Parameter                               AFM             Our Method       Traditional SFD
Vertical measurement (mm)               2.10 × 10⁻³     2.103 × 10⁻³     1.98 × 10⁻³
Horizontal measurement (mm)             13.5 × 10⁻³     13.7 × 10⁻³      14.6 × 10⁻³
Running time (s, 256 × 256 pixels)      240             27               25

5. Conclusions

In this paper, a shape reconstruction method with an optical blurred imaging model at the micro/nanometer scale is proposed and tested with a standard grid, an AFM cantilever, and a microlens. Our primary contribution is to modify the blurred imaging model of Fresnel diffraction in optics and to develop a direct relationship between the intensity distribution of a source point and its depth information. Our second contribution is a new blurred imaging model based on curve fitting of a 4th order polynomial curve. In this way, the blurred imaging is combined with the heat diffusion equations, which are solved by a dynamic optimization method. Finally, a standard nanogrid, an AFM cantilever, and a microlens are tested with our proposed method at the micro/nanometer scale, respectively. The results prove that the proposed algorithm can reconstruct 3D shapes from two blurred images, and the minimal reconstruction error is 3 nm.

Acknowledgments: The authors thank the funding support from the Natural Science Foundation of China (No. 61305025, 61532007). The authors also thank the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, for its support.

Author Contributions: Yangjie Wei conceived the research project and proposed the main idea of the paper; Chengdong Wu designed the experiments and analyzed the data; Wenxue Wang prepared the materials and performed the experiments. Yangjie Wei wrote the paper. Chengdong Wu and Wenxue Wang edited and approved the final document. All authors read and approved the final manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Overbaugh, J.; Bangham, C.R.M. Selection forces and constraints on retroviral sequence variation. Science 2001, 292, 1106–1109. [CrossRef] [PubMed]
2. Huang, C.; Wu, L.Y. State technological development: A case of China's nanotechnology development. World Dev. 2012, 40, 970–982. [CrossRef]
3. Guthold, M.; Falvo, M.R.; Matthews, W.G.; Paulson, S.; Washburn, S.; Erie, D.A. Controlled manipulation of molecular samples with the nanomanipulator. IEEE ASME Trans. Mechatron. 2000, 5, 189–198. [CrossRef]
4. Rubio-Sierra, F.J.; Burghardt, S.; Kempe, A. Atomic force microscope based nanomanipulator for mechanical and optical lithography. In Proceedings of the IEEE Conference on Nanotechnology, Munich, Germany, 16–19 August 2004; pp. 468–470.
5. Wu, L.D. Computer Vision; Fudan University Press: Shanghai, China, 1993; pp. 1–89.
6. Wei, Y.J.; Dong, Z.L.; Wu, C.D. Depth measurement using single camera with fixed camera parameters. IET Comput. Vis. 2012, 6, 29–39. [CrossRef]
7. Venema, L.C.; Meunier, V.; Lambin, P.; Dekker, C. Atomic structure of carbon nanotubes from scanning tunneling microscopy. Phys. Rev. B 2000, 61, 2991–2996. [CrossRef]
8. Tian, X.J.; Wang, Y.C.; Xi, N.; Dong, Z.L.; Tong, Z.H. Ordered arrays of liquid-deposited SWCNT and AFM manipulation. Sci. China 2008, 53, 251–256.
9. Yin, C.Y. Determining residual nonlinearity of a high-precision heterodyne interferometer. Opt. Eng. 1999, 38, 1361–1365. [CrossRef]
10. Girod, B.; Scherock, S. Depth from defocus of structured light. In Proceedings of the Optics, Illumination, and Image Sensing for Machine Vision, Philadelphia, PA, USA, 8–10 November 1989; pp. 209–215.
11. Favaro, P.; Burger, M.; Osher, S.J. Shape from defocus via diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 518–531. [CrossRef] [PubMed]
12. Favaro, P.; Mennucci, A.; Soatto, S. Observing shape from defocused images. Int. J. Comput. Vis. 2003, 52, 25–43. [CrossRef]
13. Nayar, S.K.; Watanabe, M.; Noguchi, M. Real-time focus range sensor. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 1186–1198.
14. Word, R.C.; Fitzgerald, J.P.S.; Konenkamp, R. Direct imaging of optical diffraction in photoemission electron microscopy. Appl. Phys. Lett. 2013, 103. [CrossRef]
15. Kantor, I.; Prakapenka, V.; Kantor, A.; Dera, P.; Kurnosov, A.; Sinogeikin, S.; Dubrovinskia, A.; Dubrovinsky, L. BX90: A new diamond anvil cell design for X-ray diffraction and optical measurements. Rev. Sci. Instrum. 2012, 83. [CrossRef] [PubMed]
16. Oberst, H.; Kouznetsov, D.; Shimizu, K.; Fujita, J.; Shimizu, F. Fresnel diffraction mirror for atomic wave. Phys. Rev. Lett. 2005, 94. [CrossRef] [PubMed]
17. Wang, Z.Q.; Wang, P. Rayleigh criterion and K Strehl criterion. Acta Photonica Sin. 2000, 29, 621–625.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
