Three-dimensional inline inspection for substrate warpage and ball grid array coplanarity using stereo vision

Takeshi Nakazawa* and Ayman Samara
Intel Corporation, 5000 W. Chandler Boulevard, Chandler, Arizona 85226, USA
*Corresponding author: [email protected]

Received 30 January 2014; revised 5 April 2014; accepted 8 April 2014; posted 9 April 2014 (Doc. ID 205491); published 9 May 2014

We present a method for full-field 3D measurement of substrate warpage and ball grid array (BGA) coplanarity, which is suitable for inline back-end inspection and process monitoring. For evaluating the performance of the proposed system, the linearity between our system and a reference confocal microscope is studied by repeating measurements 35 times with a particular substrate sample (38 mm × 28.5 mm). The point-to-point correlation coefficient with 1σ between the two methods is 0.968 ± 0.002, and the 2σ difference is 25.15 ± 0.20 μm for the warpage measurement. The 1σ repeatability of the substrate warpage is 4.2 μm. For BGA coplanarity inspection, the bump-level correlation coefficient is 0.957 ± 0.001 and the 2σ difference is 28.79 ± 0.14 μm. The 1σ repeatability of the BGA coplanarity is 3.7 μm. Data acquisition takes about 0.2 s for full-field measurements. © 2014 Optical Society of America

OCIS codes: (150.3040) Industrial inspection; (150.5495) Process monitoring and control; (150.0150) Machine vision; (150.6910) Three-dimensional sensing; (110.0110) Imaging systems.
http://dx.doi.org/10.1364/AO.53.003101

1. Introduction

In the semiconductor industry, electronic packaging plays an essential role in improving the performance of electronic devices. The goal for the production of a high-performance electronic system is to package devices as densely as possible in order to minimize circuit path length [1]. To achieve this goal, the trend in integrated circuit (IC) packaging is to increase the input/output (I/O) count and to decrease the size of the packaging [2]. The ball grid array (BGA) is the most common packaging technique used in industry because of its high I/O density and shorter electrical paths. Due to high-density packaging, however, process controls for assembly become critical for reducing problems such as connection failures between the BGA and a circuit board. Thus it is important to measure the IC package surface profile to decrease device failures.

Two important quality metrics for package inspection are the substrate warpage and the BGA coplanarity. Figure 1 shows the schematics of an IC package. Due to thermal cycling during the manufacturing process and materials with different expansion rates, a substrate is warped. In order to calculate the BGA coplanarity, the z coordinates of each ball are required, and a regression plane is defined based on these z locations. Coplanarity is defined as the distance between the maximum z and the minimum z from the best-fit plane. The BGA coplanarity directly affects solder joint reliability, and the causes of large coplanarity are substrate warpage and ball height differences. The substrate warpage is typically the major contributor to any lack of coplanarity since the solder ball heights are relatively uniform [3]. Therefore, the substrate warpage is one of the key metrics for the quality control of IC packages.

10 May 2014 / Vol. 53, No. 14 / APPLIED OPTICS

Fig. 1. Schematics of an IC package (concave warpage).

Optical-based profilers have long been used for nondestructive measurement. Common optical inspection tools used in IC package characterization are confocal microscopes, white-light interferometers (WLI), laser devices [4,5], fringe projection devices [6], and machine vision techniques [7]. Depending on the purpose of the measurement, an appropriate metrology should be employed in order to maximize output performance. For example, confocal microscopes or WLI are widely used in laboratories to characterize sampled IC packages because measurement accuracy is more important than throughput. On the other hand, factories use machine vision systems for large-volume inspection due to their high throughput and cost advantage. From a quality-control perspective, the high-speed inspection systems used in factories play a key role in monitoring production yield. Thus we focus on inline inspection system development for use in factories, rather than in laboratories, to meet the demand for measuring high-density BGA packages.

Stereo vision is used to reconstruct a 3D object by finding matching pixels (point correspondences) between images captured by two cameras from different view angles and converting these 2D pixel coordinates into 3D depth. In computer vision, the point correspondence algorithm has been one of the most widely studied subjects [8–10]. For accurate reconstruction, the transformation relationships between a camera lens and an image plane, as well as between a camera and a scene, should be determined. This process is called camera calibration. Tsai [11] and Zhang [12,13] developed the most commonly used calibration methods in computer vision. Although there are a number of applications for 3D measurement [14–20], studies of BGA coplanarity, substrate warpage, and bump height measurement using stereo vision are limited [21,22].

In this paper, we propose an inline stereo vision system for BGA coplanarity and substrate warpage inspection. In Section 2, the theoretical aspects of stereo vision are discussed. In Section 3, we describe the hardware setup and calibration procedure as well as computer simulation and experimental results for the substrate warpage and BGA coplanarity. Finally, the conclusion is given in Section 4.

2. Theory

Figure 2 shows the epipolar geometry [23]. Stereo vision employs two cameras viewing an object from different angles. The world coordinates are given by X_w, Y_w, and Z_w. The camera coordinates are given by (x_1, y_1) and (x_2, y_2) for Camera 1 and Camera 2, respectively. The points C_1 and C_2 are the camera centers. The object point A in the world coordinates is imaged to a_1 in Camera 1 and a_2 in Camera 2. The points C_1, C_2, and A construct the plane called the epipolar plane. The line connecting C_1 and C_2 is called the baseline, and its intersection points with each image plane are called the epipoles e_1 and e_2. The epipolar plane intersects the image planes, and these intersections are called the epipolar lines l_1 and l_2. We can write the relationship between a_1, a_2, and A as follows:

a_1 = P_1 A,    (1)

a_2 = P_2 A,    (2)
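The projection relations in Eqs. (1) and (2), together with the linear triangulation used later in this section, can be sketched with NumPy. The projection matrices below are synthetic stand-ins for illustration, not the calibrated matrices from the experiment:

```python
import numpy as np

# Two synthetic 3x4 projection matrices (stand-ins for the calibrated P1, P2).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera 1 at origin
P2 = np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])  # camera 2 shifted in X

A = np.array([3.0, 2.0, 50.0, 1.0])  # homogeneous world point

def project(P, A):
    """a = P A, normalized so the third homogeneous coordinate is 1."""
    a = P @ A
    return a / a[2]

a1, a2 = project(P1, A), project(P2, A)

def triangulate(P1, P2, a1, a2):
    """Linear triangulation: stack two rows of a x (P A) = 0 per view and
    take the right null space via SVD (row of V^T for the smallest
    singular value)."""
    K = np.stack([
        a1[0] * P1[2] - P1[0],
        a1[1] * P1[2] - P1[1],
        a2[0] * P2[2] - P2[0],
        a2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(K)
    X = Vt[-1]
    return X / X[3]

X = triangulate(P1, P2, a1, a2)
print(np.round(X[:3], 6))  # recovers the original world point (3, 2, 50)
```

This uses the standard two-rows-per-view form of the triangulation system; the six-row form in the text adds one redundant row per camera.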

where P is the 3 × 4 homogeneous camera projection matrix, which maps a point in the world coordinates to the corresponding point in the camera coordinates. Given known point correspondences a and A, the matrix P can be reconstructed by using the direct linear transformation (DLT) [24] as

\begin{bmatrix}
X_1 & Y_1 & Z_1 & 1 & 0 & 0 & 0 & 0 & -x_1 X_1 & -x_1 Y_1 & -x_1 Z_1 & -x_1 \\
0 & 0 & 0 & 0 & X_1 & Y_1 & Z_1 & 1 & -y_1 X_1 & -y_1 Y_1 & -y_1 Z_1 & -y_1 \\
\vdots & & & & & & & & & & & \vdots \\
X_i & Y_i & Z_i & 1 & 0 & 0 & 0 & 0 & -x_i X_i & -x_i Y_i & -x_i Z_i & -x_i \\
0 & 0 & 0 & 0 & X_i & Y_i & Z_i & 1 & -y_i X_i & -y_i Y_i & -y_i Z_i & -y_i
\end{bmatrix}
\begin{bmatrix}
P_{11} \\ P_{12} \\ P_{13} \\ P_{14} \\ P_{21} \\ P_{22} \\ P_{23} \\ P_{24} \\ P_{31} \\ P_{32} \\ P_{33} \\ P_{34}
\end{bmatrix} = 0, \quad (3)

or simply, Kp = 0. This can be solved by singular value decomposition (SVD):

K = U S V^T. \quad (4)

Then p is the last column of V [25]. Before applying the SVD, it is important to perform appropriate normalization to obtain meaningful results [26].

Once the system parameters are determined, object heights can be reconstructed from these P matrices and a set of corresponding points a_1 and a_2 in each image plane. The simplest approach for height reconstruction is linear triangulation [27]. For each camera we have a_1 = P_1 A and a_2 = P_2 A, which can also be expressed as a_1 × (P_1 A) = 0 and a_2 × (P_2 A) = 0. These equations can be combined as

\begin{bmatrix}
x_1 P^1_{31} - P^1_{11} & x_1 P^1_{32} - P^1_{12} & x_1 P^1_{33} - P^1_{13} & x_1 P^1_{34} - P^1_{14} \\
y_1 P^1_{31} - P^1_{21} & y_1 P^1_{32} - P^1_{22} & y_1 P^1_{33} - P^1_{23} & y_1 P^1_{34} - P^1_{24} \\
x_1 P^1_{21} - y_1 P^1_{11} & x_1 P^1_{22} - y_1 P^1_{12} & x_1 P^1_{23} - y_1 P^1_{13} & x_1 P^1_{24} - y_1 P^1_{14} \\
x_2 P^2_{31} - P^2_{11} & x_2 P^2_{32} - P^2_{12} & x_2 P^2_{33} - P^2_{13} & x_2 P^2_{34} - P^2_{14} \\
y_2 P^2_{31} - P^2_{21} & y_2 P^2_{32} - P^2_{22} & y_2 P^2_{33} - P^2_{23} & y_2 P^2_{34} - P^2_{24} \\
x_2 P^2_{21} - y_2 P^2_{11} & x_2 P^2_{22} - y_2 P^2_{12} & x_2 P^2_{23} - y_2 P^2_{13} & x_2 P^2_{24} - y_2 P^2_{14}
\end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = 0, \quad (5)

where P^n_{ij} denotes each element of the P_1 or P_2 matrix. Similarly, this equation can be solved by SVD.

Now consider a ray that is back-projected from the point a_1 into the 3D scene (A″–A′–A) in Fig. 2. Given a point a_1 in the image plane, we want to find the set of points that construct a ray passing through the camera center C_1. To construct a ray in space, we need two points. One is the camera center C_1, and the other point can be obtained from Eq. (1) as

A = P_1^{+} a_1, \quad (6)

where P^{+} is the pseudoinverse of P. Since P P^{+} = I, the point P_1^{+} a_1 lies on the ray. This ray is imaged by Camera 2 through the camera center C_2 and constructs the line l_2, which can be written as

l_2 = (P_2 C_1) \times (P_2 P_1^{+} a_1). \quad (7)

Since the projection of C_1 onto Camera 2 is the epipole e_2, Eq. (7) becomes

l_2 = [e_2]_{\times} P_2 P_1^{+} a_1 = F a_1. \quad (8)

The matrix F is called the fundamental matrix in the machine vision community. Since the point a_2 lies on the line l_2, we can write

a_2^T l_2 = 0. \quad (9)

From Eqs. (8) and (9),

a_2^T F a_1 = 0. \quad (10)

For the point correspondence a_1 and a_2, the fundamental matrix satisfies the above condition, which is called the epipolar constraint.

Fig. 2. Epipolar geometry.

3. Simulation and Experiment Results

A. Hardware Setup

Figure 3 illustrates the system setup. We have two CMOS cameras (4096 × 3072 pixels, 25 fps) with a pixel size of 6 μm × 6 μm. Three diffuse illumination sources are used in this setup. An on-axis light source is located above the IC package and is used for masking purposes when image processing is performed. A centroid of the reflected light is used to locate the x and y coordinates of each bump. Two light sources, Light 1 and Light 2, are used to obtain good-contrast images. The angle and height of these two light sources need to be adjusted in order to obtain optimal image contrast.

Fig. 3. System setup.

B. System Calibration

System calibration is carried out in order to determine the P matrices for both cameras. We use a calibration board that has uniformly spaced cross targets. Two image distortions should be corrected: one is perspective distortion and the other is radial distortion. In order to calculate the transformation matrix, or homography, we use four crosses at each corner and their corresponding ideal points. From these pairs, perspective distortion is corrected. The next image correction is radial distortion. Again, a set of measured points and ideal points should be determined. Figure 4(a) shows these sets. We assume that the image center, or principal point, is near the center of the image and that radial distortion is very small around the center. With this assumption, the ideal locations (green circles) are calculated from the unit square near the image center. The red dots are the centroids of the crosses. Figure 4(b) is the image after the radial distortion is corrected. As indicated in the image, the red dots and the green circles are aligned after the transformation is applied to the image.

Fig. 4. (a) Image with radial distortion. Red dots show centroids of crosses; green circles are locations of ideal grids. (b) Image after radial distortion is corrected. Red dots show centroids of crosses; green circles are locations of ideal grids.

Once image aberrations are corrected, the next step is to calculate the P matrix for each camera. The calibration target is used again to obtain sets of a and A. First, the target is positioned at a nominal height z_0, and a single image is taken by each camera. Then the stage is moved to the next position z_1, and another image is taken. This process is repeated to obtain a sufficient number of sets a and A. Given these correspondences, the P matrix can be calculated.

C. Measurements

Figure 5 shows the two camera centers and the world coordinates. From the calculated P matrices, (x, y, z) can be determined. The calculated values are (x_1, y_1, z_1) = (−0.63, −64.9, 191.2) and (x_2, y_2, z_2) = (0.12, 68.3, 188.3) in millimeters. At this camera location, the image field of view is 38 mm × 28.5 mm.

Fig. 5. Camera center and the world coordinates.

Fig. 6. Measurement procedures.

The image acquisition procedure is as follows. (1) First, Lights 1 and 2 are turned off and an image is captured by Cameras 1 and 2 with the on-axis light.

(2) Turn off the on-axis light and turn on Light 1. Capture an image with Camera 2. (3) Turn off Light 1 and turn on Light 2. Capture an image with Camera 1.

Figure 6 illustrates the height-reconstruction procedure, which can be classified into three steps. The first step is to identify corresponding bumps between the two images. Once bump pairs are determined, the next step is the warpage measurement. Last, the BGA coplanarity measurement is performed.

In order to reconstruct Z coordinates, point correspondences must be identified. The first step is to determine corresponding bump pairs between the two images. For this purpose, the on-axis light is used. Figure 7 shows the BGA side of an IC package sample (top) and images using Light 1 (bottom left) and the on-axis light (bottom right). The image captured with Light 1 shows brighter background reflection from the substrate surface compared with the image captured with the on-axis light. If the background reflection has intensity values similar to those of the bumps, each ball cannot be isolated properly. This is why the image with the on-axis light is needed for bump masking. Figure 8 shows the masked image: the BGA image with the on-axis light is used to make the mask, which is then applied to the image captured with Lights 1 and 2. Figure 9 is the masked image with bump numbers; the top image is from Camera 1 and the bottom is from Camera 2. Because the cameras view the object from different angles, the labels in the two images do not match each other, and thus a reordering process is necessary in order to have the same labeling in both images.

D. Substrate Warpage Measurement

Once the corresponding bump pairs between the two images are determined, the substrate warpage measurement can be performed. Since there are no specific features or texture on the substrate that can be used for locating point correspondences, the ball edge is used to obtain these pairs. First, a fundamental matrix F is calculated using point correspondences obtained from the edge of each bump, as illustrated in Fig. 10. The y coordinate of each edge is determined as the position where the ball has its maximum diameter. The x coordinate is defined from the intensity profile of this y cross section by using an intensity threshold.
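The edge localization just described (y at the maximum ball diameter, x from an intensity threshold along that row) can be sketched on a synthetic intensity profile; the profile shape and threshold below are illustrative assumptions, not the system's actual values:

```python
import numpy as np

def edge_x(profile, threshold):
    """Return the first and last pixel indices where the row's intensity
    reaches the threshold: the left and right ball edges."""
    above = np.flatnonzero(np.asarray(profile) >= threshold)
    if above.size == 0:
        raise ValueError("no pixels above threshold")
    return int(above[0]), int(above[-1])

# Synthetic cross section through a bright ball on a dark substrate.
x = np.arange(100)
profile = 10 + 200 * np.exp(-((x - 50.0) / 12.0) ** 2)  # peak at pixel 50
left, right = edge_x(profile, threshold=100.0)
print(left, right, right - left)  # left edge, right edge, diameter in pixels
```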

Fig. 7. BGA side of IC package sample (top), images captured with Light 1 (bottom left), and on-axis light (bottom right).

Fig. 8. Masked image. Image is captured with Light 1, while the mask is based on the on-axis light image.

Fig. 9. Masked image with bump numbers. Camera 1 (top) and Camera 2 (bottom).

Fig. 11. Images of the Camera 1: the red dot is the point defined from the ball edge (top); Camera 2: the red line shows the epipolar line calculated from F matrix, and the green dot indicates the corresponding point (bottom).

Fig. 10. Bump image with the edge locations shown in the red dots.

Once the fundamental matrix F is obtained, point correspondences on the substrate can be calculated. Figure 11 illustrates how these pairs are determined. First, a reference point, shown as the red dot, is chosen from the Camera 1 image (top). The coordinates of this reference point are defined from the edge locations previously determined; as a result, the y coordinates of the red dot and the green arrow (maximum diameter) are identical. The x coordinate of the red dot is defined as 8 pixels away from the edge of the ball in this case. Once the point in Camera 1 is defined, we can calculate an epipolar line by using Eq. (8). We know that a corresponding point should be somewhere along this line. To identify this point, the y coordinate is again chosen from the maximum ball diameter position in the Camera 2 image, and the green dot is the corresponding point. Since reference points can be defined at each side of the ball, we have two reference points for each ball.
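The epipolar search above can be exercised numerically. The sketch below builds F from synthetic projection matrices via Eq. (8) and checks the epipolar constraint of Eq. (10); it is an illustration of the geometry, not the system's calibrated F:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Synthetic stereo pair; F = [e2]x P2 pinv(P1), as in Eq. (8).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])
C1 = np.array([0.0, 0.0, 0.0, 1.0])   # camera 1 center (P1 @ C1 == 0)
e2 = P2 @ C1                          # epipole: projection of C1 into camera 2
F = skew(e2) @ P2 @ np.linalg.pinv(P1)

# A true correspondence satisfies the epipolar constraint a2^T F a1 = 0.
A = np.array([3.0, 2.0, 50.0, 1.0])
a1 = P1 @ A; a1 /= a1[2]
a2 = P2 @ A; a2 /= a2[2]
l2 = F @ a1                           # epipolar line in camera 2, Eq. (8)
print(abs(a2 @ l2) < 1e-9)            # a2 lies on l2
```

In the system, the corresponding point is then taken where this line meets the maximum-diameter row of the matching ball.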


Once point correspondences are defined, we can calculate the Z coordinates. It should first be noted that the disparity of the substrate changes slowly almost everywhere; in other words, the substrate surface should be smooth. Thus, to calculate the Z coordinate of a single point, we take an average of the nearest four points around it. We define the substrate warpage as

Warpage = \frac{1}{5}\left[(H_1 + H_2 + \cdots + H_5) - (L_1 + L_2 + \cdots + L_5)\right], \quad (11)

where H_n denotes the five largest and L_n the five smallest Z values from the measurement. Figure 12 illustrates the 3D profile of the IC package and clearly shows the warped shape. The dimension of this sample is 38 mm × 28.5 mm. Each dot indicates the Z coordinate of a sampled point. The color plane shows the regression plane based on the Z coordinates.

Fig. 12. 3D warpage scatter plot with regression plane.

For evaluating our results, we use a confocal microscope as our reference. The measurements are repeated 35 times consecutively. The mean substrate warpage with 1σ is 226.8 ± 4.2 μm based on our system and 215.2 μm based on our reference confocal microscope. The measurement bias is about 11 μm for this IC package. Another metric for evaluating the system performance is the linearity between our system and the reference. One of the parameters for measuring linearity is the correlation coefficient, defined as

\rho = \frac{\mathrm{cov}(X, Y)}{\sigma_X \sigma_Y}, \quad (12)

where cov is the covariance and σ is the standard deviation. X is the set of data from our system and Y is that of the reference tool. Figure 13 is the point-to-point correlation plot between the two systems. The blue centerline shows the regression line with a correlation coefficient with 1σ of 0.968 ± 0.002. The two black lines illustrate the 2σ upper and lower limits with 25.15 ± 0.20 μm.
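The warpage metric of Eq. (11) and the linearity metric of Eq. (12) can be sketched directly; the surface below is a synthetic bowl, not measured data:

```python
import numpy as np

def warpage(z):
    """Eq. (11): mean of the five largest Z values minus the mean of the
    five smallest Z values."""
    z = np.sort(np.asarray(z, dtype=float))
    return z[-5:].mean() - z[:5].mean()

def correlation(x, y):
    """Eq. (12): rho = cov(X, Y) / (sigma_X * sigma_Y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / (x.std() * y.std())

# Synthetic bowl-shaped substrate, 200 sampled points (concave warpage).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z_um = 100.0 * (xy[:, 0] ** 2 + xy[:, 1] ** 2)  # heights in micrometers

print(round(warpage(z_um), 1))  # warpage of the synthetic surface
print(round(correlation(z_um, z_um + rng.normal(0, 2, 200)), 3))  # vs. a noisy "reference"
```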

Fig. 13. Correlation between our system and the reference confocal tool.

E. BGA Coplanarity Measurement

In order to determine the BGA coplanarity, bump heights should be calculated. We use a 3D bump model to estimate bump heights; it is modeled as a hemi-ellipsoid, as shown in Fig. 14. A single ball area is defined as 100 pixels × 100 pixels, which is the same as the real image size captured by the cameras. From the P matrices obtained from the experiment and the X, Y, Z coordinates of the model, we can calculate the expected 2D captured image shown in Fig. 15. The two green circles indicate the edges of the bump, which are determined by the same method used in the warpage measurement, and a straight line defines the diameter in pixels. From this model, the relationship between the ball height and the diameter in pixels can be obtained, as shown in Fig. 16. From the two edge locations determined for the warpage measurement, we can obtain the diameter of each ball and convert it to a ball height using this relationship. To reconstruct the BGA coplanarity

Fig. 14. Simulated bump model.


Fig. 15. Simulated image from the P matrices obtained from the experiment.

Fig. 18. Correlation between our system and the reference confocal tool.

Fig. 16. Relationship between the ball height and the ball diameter in pixels.
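The diameter-to-height conversion that Fig. 16 encodes can be sketched as a table lookup with linear interpolation. The table values below are hypothetical (chosen only to match the ~5 μm-per-pixel slope implied later, where 0.8 pixel corresponds to 4 μm), not the authors' calibration curve:

```python
import numpy as np

# Hypothetical (diameter_px, height_um) pairs sampled from a simulated
# bump model; np.interp performs the piecewise-linear lookup.
diam_px   = np.array([88.0, 90.0, 92.0, 94.0, 96.0])
height_um = np.array([280.0, 290.0, 300.0, 310.0, 320.0])

def ball_height(d_px):
    """Convert a measured ball diameter in pixels to a height estimate."""
    return float(np.interp(d_px, diam_px, height_um))

print(ball_height(91.0))  # -> 295.0 by linear interpolation
```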

distribution, the calculated ball heights are added to the Z coordinates of the substrate warpage. The results are shown in Figs. 17 and 18.
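The coplanarity definition from Section 1 — the spread of ball Z values about a least-squares best-fit plane — can be sketched as follows, using a synthetic ball grid rather than the measured BGA:

```python
import numpy as np

def coplanarity(points):
    """Fit a regression plane z = a*x + b*y + c to the ball (x, y, z)
    coordinates and return max(residual) - min(residual): the distance
    between the highest and lowest balls relative to the best-fit plane."""
    pts = np.asarray(points, dtype=float)
    M = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])  # [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(M, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - M @ coeffs
    return residuals.max() - residuals.min()

# Synthetic BGA: a tilted plane (tilt alone gives zero coplanarity)
# plus one high ball and one low ball.
x, y = np.meshgrid(np.arange(10.0), np.arange(10.0))
z = 0.02 * x + 0.01 * y                # tilt only
balls = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
balls[0, 2] += 0.05                    # one ball high (units arbitrary)
balls[-1, 2] -= 0.05                   # one ball low
print(round(coplanarity(balls), 3))
```

Because the plane is refit, pure tilt contributes nothing; only departures from the plane (warpage plus ball-height variation) register.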

The mean BGA coplanarity with 1σ is 259.7 ± 3.7 μm based on our system and 222.8 μm based on our reference confocal microscope. The correlation coefficient with 1σ is 0.957 ± 0.001, and the 2σ difference is 28.79 ± 0.14 μm. Since BGA ball heights are estimated from the model, the system gives measurement outliers if the shape of a ball deviates from the model due to process issues. Thus both the correlation coefficient and the 2σ difference for BGA coplanarity are worse than the corresponding parameters for the warpage measurement. Yet the proposed method gives approximately the same standard deviation as the substrate warpage measurement. For obtaining the two measurement results (Figs. 12 and 17) from a series of raw images, the execution time with 1σ on a laptop (Intel Core i7, 2.4 GHz, 8 GB of memory) in a MATLAB environment is 69.2 ± 1.0 s. We have validated that the proposed method works for IC package samples with concave warpage by measuring 30 different samples.

Fig. 17. 3D coplanarity scatter plot.


Fig. 19. Bump image with different illumination conditions.

To evaluate the effect of BGA surface reflectivity on height reconstruction, two different illumination conditions are compared. Figure 19 shows the identical bump with nominal intensity (left) and brighter illumination (right) to create intensity saturation at the top of the BGA ball. The mean pixel diameter difference ((R1 − L1) − (R2 − L2)) with 1σ between the two illumination conditions is 0.14 ± 0.69 pixels among 35 randomly chosen bumps. From Fig. 16, a 0.8 pixel diameter difference corresponds to a 4 μm height difference.

4. Conclusion

We have demonstrated a method for substrate warpage and BGA coplanarity inspection using a stereo vision system. This system allows fast full-field measurements and is suitable for inline back-end inspection and process monitoring. For evaluating the performance of our system, a particular IC sample was measured 35 times and compared with the reference confocal microscope. The mean substrate warpage with 1σ is 226.8 ± 4.2 μm based on our system and 215.2 μm based on our reference confocal microscope. The measurement bias is about 11 μm for this IC package. The correlation coefficient is 0.968 ± 0.002, and the 2σ difference between the two methods is 25.15 ± 0.20 μm for the warpage measurement. The mean BGA coplanarity is 259.7 ± 3.7 μm based on our system and 222.8 μm based on our reference confocal microscope. The bump-level correlation coefficient for BGA coplanarity is 0.957 ± 0.001, and the 2σ difference is 28.79 ± 0.14 μm. Data acquisition takes about 0.2 s for the full-field measurements.

The authors gratefully acknowledge the support of Intel Corporation.

References
1. W. D. Brown, Electronic Packaging (IEEE, 2006).
2. W. J. Greig, Integrated Circuit Packaging, Assembly and Interconnections (Springer, 2007).
3. Texas Instruments, "Flip chip ball grid array package reference guide" (2005), http://www.ti.com/lit/ug/spru811a/spru811a.pdf.
4. H. Tsukahara, Y. Nishiyama, F. Takahashi, and T. Fuse, "High-speed solder bump inspection system using a laser scanner and CCD camera," Systems and Computers in Japan 31, 94–102 (2000).
5. P. Kim and S. Rhee, "Three-dimensional inspection of ball grid array using laser vision system," IEEE Trans. Electron. Packag. Manufact. 22, 151–155 (1999).
6. H. N. Yen and D. M. Tsai, "A fast full-field 3D measurement system for BGA coplanarity inspection," Int. J. Adv. Manuf. Technol. 24, 132–139 (2004).
7. V. Bartulovic, M. Lucic, and G. Zacek, "Inspection of ball grid arrays (BGA) by using shadow images of the solder balls," U.S. Patent 6,177,682 B1 (23 January 2001).
8. D. Marr and T. Poggio, "Cooperative computation of stereo disparity," Science 194, 283–287 (1976).
9. U. R. Dhond and J. K. Aggarwal, "Structure from stereo—a review," IEEE Trans. Syst. Man Cybern. 19, 1489–1510 (1989).
10. M. Z. Brown, D. Burschka, and G. D. Hager, "Advances in computational stereo," IEEE Trans. Pattern Anal. Mach. Intell. 25, 993–1008 (2003).
11. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robot. Autom. 3, 323–344 (1987).
12. Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," in Proc. 7th Int. Conference on Computer Vision (IEEE, 1999), pp. 666–673.
13. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).
14. P. Luo, Y. Chao, and M. Sutton, "Application of stereo vision to three-dimensional deformation analyses in fracture experiments," Opt. Eng. 33, 981–990 (1994).
15. J. J. Aguilar, F. Torres, and M. A. Lope, "Stereo vision for 3D measurement: accuracy analysis, calibration and industrial applications," Measurements 18, 193–200 (1996).
16. C. J. Tay, X. Kang, C. Quan, X. Y. He, and H. M. Shang, "Height measurement of microchip connecting pins by use of stereovision," Appl. Opt. 42, 3827–3831 (2003).
17. Y. J. Xiao and Y. F. Li, "Optimized stereo reconstruction of free-form space curves based on a nonuniform rational B-spline model," J. Opt. Soc. Am. A 22, 1746–1762 (2005).
18. Z. Ren and L. Cai, "Three-dimensional structure measurement of diamond crowns based on stereo vision," Appl. Opt. 48, 5917–5932 (2009).
19. Z. Ren, J. Liao, and L. Cai, "Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision," Appl. Opt. 49, 1789–1801 (2010).
20. Z.-Z. Tang, J. Liang, Z. Xial, C. Guo, and G. Hu, "Three-dimensional digital image correlation system for deformation measurement in experimental mechanics," Opt. Eng. 49, 103601 (2010).
21. C. J. Tay, X. He, X. Kang, C. Quan, and H. M. Shang, "Coplanarity study on ball grid array packaging," Opt. Eng. 40, 1608–1612 (2001).
22. M. Dong, R. Chung, E. Y. Lam, and K. S. M. Fung, "Height inspection of wafer bumps without explicit 3-D reconstruction," IEEE Trans. Electron. Packag. Manufact. 33, 112–121 (2010).
23. C. Steger, Handbook of Machine Vision (Wiley-VCH, 2006).
24. J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," in Proc. Computer Vision and Pattern Recognition (1997), pp. 1106–1112.
25. K. F. Riley, M. P. Hobson, and S. J. Bence, "Matrices and vector spaces," in Mathematical Methods for Physics and Engineering (Cambridge University, 2002).
26. R. Hartley, "In defense of the eight-point algorithm," IEEE Trans. Pattern Anal. Mach. Intell. 19, 580–593 (1997).
27. R. Hartley, "Triangulation," Comput. Vis. Image Underst. 68, 146–157 (1997).


