Robust linear equation dwell time model compatible with large scale discrete surface error matrix

Zhichao Dong,1 Haobo Cheng,1,* and Hon-Yuen Tam2

1School of Optoelectronics, Beijing Institute of Technology, 5 South Zhongguancun Street, Haidian District, Beijing 100081, China
2Department of Mechanical and Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong, Hong Kong 999077, China
*Corresponding author: [email protected]

Received 17 December 2014; revised 10 February 2015; accepted 11 February 2015; posted 11 February 2015 (Doc. ID 230777); published 25 March 2015

The linear equation dwell time model can translate the 2D convolution process of material removal during subaperture polishing into a more intuitive expression, and may provide relatively fast and reliable results. However, an accurate solution of this ill-posed equation is not straightforward, and its practicability for a large scale surface error matrix is still limited. This study first solves this ill-posed equation by Tikhonov regularization and the least squares QR decomposition (LSQR) method, and automatically determines an optional interval and a typical value for the damped factor of regularization, both of which depend on the peak removal rate of the tool influence function. Then, a constrained LSQR method is presented to increase the robustness of the damped factor, which can provide more consistent dwell time maps than traditional LSQR. Finally, a matrix segmentation and stitching method is used to cope with large scale surface error matrices. Using these proposed methods, the linear equation model becomes more reliable and efficient in practical engineering. © 2015 Optical Society of America OCIS codes: (220.0220) Optical design and fabrication; (220.4610) Optical fabrication; (220.5450) Polishing. http://dx.doi.org/10.1364/AO.54.002747

1. Introduction

Dwell time optimization has been known as a significant procedure for the deterministic subaperture polishing process since it entered engineering use [1–5]. The principle of deterministic subaperture polishing indicates that the material removal is a 2D convolution of dwell time and tool influence function (TIF) along a tool path [1]. Target removal maps and TIFs can be determined beforehand by an interferometric test; the solution of dwell time then becomes a deconvolution, or inverse, problem. Researchers have proposed many dwell time algorithms [6–9], such as the convolution iteration method presented by Jones [6], the Fourier transform method presented by Wilson and McNeil [7], and the linear equation model presented by Charles et al. [8], as well as a numerical method presented by the authors in [9]. Their features were analyzed in the authors' previous work [9]. The linear equation model takes a TIF and tool path into a coefficient matrix A (with rows equal to the total number of points of the surface error matrix, and columns equal to the number of tool path points). The dwell time is then calculated by solving a linear equation Ax = b, which is mostly an ill-posed equation because A is large-scale, sparse, and ill-conditioned (i.e., with a very large condition number, e.g., >10^4). To obtain a stable numerical solution, Tikhonov regularization [10,11] or truncated singular value decomposition (SVD) [12] can be used. Based on the Tikhonov regularization method with a damped factor, Wu et al. [11]

introduced an extra removal amount and a tool path weight factor, as well as a surface error weight factor, to expand the freedom of the solution. For a 50 × 50 surface error matrix, their method obtained high convergence rates through a least squares QR decomposition (LSQR) method. However, there are still some obstacles when applying Wu et al.'s method [11] in engineering:

1. The damped factor, the extra removal amount, and the two weight factors remain unfixed for different surface error maps or tool paths; multiple retrials may be needed to determine these factors, which is complex and somewhat troublesome.
2. The extra removal amount (e.g., half a wavelength in Ref. [11]) lengthens the processing time significantly, especially when the magnitude of the surface error is relatively low.
3. It is nearly impossible to cope with a large scale surface error matrix (commonly produced by commercial laser interferometers, e.g., 512 × 512 or 1024 × 1024) on a PC, because the scale of the coefficient matrix A becomes too large (e.g., ~10^5 × 10^3 to ~10^6 × 10^4).

This study concentrates on improving the stability and practicality of the linear equation model, with only one damped factor for Tikhonov regularization. An optional interval and a typical value of the damped factor are determined automatically for the traditional LSQR method, both dependent on the peak removal rate (PRR) of the TIF, so multiple retrials are unnecessary. Then, a constrained LSQR method is proposed to improve the robustness of the damped factor; it provides consistent dwell time maps over a wide range of damped factors, and it overcomes the adverse influence of small damped factors on the traditional LSQR method. Finally, we adopt a distributed model (i.e., matrix segmentation and stitching) to cope with large scale surface error matrices. This study can provide convenience in applying the linear equation model in practical engineering.

2. Backgrounds

The material removal of subaperture polishing is considerably different from that of other machining processes (e.g., diamond turning or grinding) [9]. It is dominated by the TIFs of polishing tools rather than by the purely geometric motion of cutting tools. The Preston model is widely used for predicting the material removal of subaperture polishing, as in Eq. (1):

dz(x, y) = K · P(x, y) · V(x, y),  (1)

where dz denotes the material removal in unit time, and K is a coefficient related to the materials of the workpiece and polishing tool; the type, size, and concentration of abrasives; the pH of the polishing slurry; etc. P and V represent the contact pressure and relative velocity of the workpiece and polishing tool, respectively. Using the Preston model, the TIFs of polishing tools can be simulated by their pressure and geometric motion models [13]. The material removal of the subaperture polishing process, Z, can then be expressed as the convolution of dwell time T and TIF R, as in Eq. (2), where ⊗ denotes a 2D convolution. The solution of dwell time is a deconvolution in 2D, which remains difficult and still unresolved:

Z(x, y) = T(x, y) ⊗ R(x, y).  (2)
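As a minimal sketch of Eqs. (1) and (2) (not the authors' code; the Gaussian TIF helper, grid sizes, and units below are illustrative assumptions), the removal predicted for a given dwell time map is the 2D convolution of that map with the TIF:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_tif(size, prr, fwhm):
    """Illustrative Gaussian tool influence function (removal per unit time).
    prr: peak removal rate at the tool center; fwhm: full width at half maximum,
    both in grid units (1 unit = 1 mm here)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    return prr * np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))

# Eq. (2): removal Z is the 2D convolution of dwell time T with the TIF R.
R = gaussian_tif(size=11, prr=1.0, fwhm=5.0)   # 11 x 11 TIF grid
T = np.zeros((50, 50))
T[25, 25] = 2.0                                # dwell 2 time units at one point
Z = convolve2d(T, R, mode="same")              # predicted material removal
```

Dwelling at a single point simply stamps a scaled copy of the TIF into the removal map; the peak removal equals dwell time times PRR, which is why the coefficient matrix built later has the PRR as its maximal entry.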

Further, the edge effect is a troublesome problem in engineering [9,11,14–16]; it should be suppressed in numerical simulations, which requires data extension of the surface errors. There are many methods for data extension, such as neighbor averaging and Gerchberg extrapolation [11,17], which were analyzed in the authors' previous work [9].

3. Principle of Linear Equation Model

A. Basic Principle

For a surface error matrix Z_{M×N} (M and N are the numbers of rows and columns, respectively), the total number of removal points is

Pr = M · N.  (3)

If a tool path has column vectors X and Y with Pt dwell points, and is generally uniformly distributed on the workpiece's surface, the removal rate at the u-th surface point P(x_u, y_u) when the tool's center dwells at the v-th path point P(x_v, y_v) is

r_uv = R(x_v − x_u, y_v − y_u).  (4)

Then, the convolution model [see Eq. (2)] can be translated into a linear equation model

[ r_11   r_12   …  r_1Pt  ] [ t_1  ]   [ z_1  ]
[ r_21   r_22   …  r_2Pt  ] [ t_2  ]   [ z_2  ]
[  ⋮      ⋮     ⋱   ⋮     ] [  ⋮   ] = [  ⋮   ]
[ r_Pr1  r_Pr2  …  r_PrPt ] [ t_Pt ]   [ z_Pr ]   (5)

where t = (t_1, t_2, …, t_Pt)^T is the dwell time vector, and z = (z_1, z_2, …, z_Pr)^T is the vectorization of the surface error matrix Z_{M×N}. Then, Eq. (5) can be simplified as

Ax = b.  (6)

If we specify the scales of the three matrices in Eq. (6), it is

A_{Pr×Pt} · x_{Pt×1} = b_{Pr×1}.  (7)
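The assembly of A in Eqs. (6) and (7) can be sketched as follows. This is illustrative, not the authors' implementation: `build_coefficient_matrix` and the tiny 4 × 4 surface are assumptions, the TIF is taken to be symmetric so the sign convention of Eq. (4) does not matter, and the dense double loop is for clarity only (in practice A is built sparse).

```python
import numpy as np

def build_coefficient_matrix(R, shape, dwell_points):
    """Assemble A of Eq. (6)/(7): A[u, v] = R(x_v - x_u, y_v - y_u).
    Row u indexes a surface point (raster order over `shape`);
    column v indexes a dwell point of the tool path.
    Assumes an odd-sized, symmetric TIF matrix R."""
    M, N = shape
    h, w = R.shape
    cy, cx = h // 2, w // 2            # TIF center
    Pr, Pt = M * N, len(dwell_points)
    A = np.zeros((Pr, Pt))
    for v, (yv, xv) in enumerate(dwell_points):
        for u in range(Pr):
            yu, xu = divmod(u, N)
            dy, dx = yu - yv + cy, xu - xv + cx
            if 0 <= dy < h and 0 <= dx < w:  # inside the TIF footprint
                A[u, v] = R[dy, dx]
    return A

# Tiny illustration: 4 x 4 surface, 3 x 3 TIF, four dwell points.
R = np.array([[0.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 0.0]])
path = [(1, 1), (1, 2), (2, 1), (2, 2)]
A = build_coefficient_matrix(R, (4, 4), path)   # scale Pr x Pt = 16 x 4
```

Each column of A is the TIF "stamped" at one dwell point and flattened, so A is sparse as soon as the TIF footprint is small compared with the workpiece, and its maximal entry is the PRR.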

A rare situation in practical engineering is

Pr = Pt,  (8)

which indicates that the coefficient matrix A is a square matrix. Then, if the rank of A equals Pt, Eq. (6) has a unique, exact solution. However, with current measurement techniques (e.g., laser interferometers with large scale CCD arrays), we mostly have Pt < Pr. By its construction, matrix A is mostly a singular or nearly singular matrix with a large condition number (e.g., >10^4), which implies that the columns of A are nearly linearly dependent and the numerical solution of Eq. (6) is quite unstable: it is susceptible to small measurement errors or random noise in the right-hand side b. Thus, Eq. (6) is mostly an ill-posed (or ill-conditioned) problem, and it is not advisable to seek the exact solution in engineering [11]. For a square surface error matrix (original scale 50 × 50, extended to 70 × 70) figured by a TIF (10 × 10) along a grid-shape path (4900 × 1), the shape of matrix A (4900 × 4900, normalized) in Eq. (6) is shown in Fig. 1. It has a large fraction of zero values, which illustrates that A is a large scale sparse matrix. Furthermore, A has a condition number of ~1.6 × 10^6. Thus, Eq. (6) is a highly ill-posed problem with a large scale, sparse coefficient matrix.

B. Tikhonov Regularization

To solve the ill-posed problem, the Tikhonov regularization method is adopted in this work [11]. An identity matrix multiplied by a damped factor w is appended to Eq. (5), which then becomes

[ r_11   r_12   …  r_1Pt  ]          [ z_1  ]
[ r_21   r_22   …  r_2Pt  ] [ t_1  ] [ z_2  ]
[  ⋮      ⋮     ⋱   ⋮     ] [ t_2  ] [  ⋮   ]
[ r_Pr1  r_Pr2  …  r_PrPt ] [  ⋮   ]=[ z_Pr ]
[  w      0     …   0     ] [ t_Pt ] [  0   ]
[  0      w     …   0     ]          [  ⋮   ]
[  ⋮      ⋮     ⋱   ⋮     ]          [  0   ]   (9)
[  0      0     …   w     ]

where the coefficient matrix A is extended to (Pr + Pt) × Pt and the material removal vector to (Pr + Pt) × 1. The column vectors of the extended matrix A and the material removal vector are not correlated with each other [11]. Then Eq. (9) is a full-rank least squares problem, in which trivial changes of matrices A and b do not have significant effects on the solution.

C. LSQR Method

The coefficient matrix A in Eq. (9) is still a large scale sparse matrix, and SVD is not a good choice due to its relatively long computing time. One efficient method for this problem is LSQR, developed in 1982 by Paige and Saunders [18]. LSQR is similar in style to the well-known conjugate gradients (CG) method but possesses more favorable numerical properties [18]. It computes the least squares solution by virtue of the Lanczos (bidiagonalization) process, minimizing the 2-norm of b − Ax [Eq. (10)]. Matrix A does not need to be square, but it should be large and sparse. Normally, an iterative process is used, during which LSQR generates a sequence of approximations x_k such that the residual norm ‖b − Ax_k‖_2 decreases monotonically. The LSQR algorithm has been validated to be quite reliable in various circumstances [18]:

minimize ‖b − Ax‖_2.  (10)

Fig. 1. Shape of coefficient matrix A (note that the maximal value of A generally equals the PRR of the TIF).
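The Tikhonov-plus-LSQR combination of Sections 3.B and 3.C has a direct counterpart in scipy: `scipy.sparse.linalg.lsqr` accepts a `damp` argument that solves exactly the augmented system of Eq. (9), i.e., it minimizes ‖b − Ax‖² + w²‖x‖². A sketch on a small synthetic ill-conditioned system (the matrix, sizes, and damped factor below are illustrative, not the paper's data):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
# A small, sparse, ill-conditioned stand-in for the coefficient matrix A.
A = csr_matrix(np.triu(np.ones((60, 40))) * 1e-2)
b = rng.standard_normal(60)

x_plain  = lsqr(A, b, damp=0.0)[0]    # Eq. (6): typically large-norm, oscillatory
x_damped = lsqr(A, b, damp=0.5)[0]    # Eq. (9): Tikhonov-regularized solution

# Regularization trades a small increase in residual for a much smaller,
# smoother solution, which is what makes the dwell time map usable.
```

This is why the damped solution in Fig. 2(b) is smooth and nearly nonnegative while the undamped one in Fig. 2(e) is not, even though the undamped residual is formally smaller.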

D. Necessity of Tikhonov Regularization

For a 50 × 50 surface error matrix (assume it corresponds to a 50 × 50 mm workpiece), PV = 1.206λ and RMS = 0.274λ (λ = 632.8 nm), as shown in Fig. 2(a), and a 10 × 10 TIF matrix (Gaussian shape; assume the TIF size is 10 × 10 mm, with PRR = 1.0 λ/min and full width at half maximum FWHM = 5.0 mm), two simulations are conducted to investigate the influence of Tikhonov regularization on the convergence rate of the surface error and the distribution of dwell time. After Tikhonov regularization with a damped factor w = 5, as in Eq. (9), the LSQR method obtains a smooth dwell time map [see Fig. 2(b)] with high similarity to the distribution of the surface error map and very few negative values (negative dwell time ratio 0.56%), which means the result is quite reliable in practical engineering. The dwell time map correlated with the extended surface error matrix is given in Fig. 2(c), and Fig. 2(d) shows the residual errors correlated with the original surface error, PV = 0.088λ and RMS = 0.0121λ. The convergence rate of the RMS is 95.58%. However, if the coefficient matrix A is not regularized, as in Eq. (5), the dwell time map and residual error calculated by the LSQR method are as shown in Figs. 2(e) and 2(f), respectively. The shape of the dwell time map [Fig. 2(e)] is completely different from the surface error map [Fig. 2(a)], and many negative values emerge (negative dwell time ratio 30.96%). Figure 2(f) shows the related residual error, with PV = 0.029λ and RMS = 0.005λ. Even though the convergence rate is higher than that with regularization, it is clearly inappropriate for practical engineering.
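The RMS convergence rates quoted above follow from a simple ratio. Assuming the rate is defined as 1 − RMS_residual / RMS_initial (an assumption, but one that reproduces the quoted 95.58%), a quick check with the Fig. 2 numbers:

```python
# RMS convergence rate: the fraction of the initial surface error (in the
# RMS sense) that the predicted polishing run removes.
def rms_convergence_rate(rms_initial, rms_residual):
    return 1.0 - rms_residual / rms_initial

# Values from the regularized simulation above (units of lambda).
rate = rms_convergence_rate(0.274, 0.0121)
# rate is about 0.9558, matching the 95.58% quoted in the text.
```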


The damped factor is set to a series of values from 0.01 to 10, and the other conditions are the same as in Section 3.D. The two curves of the negative dwell time ratio and the RMS of the residual error shown in Fig. 3(a) indicate the following.

1. A small damped factor induces many negative dwell time values, which are unreal in practical engineering.
2. As the damped factor increases from 0 to ~1 or 2, the negative dwell time ratio declines quickly to 4, but the increasing trend is much slower than that of the traditional LSQR method.

Fig. 7. RMS convergence curves for original and extended surface error maps during iterations of traditional LSQR method, with various damped factors.

3. Constrained LSQR largely reduces the calculating time because of the reduced number of iterations.
4. For constrained LSQR, the optional range of the damped factor is much wider and more robust than in traditional LSQR. We can select a fixed value as in Eq. (14), without multiple retrials for an optimized damped factor:

w_fixed = 1.0 × PRR.  (14)
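The excerpt does not reproduce the internals of the authors' constrained LSQR, but its intent, a regularized solve with the dwell time forced nonnegative, can be sketched with a stand-in: solving the Tikhonov-augmented system of Eq. (9) under an explicit x ≥ 0 constraint via `scipy.optimize.nnls`. The helper name, matrix, and damped factor below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def damped_nnls(A, b, w):
    """Nonnegative least squares on the Tikhonov-augmented system of Eq. (9):
    minimize ||b - A x||^2 + w^2 ||x||^2  subject to  x >= 0."""
    n = A.shape[1]
    A_aug = np.vstack([A, w * np.eye(n)])     # append w * identity, as in Eq. (9)
    b_aug = np.concatenate([b, np.zeros(n)])  # append zeros to the removal vector
    x, _ = nnls(A_aug, b_aug)
    return x

rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((30, 10)))     # nonnegative "removal rate" matrix
b = A @ np.linspace(0.1, 1.0, 10)             # a consistent, nonnegative target
x = damped_nnls(A, b, w=0.1)                  # dwell time vector, x >= 0 by design
```

An explicit nonnegativity constraint makes the "negative dwell time ratio" zero by construction, which is the same practical goal the constrained LSQR pursues while remaining iterative and sparse-friendly.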

Also, employing the constrained LSQR, the dwell time maps obtained with various damped factors are quite consistent with the shape of the original surface error map, as shown in Figs. 8(a)–8(j) for damped factors from 0.01 to 9.0. For the complex surface error map shown in Fig. 9(a) [same as Fig. 6(a)], with the same conditions as the simulation in Fig. 6, the constrained LSQR method with the damped factor of Eq. (14) is also validated to be efficient. Figure 9(b) shows the dwell time map in the original surface error map, whose distribution is consistent with the shape of the original surface error map. The ratio of negative dwell time is just 0.18%. Figure 9(c) shows the dwell time map in the extended surface error map. Figure 9(d) shows the residual error, with PV = 0.0244λ and RMS = 0.0057λ. The RMS convergence rate is 97.4%. Thus, the constrained LSQR method is also efficient for a surface error map containing some middle-frequency errors.

Fig. 8. Dwell time maps calculated by the constrained LSQR method with different damped factors from 0.01 to 10.

Fig. 9. (a) Surface error map, PV = 1.2593λ and RMS = 0.2151λ. (b) Dwell time map in the original surface error map. (c) Dwell time map in the extended surface error map. (d) Residual error, PV = 0.0244λ and RMS = 0.0057λ.

Compared with Wu et al.'s method [11], the constrained LSQR proposed in this study has the following features.

1. It achieves a high convergence rate without the need for a tool path weight factor or a surface error weight factor.
2. It obtains a generally nonnegative dwell time map without introducing an extra removal amount.
3. It enlarges the optional range of the damped factor.
4. With a small damped factor, it can still achieve a satisfactory dwell time distribution, generally nonnegative dwell time, and a high convergence rate.
5. It saves computational time because it requires fewer iterations to reach the stopping criteria.

5. Large Scale Surface Error Matrix

A. Matrix Segmentation and Stitching

There is still an obstacle when applying this linear equation model in engineering: for a common surface error matrix (e.g., 512 × 512) and a tool path with 10^4 points, the coefficient matrix A would have a scale of ~(2.62 × 10^5) × 10^4, which is too large to be constructed and operated on a personal PC. Previous research [11] often rarefied the surface error matrix by interpolation to a scale of ~50 × 50 to ~100 × 100 (matrix A then has a scale of (2.5 × 10^3) × Pt to 10^4 × Pt), which works but loses much of the useful information we are interested in.

A distributed calculating method is proposed to address this problem. A large scale surface error matrix is first segmented into a series of submatrices. We then calculate the dwell time map for each submatrix and stitch the maps together as the final dwell time map for the original surface error matrix. Note that the edge effect of each submatrix should be carefully handled. There are two ways to handle the edges of the submatrices. The first does not extend the original surface error matrix, but extends each submatrix using the traditional methods mentioned at the end of Section 2. This may induce negative surface errors for some submatrices, and thus some negative dwell time values, so each submatrix must first have its minimal value subtracted, which results in a discontinuous dwell time map. The second extends only the original surface error matrix, which is then offset by its minimal value. For each submatrix, the "extension" is simply an enlarged data range taken from the original (extended) surface error matrix, so each submatrix and its dwell time map are generally nonnegative. This way also saves time because no per-submatrix extension is required. Thus, the second way is employed in the following simulations.

B. Simulations for Feasibility Validation

A 200 mm × 200 mm square workpiece is assumed to be figured by a 10 × 10 mm TIF (Gaussian shape, PRR = 1.0 λ/min, FWHM = 5.0 mm). The original surface error map has a scale of 500 × 500 [see Fig. 10(a)], with PV = 14.657λ and RMS = 2.53λ; it is a portion of the Peaks function in MATLAB. The TIF matrix was optimized to a scale of 25 × 25. Using matrix segmentation and stitching (the surface error matrix was segmented into 10 × 10 submatrices, so that the coefficient matrix A fits on a PC), the dwell time maps obtained by the constrained LSQR method (damped factor w = 1) with the first and second edge extension methods are shown in Figs. 10(b) and 10(c), respectively. With the first edge extension method, the dwell time map is obviously discontinuous and is not suited for engineering. With the second edge extension method, the dwell time map shown in Fig. 10(c) is continuous and much smoother, with a shape consistent with the original surface error map, which is practical in engineering. The residual error shown in Fig. 10(d) has PV = 0.625λ and RMS = 0.106λ, with an RMS convergence rate of 95.8%.
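The segmentation and stitching procedure of Section 5.A (second edge-handling method: extend the full map once, then take each submatrix with a margin from the extended map) can be sketched as follows. Sizes are illustrative, and the per-submatrix dwell time solve is replaced by an identity step so the round trip is checkable; only the windowing and reassembly are shown.

```python
import numpy as np

def segment_with_margin(Z_ext, sub, margin):
    """Split an already-extended error map Z_ext into overlapping windows.
    Each window is a `sub` x `sub` core plus a `margin`-wide border taken
    directly from Z_ext (the paper's second edge-handling method)."""
    M = Z_ext.shape[0] - 2 * margin           # rows of the original map
    N = Z_ext.shape[1] - 2 * margin           # columns of the original map
    blocks = []
    for i in range(0, M, sub):
        for j in range(0, N, sub):
            blocks.append(Z_ext[i:i + sub + 2 * margin,
                                j:j + sub + 2 * margin])
    return blocks

def stitch(blocks, shape, sub, margin):
    """Reassemble per-block results (cores only, margins discarded)."""
    M, N = shape
    out = np.empty(shape)
    k = 0
    for i in range(0, M, sub):
        for j in range(0, N, sub):
            out[i:i + sub, j:j + sub] = blocks[k][margin:margin + sub,
                                                  margin:margin + sub]
            k += 1
    return out

Z = np.arange(100.0).reshape(10, 10)          # stand-in surface error map
Z_ext = np.pad(Z, 2, mode="edge")             # extend once, up front
parts = segment_with_margin(Z_ext, sub=5, margin=2)
Z_back = stitch(parts, Z.shape, sub=5, margin=2)   # lossless round trip
```

In the real procedure each block would be solved for dwell time before stitching; because every margin is real data from the extended map rather than a per-block extrapolation, the stitched map stays continuous, which is exactly why the second method avoids the discontinuities of the first.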

Fig. 10. (a) Original surface error map generated by a Peaks function with scale of 500 × 500, PV = 14.657λ, and RMS = 2.53λ. (b) Dwell time map using the first extension method. (c) Dwell time map using the second extension method. (d) Residual error map correlated with (c), PV = 0.625λ, RMS = 0.106λ.

There are some net-shape errors at the boundaries of the submatrices in Fig. 10(d). They are mainly caused by small stitching errors that are difficult to avoid due to the edge effect of each submatrix. We would like to point out that these small net-shape errors occur only in the simulation process; because the dwell time is continuous and smooth, they do not sharpen the moving velocity of the polishing tool, and thus they do not degrade the results of the practical figuring process. The total simulation costs 133.66 s in MATLAB on a PC with a 3.3 GHz CPU and 4.0 GB memory. These results validate the feasibility of the proposed distributed method for dealing with a large scale surface error matrix using the linear equation dwell time model.

C. Simulations for Efficiency Validation

A series of simulations for validating the efficiency of the linear equation model is conducted, as summarized in Table 1. The workpiece is again a 200 mm × 200 mm square, and the scale of each original surface error matrix increases from 300 × 300 to 1000 × 1000; the scale of the TIF matrix is set as 15 × 15 and 25 × 25, respectively, both with a Gaussian shape and FWHM equal to half the TIF size. The scale of each submatrix after segmentation is 50 × 50; the resulting number of submatrices for each simulation is given in the fourth column of Table 1. The scale of the coefficient matrix A for each submatrix is 6400 × 6400 and 10^4 × 10^4 for the two TIFs, respectively, as given in the fifth column. Without the proposed distributed method, the scales of A for each simulation (third column) are all too large to be handled on the PC mentioned above, even the smallest one of 108,900 × 108,900 (in MATLAB, a 10^5 × 10^5 double-precision matrix would cost ~80 GB of memory). With the distributed method, the largest coefficient matrix per submatrix is 10^4 × 10^4, which costs only ~800 MB. It can be seen that the calculating time is generally proportional to the scales of the surface error matrix and the TIF matrix. For common engineering applications it is just several minutes, which is generally satisfactory, although longer than the numerical method in Ref. [9]. In addition, the RMS convergence rates for each simulation (last column) are all higher than 95%. These simulations further validate the efficiency of the distributed method in dealing with large scale surface error matrices.

Table 1. Time Cost with Various Conditions

Surface Error    TIF Matrix   Scale of A without       Number of     Scale of A for    Time       RMS Convergence
Matrix Scale     Scale        Distributed Method       Submatrices   Each Submatrix    Cost (s)   Rate
300 × 300        15 × 15      108,900 × 108,900        6 × 6         6400 × 6400       55.65      96.1%
300 × 300        25 × 25      122,500 × 122,500        6 × 6         10^4 × 10^4       79.64      95.3%
500 × 500        15 × 15      280,900 × 280,900        10 × 10       6400 × 6400       93.25      96.9%
500 × 500        25 × 25      302,500 × 302,500        10 × 10       10^4 × 10^4       133.66     95.8%
800 × 800        15 × 15      688,900 × 688,900        16 × 16       6400 × 6400       146.21     97.3%
800 × 800        25 × 25      722,500 × 722,500        16 × 16       10^4 × 10^4       195.88     95.1%
1000 × 1000      15 × 15      1,060,900 × 1,060,900    20 × 20       6400 × 6400       177.44     97.7%
1000 × 1000      25 × 25      1,102,500 × 1,102,500    20 × 20       10^4 × 10^4       226.34     95.5%

6. Conclusions

This study presents an efficient and robust linear equation dwell time model, with the following conclusions.

1. The linear equation model can obtain a robust and reliable solution with Tikhonov regularization.
2. A reasonable interval and a typical value of the damped factor can be determined automatically for traditional LSQR; both depend on the PRR of the TIF.
3. Compared with Wu's method, the proposed method with the typical damped factor does not require multiple retrials for a proper damped factor, and it does not need the support of an extra removal amount, a path weight factor, or a surface error weight factor.
4. A constrained LSQR method is proposed to improve the robustness of the damped factor; it obtains consistent dwell time maps with various damped factors. Compared with Wu's method, it overcomes the adverse influence of small damped factors, and it saves computational time because it requires fewer iterations to reach the stopping criteria.
5. A distributed model (i.e., matrix segmentation and stitching) is proposed to cope with large scale surface error matrices. Compared with Wu's method, it is compatible with common large surface error matrices on a common PC; thus, it substantially improves the practicality of the linear equation dwell time model in engineering.
6. For a surface error matrix with scale 500 × 500 to 1000 × 1000, the proposed distributed model costs ~2–4 min, which is generally satisfactory.
7. The RMS convergence rate is generally higher than 95% in simulations, which is sufficient in engineering.
8. This study promotes the linear equation dwell time model to be more reliable and more efficient in practical engineering.

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61308075 and 61222506) and the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20131101110026).

References

1. R. A. Jones, "Computer control for grinding and polishing," Photon. Spectra, 34–39 (1963).
2. W. Kordonski and D. Golini, "Progress update in magnetorheological finishing," Int. J. Mod. Phys. B 13, 2205–2212 (1999).


3. P. M. Shanbhag, M. R. Feinberg, G. Sandri, M. N. Horenstein, and T. G. Bifano, "Ion-beam machining of millimeter scale optics," Appl. Opt. 39, 599–611 (2000).
4. D. D. Walker, D. Brooks, A. King, R. Freeman, R. Morton, G. McCavana, and S. W. Kim, "The 'Precessions' tooling for polishing and figuring flat, spherical and aspheric surfaces," Opt. Express 11, 958–964 (2003).
5. W. Kordonski, A. Shorey, and A. Sekeres, "New magnetically assisted finishing method: material removal with magnetorheological fluid jet," Proc. SPIE 5180, 107–114 (2004).
6. R. A. Jones, "Optimization of computer controlled polishing," Appl. Opt. 16, 218–224 (1977).
7. S. R. Wilson and J. R. McNeil, "Neutral ion beam figuring of large optical surfaces," Proc. SPIE 818, 320–324 (1987).
8. L. C. Charles, C. M. Egert, and W. H. Kathy, "Advanced matrix based algorithm for ion beam milling of optical components," Proc. SPIE 1752, 54–62 (1992).
9. Z. C. Dong, H. B. Cheng, and H. Y. Tam, "Modified dwell time optimization model and its applications in subaperture polishing," Appl. Opt. 53, 3213–3224 (2014).
10. W. Deng, L. Zheng, Y. Shi, X. Wang, and X. Zhang, "Dwell-time algorithm based on matrix algebra and regularization method," Opt. Precis. Eng. 15, 1009–1015 (2007).
11. J. F. Wu, Z. W. Lu, and H. X. Zhang, "Dwell time algorithm in ion beam figuring," Appl. Opt. 48, 3930–3937 (2009).
12. L. Zhou, Y. F. Dai, X. H. Xie, C. J. Jiao, and S. Y. Li, "Model and method to determine dwell time in ion beam figuring," Nanotechnol. Precis. Eng. 5, 107–112 (2007).
13. Z. C. Dong, H. B. Cheng, and H. Y. Tam, "Modified subaperture tool influence functions of a flat pitch polisher with reverse-calculated material removal rate," Appl. Opt. 53, 2455–2464 (2014).
14. D. W. Kim, W. H. Park, S. W. Kim, and J. H. Burge, "Parametric modeling of edge effects for polishing tool influence functions," Opt. Express 17, 5656–5665 (2009).
15. D. D. Walker, G. Y. Yu, H. Y. Li, W. Messelink, R. Evans, and A. Beaucamp, "Edges in CNC polishing: from mirror segments towards semiconductors, paper 1: edges on processing the global surface," Opt. Express 20, 19787–19798 (2012).
16. P. J. Guo, H. Fang, and J. C. Yu, "Edge effect in fluid jet polishing," Appl. Opt. 45, 6729–6735 (2006).
17. R. J. Marks II, "Gerchberg's extrapolation algorithm in two dimensions," Appl. Opt. 20, 1815–1820 (1981).
18. C. Paige and M. A. Saunders, "LSQR: an algorithm for sparse linear equations and sparse least squares," ACM Trans. Math. Softw. 8, 43–71 (1982).
