
Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

Chong Fan 1,2, Chaoyun Wu 1,*, Grand Li 3 and Jun Ma 1,*

1 School of Geosciences and Info-Physics, Central South University, Changsha 410083, China; [email protected]
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
3 SiChuan Remote Sensing Geomatics Institute, No. 2, Jianshe Road, Longquanyi District, Chengdu 610100, China; [email protected]
* Correspondence: [email protected] (C.W.); [email protected] (J.M.); Tel.: +86-151-1117-5046 (C.W.); +86-132-0749-7496 (J.M.)

Academic Editors: Huajun Tang, Wenbin Wu and Yun Shi Received: 5 December 2016; Accepted: 4 February 2017; Published: 13 February 2017

Abstract: To solve the problem of inaccurate estimation of the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpretation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analyzing the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image from multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and the bicubic interpolation method.

Keywords: super resolution; point spread function (PSF); projections onto convex sets (POCS); remote sensing

1. Introduction

Image resolution refers to the number of pixels contained in an image per unit area. This parameter is an important factor used to evaluate the quality of remote sensing images. The limitations of imaging systems and the external circumstances under which images are obtained, including the inherent sensor sampling frequency, defocusing, and atmospheric disturbances [1], result in low-quality images that are blurred, misshapen, and contaminated by random noise. To solve these problems, two methods have been proposed. First, the resolution can be enhanced by increasing the chip size. However, this approach is costly and cannot significantly improve the image resolution. Second, super-resolution (SR) reconstruction can be implemented using various algorithms. In the present study, time is used to compensate for space. Certain constraints or algorithms are employed to build a high-resolution (HR) image with a higher number of pixels, more details, and better image quality than those of the observed multiple low-resolution (LR) images [2]. The proposed approach demonstrates the advantage of obtaining a large amount of information at minimum economic cost; hence, this approach has been widely used in many applications such as military monitoring and medical diagnosis.

Moreover, SR algorithms provide a wide range of applications in agriculture. Kasturiwala reported that the SR reconstruction method can estimate some missing high-frequency details from an infected leaf image [3]. This method is most useful to agricultural experts in helping farmers detect exact leaf diseases and provide accurate remedial actions. In [4], researchers introduced an SR mapping method to produce a fine-spatial-resolution land cover map from coarse-spatial-resolution remotely sensed imagery. The result of SR reconstruction is affected by the accuracy of the point spread function (PSF) estimation of the images. Therefore, the research in [5] investigates the adequacy of a remote sensing instrument's spatial resolution in monitoring crop growth in agricultural landscapes with different spatial patterns. Furthermore, studies on SR reconstruction and on the relationship between the PSFs of the HR and LR images provide significant findings.

The concept of SR reconstruction was presented in 1965, when the Harris–Goodman spectrum extrapolation method was proposed by Harris [6] and Goodman [7], and this concept has become a popular research topic in image processing. Reconstruction methods classified according to the number of LR images can be divided into two categories: reconstruction based on a single-frame image and reconstruction based on a sequence of images. Many of these algorithms utilize a single-frame LR input image reconstructed into an HR image by modeling; these algorithms also use matching mechanisms or some prior information about the image [8]. Alternatively, the HR image is estimated from a sequence of LR aliased images of the same object or scene [9,10]. These algorithms generally reconstruct an image with enhanced resolution, which exhibits tighter pixel density and better image detail, by using the aliased information of multiple LR images. The current study focuses on SR reconstruction using the information obtained from multiple LR images.

The popular methods include frequency-domain and spatial-domain algorithms. The first class is based on the shift property of the Fourier transform. The image is converted into the frequency domain to eliminate spectrum aliasing, recover much of the missing high-frequency information, and improve the spatial resolution of the image. In particular, the popular methods mainly include the spectral de-aliasing reconstruction algorithm, the recursive least squares method, and the generalized sampling scheme. Frequency-domain algorithms are advantageous because they follow a simple theory and can be applied completely in parallel. However, these algorithms present the limitation of ignoring prior knowledge in the spatial domain. The spatial-domain SR reconstruction approach uses complex observation models, which consider some spatial factors affecting the quality of the image, such as optical blur and motion blur, to reconstruct the image. Irani and Peleg [11,12] proposed the iterative back projection (IBP) algorithm. This approach estimates the initial value of the HR image through some interpolation algorithms applied to a sequence of observed LR images.
A set of simulated LR image arrays can then be obtained from the HR image by using the blurring model. Subsequently, the algorithm compares the observed LR images and the estimated LR images to obtain the correction value, taking a number of iterations to reach the final HR image. However, the accuracy of this approach is low because the solution is not unique, and the prior information is difficult to apply. Stark and Oskoui [13] proposed the projection onto convex sets (POCS) method in 1987. In this method, a convex model considers a set of constraints that limit the SR feasible solutions (such as smoothness, energy boundedness, and consistency with data observations), and the intersection of the sets is the final solution. POCS has become a significant method for solving the SR reconstruction problem. In the following years, the maximum likelihood (ML) algorithm [14] was explored; statistics showed that the method can provide an HR image through the expectation-maximization algorithm. The maximum a posteriori probability (MAP) [15,16] is another SR approach, along with some mixed SR reconstruction methods (the MAP/POCS algorithm) [17]. The POCS algorithm considers a variety of degradation factors, including blur and movement; hence, this algorithm is important in solving the SR reconstruction problem. Recently, Ogawa [18] proposed a POCS algorithm based on principal component analysis (PCA).
Xi [19] improved the initial image estimation by using wavelet bicubic interpolation for the POCS reconstruction algorithm, with evident experimental results. Liang [20] presented a POCS algorithm based on text features; in this method, text features are added as constraints to preserve the edge details and smooth the noise in text images. Meanwhile, the blind SR method proposed by Sroubek and Flusser [21] incorporates blur estimation into SR by performing an advanced deconvolution task. The model is built from the sharpness of edge regions and the smoothness of smooth regions in the total variance of the image, as well as the prior information of the image ambiguity function. Subsequently, the cross-iteration method is used to solve the model, and then the PSF and the HR image are obtained. Thus, the HR image can be reconstructed even if the degradation model and the model's parameters of the camera sensors are unknown [22,23]. Given that the methods of estimating the PSF are different, two algorithms are used for reconstruction to compare the results.

In this work, an improved POCS SR algorithm based on PSF estimation of LR remote sensing images is proposed. Exploration of the relationship between the PSF of the HR image and the PSF of multiple LR images is an essential part of this algorithm. In this study, the formula is deduced and verified by the experiments conducted. The conclusions are provided in the succeeding section. Moreover, the estimated PSF of the HR image is embedded into the original POCS SR algorithm, and the reconstruction results of three different SR methods (the proposed method, the blind SR method, and the bicubic interpolation method) in the simulated experiment and the real experiment are compared.

2. Materials and Methods

2.1. Observation Model

Generally, SR image reconstruction techniques present an inverse problem of recovering the HR image from the degraded LR images. The HR image is obtained under certain conditions, such as satisfying the Nyquist sampling theory, and is affected by some of the inherent resolution limitations of sensors in the acquisition process [24], including warping, blurring, subsampling operators, and additive noise. Therefore, an observation model (1) can be formulated, which relates the ideal HR image f to the corresponding i-th observed LR image g_i. The model can overcome the inherent resolution limitation of the LR imaging systems. The LR images display different subpixel shifts from each other because the spatial resolution is too low to capture all the details of the original scene. Finally, a goal image with denser pixel values and rich image information, called the HR image, will be achieved:

g_i = D_i H_i B_i f + n    (1)

where f is the ideal undegraded image required to be calculated, g_i is the observed LR image, and H_i represents the blur matrix, including relative camera–scene motion blur, sensor blur, atmospheric turbulence, and optical blur. Generally, the blur matrix is modeled as convolution with an unknown PSF to estimate blurs [25]. B_i is the warp matrix (e.g., rotation, translation, scaling, and so on). The relative movement parameters can be estimated using the subpixel shifts of the multiple LR images. D_i represents a subsampling matrix, and n is the lexicographically ordered noise vector. The observation model is illustrated in Figure 1.

Figure 1. Observation model.
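The degradation chain in Equation (1) can be illustrated with a short numerical sketch. The following Python code is a minimal, hypothetical example (the shift values, blur width, downsampling factor, and noise level are illustrative, not the parameters used in the paper); it generates one LR frame from an HR image by shifting (warp B_i), Gaussian blurring (H_i), subsampling (D_i), and adding noise (n), assuming NumPy and SciPy are available.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def degrade(hr, dx, dy, sigma_psf, factor, noise_std, rng):
    """Simulate g_i = D_i H_i B_i f + n for one LR frame (sketch).

    hr        : 2-D array, ideal HR image f
    dx, dy    : subpixel shift of this frame (warp B_i)
    sigma_psf : std. dev. of the assumed Gaussian PSF H_i (illustrative)
    factor    : integer subsampling factor (D_i)
    noise_std : std. dev. of the additive noise n
    """
    warped = shift(hr, (dy, dx), order=3, mode="nearest")   # B_i f
    blurred = gaussian_filter(warped, sigma=sigma_psf)      # H_i B_i f
    lr = blurred[::factor, ::factor]                        # D_i H_i B_i f
    return lr + rng.normal(0.0, noise_std, lr.shape)        # + n

# Example: four LR frames with different (hypothetical) subpixel shifts
rng = np.random.default_rng(0)
f = rng.random((314, 314))                                  # stand-in HR image
shifts = [(0.5, 0.5), (1.2, 1.2), (0.4, 0.4), (1.8, 1.8)]
lr_frames = [degrade(f, dx, dy, sigma_psf=1.0, factor=2,
                     noise_std=0.01, rng=rng) for dx, dy in shifts]
print(lr_frames[0].shape)                                   # (157, 157)
```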


Image degradation occurs when the acquired image is corrupted by many factors. The image degradation process can be viewed as a linear, shift-invariant system in which the noise can be ignored. The degradation model is described by Equation (2):

ϕ = H ∗ γ + η ≈ H ∗ γ    (2)

The model is composed of four main attributes: the original image without degradation γ, the degraded image ϕ, a PSF H, and some noise η; ∗ is the convolution operating symbol. To restore the quality of the image, H can be estimated using some PSF estimation methods, such as the knife-edge method. When the original image is processed by downsampling, the PSF h of the downsampled image is obtained. However, h does not apply to model (2); hence, the relationship between H and h is derived in Section 2.3. The derived formula will be applied to the SR reconstruction model.

2.2. Principle of POCS SR Algorithm

In this work, the POCS SR reconstruction algorithm is used to obtain high-quality remote sensing data, which can meet the requirements of agricultural data sources. The algorithm, which is simple and effective, is a set-theoretic image reconstruction method. Given the flexible space-domain observation model and the powerful prior knowledge embedding capability, the feasible region of the reconstructed image consists of the intersection of a data-consistency projective convex set and a convex constraint set. The POCS algorithm [26] is an iterative operation; the operator of the corresponding convex constraint set projects the points in the solution space to the nearest point on the surface of the convex set. After a finite number of iterations, a solution that converges to the intersection of the convex constraint sets is finally found. The POCS SR algorithm is detailed as follows:

Step 1: Estimate the image f_0 by using the linear interpolation method for the LR images.

Step 2: Compute the motion compensation of the pixel of each LR image. The correspondence between the LR image and the HR image is given by Equation (3):

g(m_1, m_2, l) = ∑_{n_1, n_2} f(n_1, n_2) h(n_1, n_2; m_1', m_2', l) + n(m_1, m_2, l)    (3)

where (m_1, m_2) is the point in the LR image, and (n_1, n_2) is the corresponding point in the HR image.

1. Obtain the position of the pixel on the LR image of each frame g(m_1, m_2, l) and on the HR image f(n_1, n_2).
2. Calculate the parameter h(n_1, n_2; m_1', m_2', l), which represents the range and the value of the PSF according to the position of the pixel.
3. Simulate the sampling process to obtain the simulated LR image. The observed LR image g(m_1, m_2, l) can be constrained by a convex set C_{n_1, n_2, k}, as follows:

C_{n_1, n_2, k} = { f(m_1, m_2, l) : |r^(f)(n_1, n_2, k)| ≤ δ_0(n_1, n_2, k) },  0 ≤ n_1, n_2 ≤ N − 1,  k = 1, · · · , L    (4)

The projection P(n_1, n_2, k)[x(m_1, m_2, l)] of any point x(m_1, m_2, l) onto C(n_1, n_2, k) is defined in Equation (5) as follows:

P(n_1, n_2, k)[x(m_1, m_2, l)] =
  x(m_1, m_2, l) + [(r^(x)(n_1, n_2, k) − δ_0(n_1, n_2, k)) / ∑_{o_1}∑_{o_2} h²(n_1, n_2; o_1, o_2, k)] · h(n_1, n_2; m_1, m_2, l),   if r^(x)(n_1, n_2, k) > δ_0(n_1, n_2, k)
  x(m_1, m_2, l),   if −δ_0(n_1, n_2, k) < r^(x)(n_1, n_2, k) < δ_0(n_1, n_2, k)
  x(m_1, m_2, l) + [(r^(x)(n_1, n_2, k) + δ_0(n_1, n_2, k)) / ∑_{o_1}∑_{o_2} h²(n_1, n_2; o_1, o_2, k)] · h(n_1, n_2; m_1, m_2, l),   if r^(x)(n_1, n_2, k) < −δ_0(n_1, n_2, k)    (5)

4. Calculate the residual r^(f)(n_1, n_2, k) between the real image and the simulated image. The formula can be described by (6):

r^(f)(n_1, n_2, k) = g(n_1, n_2, k) − ∑ f(m_1, m_2, l) · h(n_1, n_2; m_1, m_2, l)    (6)

where h(n_1, n_2; m_1, m_2, l) is the impulse response coefficient and δ_0 is the confidence level on the observed result. In this paper, δ_0 = c·δ_v, where δ_v is the standard deviation of the noise and c ≥ 0 is determined by an appropriate statistical confidence range. These settings define HR images that are consistent with the observed LR image frames within a certain confidence range proportional to the observed noise variation.
5. Correct the pixel value of the HR image according to the residuals.

Step 3: Repeat from Step 2 until convergence Given a projection operator, the estimated value fˆ(m1 , m2 , l ) of the HR image f (m1 , m2 , l ) can be obtained from all the LR images g(n1 , n2 , k) through many iterations, such as Equation (7): h i e fˆ(i) (m1 , m2 , l ) i = 0, 1, · · · fˆ(i+1) (m1 , m2 , l ) = Tλ T (7) e is the combination of all the relaxation projection operators associated with C (n1 , n2 , k). where T The initial estimate, f 0 (m1 , m2 , l ), is obtained by bilinear interpolation of the reference frame in the super-resolution grid. 2.3. Relationship between H and h In this section, the relationship between H and h is deduced and validated, and the simulation experiment is designed to verify the correctness of the formula. The derived formula is proven to be suitable for SR reconstruction. In the process of SR reconstruction, the PSF of the HR image can be estimated by the PSF of the LR image; and PSFhigh = k· PSFlow , where the downsampling ratio is k. When knife-edge areas are extracted from a remote sensing image with gray values from 0 to 1, the original signal along the gradient direction is represented by the unit step signal E. Given the PSF H and degrading and downsampling operator D , ε0 is finally expressed as the signal in the image. Equation (8) can be determined in the first downsampling model: ε0 ( x ) = Dk e ∗[ E( x ) ∗ H ( x )]
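The structure of Steps 1–3 can be summarized in code. The following Python sketch is a simplified, hypothetical illustration (it assumes purely translational motion, a single Gaussian PSF, an integer zoom factor, and SciPy's ndimage routines, and it replaces the full impulse-response bookkeeping of Equations (3)–(6) with plain shift/blur/resample operations); it is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift, zoom

def pocs_sr(lr_frames, shifts, factor, sigma_psf, delta0=0.01, n_iter=20):
    """Simplified POCS SR iteration (sketch).

    lr_frames : list of LR images g_k
    shifts    : list of (dy, dx) subpixel shifts of each frame
    factor    : integer upsampling factor
    sigma_psf : std. dev. of the assumed Gaussian PSF of the HR image
    delta0    : confidence bound delta_0 of Equation (4)
    """
    # Step 1: initial estimate f_0 by interpolation of the reference frame
    f = zoom(lr_frames[0], factor, order=1)

    for _ in range(n_iter):                      # Step 3: iterate until convergence
        for g, (dy, dx) in zip(lr_frames, shifts):
            # Step 2: simulate this LR frame from the current HR estimate
            sim = gaussian_filter(shift(f, (dy, dx), order=1),
                                  sigma_psf)[::factor, ::factor]
            r = g - sim                          # residual r^(f), Equation (6)
            # keep only residuals outside the confidence band (Equations (4)-(5))
            r = np.where(np.abs(r) > delta0, r - np.sign(r) * delta0, 0.0)
            # back-project the correction onto the HR grid (simplified projection)
            corr = shift(zoom(r, factor, order=1), (-dy, -dx), order=1)
            f = f + gaussian_filter(corr, sigma_psf)
    return f
```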

2.3. Relationship between H and h

In this section, the relationship between H and h is deduced and validated, and a simulation experiment is designed to verify the correctness of the formula. The derived formula is proven to be suitable for SR reconstruction. In the process of SR reconstruction, the PSF of the HR image can be estimated from the PSF of the LR image, with PSF_high = k·PSF_low, where the downsampling ratio is k. When knife-edge areas are extracted from a remote sensing image with gray values from 0 to 1, the original signal along the gradient direction is represented by the unit step signal E. Given the PSF H and the degrading and downsampling operator D, ε_0 is finally expressed as the signal in the image. Equation (8) can be determined in the first downsampling model:

ε_0(x) = D_k ∗̃ [E(x) ∗ H(x)]    (8)

where ∗̃ is the downsampling operating symbol. The downsampling operator D_k can be taken as a calculation of one-dimensional downsampling by a multiple of k, which is equivalent to a k-fold compression of the function. The formula is shown in (9):

D_k ∗̃ f(x) = f(x/k)    (9)

Therefore, when the variable is less than 0, the signal value of E is 0; when the variable is greater than 0, the signal value is 1. With general downsampling using the convolution operation method, the knife-edge areas can be mathematically formulated as follows:


ε_0(x) = E(x/k) ∗ H(x/k)
       = ∫_{−∞}^{+∞} E(x/k − t) H(t) dt
       = ∫_{−∞}^{x/k} 1·H(t) dt + ∫_{x/k}^{+∞} 0·H(t) dt
       = ∫_{−∞}^{x/k} H(t) dt
       = ∫_{−∞}^{x} (1/k) H(t/k) dt    (10)

According to the first downsampling model, Equation (11) can be deduced:

g(x, y) = D_k ∗̃ [F(x, y) ∗ H(x, y)]
        = D_k ∗̃ ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} F(x − u, y − v) H(u, v) du dv
        = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} (1/k²) F((x − u)/k, (y − v)/k) H(u/k, v/k) du dv    (11)

Based on the principle of the knife-edge method, the PSF can be obtained as the derivative of the edge spread function (ESF). The PSF is calculated as shown in Equation (12) by the knife-edge method:

h(x) = dε_0(x)/dx = (1/k) H(x/k)    (12)

We can assume that the original signal can be restored effectively by the PSF with the values calculated by the knife-edge method. Deconvolution is first applied to the downsampled image g with this PSF. The deconvolved image then requires upsampling, so that the original image F can be obtained, as shown in Equation (13):

F(x, y) = U_{1/k} ∗̃ [g(x, y) ⊗ h(x, y)]    (13)

where ⊗ is the deconvolution operation. The upsampling operator U_{1/k} can be taken as the calculation of upsampling by a multiple of 1/k², which is equivalent to a 1/k compression of the two-dimensional function along both coordinate axes, as shown in (14):

U_{1/k} ∗̃ f(x, y) = f(kx, ky)    (14)

If Equation (13) is correct, Equation (15) must hold:

g(x, y) = [D_k ∗̃ F(x, y)] ∗ h(x, y)    (15)

Moreover:

[D_k ∗̃ F(x, y)] ∗ h(x, y) = F(x/k, y/k) ∗ h(x, y)
    = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} F((x − u)/k, (y − v)/k) h(u, v) du dv
    = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} (1/k²) F((x − u)/k, (y − v)/k) H(u/k, v/k) du dv    (16)

Hypothesis (13) is thus proven to be tenable because Equations (11) and (16) yield the same result; hence, Equation (15) is correct. This proves that the original signal can be restored effectively by the PSF of the existing degraded image, and the result can be applied in the process of image SR reconstruction: the PSF of the LR image calculated by the knife-edge method can be used in the SR reconstruction process.


The PSF is approximated by Gaussian functions with appropriate parameters because the PSF follows a Gaussian distribution [27–29]. The PSF H_i of the HR image can be written as follows:

H_i(x) = (1 / (√(2π)·Σ)) · e^(−x² / (2Σ²))    (17)

The PSF h_i of the LR image can be expressed using Equation (18):

h_i(x) = (1 / (√(2π)·σ)) · e^(−x² / (2σ²))    (18)

Therefore, the relationship between the Gaussian function parameters of the PSFs H_i and h_i can be deduced from Equation (12), as shown in Equations (19) and (20), where Σ is the Gaussian function parameter of H_i and σ is the Gaussian function parameter of h_i:

(1 / (√(2π)·σ)) · e^(−x² / (2σ²)) = (1/k) · (1 / (√(2π)·Σ)) · e^(−(x/k)² / (2Σ²))    (19)

σ = kΣ    (20)

A simulation experiment is conducted to verify the proposed computation formula. A man-made knife-edge figure is drawn using computer language, and the figure is degraded by convolution with a PSF modeled by Gaussian functions. The selection process of the knife-edge area is shown in Figure 2. Seven Gaussian function parameters were set at 0.5, 0.75, 1.0, 1.5, 1.75, 2.0, and 2.5. The resized image obtained by resampling is presented with different multiples of k set at 0.5, 1.5, 2.0, 2.5, and 3.0. The knife-edge area must be selected to obtain the Gaussian function parameter of the sampled image; the size of the region is 15 × 15 pixels. According to the formula derived, the result is in correspondence with the original parameter. The results are summarized in Table 1, and Figure 3 shows that the Gaussian function parameters of the original and scaled images basically satisfy the linear relationship.


Figure 2. Selection process of the knife-edge area. The resized image is obtained by resampling with multiples of k.

The results in Figure 3 and Table 1 show that the PSF Gaussian function parameters of the images before and after resampling satisfy the linear relationship when the image is scaled at different scales of k. The ratio of the scaled parameter variation to the original parameter variation is close to k, which satisfies the formula just deduced in Equation (20).

Figure 3. Relationship between the two parameters (original Gaussian function parameters on the horizontal axis, Gaussian function parameters after scaling on the vertical axis, for k = 0.5, 1.5, 2, 2.5, and 3).

Table 1. Gaussian function parameters before and after scaling.

σ       Σ (k = 0.5)   Σ (k = 1.5)   Σ (k = 2)   Σ (k = 2.5)   Σ (k = 3)
0.5     0.3154        0.8556        1.1164      1.401         1.6878
0.75    0.4375        1.1739        1.5565      1.944         2.3341
1       0.5476        1.5337        2.0346      2.5461        3.0514
1.5     0.7861        2.2715        3.0206      3.7757        4.5295
1.75    0.9101        2.642         3.5195      4.3967        5.2731
2       1.0336        3.0144        4.0175      5.019         6.0212
2.5     1.2775        3.7623        5.0135      6.2669        7.5232
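The simulation just described can be reproduced with a short script. The sketch below (assuming NumPy and SciPy; the edge geometry, blur value, and moment-based estimator are illustrative simplifications) blurs a synthetic knife-edge with a Gaussian of parameter σ, rescales the image by a factor k, re-estimates the parameter from the derivative of the edge profile, and checks that the ratio of the two estimates is approximately k, as stated by Equation (20) and observed in Table 1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def edge_sigma(img):
    """Rough Gaussian-parameter estimate for a vertical knife-edge (via its LSF)."""
    esf = img.mean(axis=0)                      # average rows -> edge spread function
    lsf = np.clip(np.gradient(esf), 0, None)    # line spread function
    x = np.arange(lsf.size)
    mu = np.sum(x * lsf) / np.sum(lsf)          # first moment
    return np.sqrt(np.sum((x - mu) ** 2 * lsf) / np.sum(lsf))  # second moment

# synthetic knife-edge image with gray values 0 and 1 (illustrative size)
edge = np.zeros((200, 200))
edge[:, 100:] = 1.0

sigma = 1.5                                     # original Gaussian parameter
blurred = gaussian_filter(edge, sigma)

for k in (0.5, 1.5, 2.0, 3.0):
    scaled = zoom(blurred, k, order=3)          # resample with multiple k
    ratio = edge_sigma(scaled) / edge_sigma(blurred)
    print(f"k = {k}: estimated ratio = {ratio:.2f}")   # approximately k (cf. Table 1)
```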


The correctness of Equation (20) for real images can be proven by the following experiments. The experimental images with some knife-edge areas are selected from the ADS 40 remote sensing image with a size of 200 × 200 (Example 1) and from the unmanned aerial vehicle (UAV) image with a size of 800 × 800 (Example 2). The experimental data are shown in Figure 4.

Figure 4. Experimental data. (a) Original ADS 40 image; the image enclosed in the red box is the knife-edge area used to estimate the Gaussian function parameter (σ); (b) four downsampled LR images of the ADS 40 image; (c) original UAV image; (d) four downsampled LR images of the UAV image.

First, four LR images must be acquired from the downsampling model, as shown in Figure 5, in which the original image becomes a series of LR image sequences with a size of 100 × 100.

Figure 5. Graphic of a downsampling model. (a) Original image; (b) four downsampled LR images.

Second, the knife-edge areas of the four LR images and of the experimental image are estimated using the PSF estimation based on the slant knife-edge method. The PSFs can be obtained separately. Finally, an oversampling rate is set at 2, indicating that the original image is zoomed out by half in the experiment. Based on the relationship deduced above between the images before and after downsampling, the Gaussian function parameter σ after downsampling must be half of the original. The results are shown in Table 2. The values coincide with the equation, proving that SR image reconstruction based on the PSFs of LR images is possible.

Table 2. Gaussian function parameter (σ) of the image before and after downsampling.

             Original Image (σ)   Estimate of Downsampling Image (σ/2)   Real Result of Each Downsampling Image
Example 1    0.5894               0.2992                                 0.2979, 0.3067, 0.3102, 0.3098
Example 2    0.3809               0.1905                                 0.2235, 0.2158, 0.1884, 0.1923

2.4. PSF Estimation of Low-Resolution Remote Sensing Images

The optical information of the remote sensing image is blurred in the image capture process because of the relative motion between the photographed object and the satellite, the CCD, and atmospheric turbulence. The PSF of an imaging platform represents the response of the imaging system to a point source. Therefore, the calculation of the PSF of the acquired image is a significant step in restoring the ideal remote sensing image. The blurring process [30], as a convolution of an image, is shown in Equation (21):

g(x, y) = f(x, y) ∗ h(x, y) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} f(α, β)·h(x − α, y − β) dα dβ    (21)

where ∗ is the convolution operator, h(x, y) represents the PSF, f(x, y) is the original image, g(x, y) is the degraded image, and α and β are the blurring filters.

However, even when the PSF [31] is measurable, it is influenced by some unpredictable conditions; hence, many methods have been proposed to solve the problem. The most common methods for PSF estimation are the knife-edge [32], spotlight [33], and pulse [34] methods, of which the knife-edge method is the most commonly employed. The principles of the typical knife-edge method [35,36] and the ideal knife-edge area are shown in Figure 6. The knife-edge area is assumed to be a square area, and the knife edge goes through the center of the area. Each row of the knife-edge area satisfies formula (22), in which the value of the image greater than the edge boundary line x_0 is 1, and the value less than x_0 is zero [36,37]. However, the typical knife-edge method for point spread function estimation is limited by the edge slant angle. The knife-edge in the image must be parallel to the sampling direction, and the slant angle should be within 8° [32]. Under most circumstances, slant knife-edges have a certain slope relative to the direction of the ideal ones; thus, the ideal knife-edge cannot be found in all situations. Qin et al. [38] estimated the PSF by a robust method to solve this problem in the typical knife-edge method; they built a mathematical model of the relationship between the line spread function (LSF) estimated by the typical knife-edge method and the real PSF. Although an accurate PSF is obtained, the computed PSF still contains an error because the algorithm uses the discrete function to derive the subpixel error:

f(x, y) = { 1, x > x_0
            0, x ≤ x_0    (22)

Figure 6. Ideal knife-edge area.

In this study, a novel slant knife-edge method is used because this method fits the LSF directly and ensures the evenness of the edge spread function (ESF) samples to improve the accuracy of the PSF estimation. Figure 7 illustrates the process of the novel slant knife-edge method in a simplified sequence flow diagram. The knife-edge area can be searched using several methods, and the three following requirements must be satisfied:

• The area cannot be extracted from the borders of the image, to avoid the noise around the borders.
• The area must be excellent in linearity to ensure the accuracy of the PSF estimation.
• Evident gray value differences between the two sides of the edge must be obtained, to reduce the influence of the noise.

[Flow diagram: the knife-edge line segment detection → sampling of ESF → stratified resampling of ESF → sampling and fitting of LSF → the two-dimensional PSF]

Figure 7. Process of the novel slant knife-edge method in a simplified sequence flow diagram.
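A minimal sketch of the ESF-to-PSF steps outlined in Figure 7 is given below, assuming NumPy and SciPy. The edge-line coefficients, bin width, and simple Gaussian fit are illustrative simplifications of the stratified resampling and fitting described above, not the exact procedure of the paper. The sketch projects the pixels of a knife-edge region onto the edge normal to build an oversampled ESF, differentiates it to obtain the LSF, and fits a Gaussian to estimate the PSF parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_psf_sigma(region, a, b, c, bin_width=0.25):
    """Estimate the Gaussian PSF parameter of a slanted knife-edge region (sketch).

    region    : 2-D knife-edge area (e.g., 15 x 15 pixels)
    a, b, c   : coefficients of the fitted edge line a*x + b*y + c = 0
    bin_width : subpixel bin size used to resample the ESF
    """
    ys, xs = np.mgrid[0:region.shape[0], 0:region.shape[1]]
    # signed distance of every pixel to the edge line -> abscissa of the ESF sample
    dist = (a * xs + b * ys + c) / np.hypot(a, b)
    order = np.argsort(dist.ravel())
    d, v = dist.ravel()[order], region.ravel()[order]

    # stratified resampling of the ESF on a uniform subpixel grid
    bins = np.arange(d.min(), d.max() + bin_width, bin_width)
    idx = np.digitize(d, bins)
    esf_x = np.array([d[idx == i].mean() for i in np.unique(idx)])
    esf_y = np.array([v[idx == i].mean() for i in np.unique(idx)])

    # LSF = derivative of the ESF, then fit a Gaussian to it
    lsf = np.gradient(esf_y, esf_x)
    gauss = lambda x, amp, mu, sig: amp * np.exp(-(x - mu) ** 2 / (2 * sig ** 2))
    p0 = (lsf.max(), esf_x[np.argmax(lsf)], 1.0)
    (_, _, sigma), _ = curve_fit(gauss, esf_x, lsf, p0=p0)
    return abs(sigma)
```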

The original data and results of the experiment are displayed in Figure 8. The measurement area and three examples of extracted knife-edge areas are shown. Most of the knife-edge areas were extracted successfully, and the process of removing the weakly related areas is displayed. The follow-up steps are described in detail: the ESF sampling, ESF denoising, ESF resampling, and LSF sampling results are shown in Figure 9.

Figure 8. (a) Measurement area; (b), (c), and (d) are some examples of extracted knife-edge areas.

Figure 9. (a) Results of ESF sample; (b) ESF denoised sample; (c) ESF resample; (d) LSF sample.

3. Results and Discussion


3.1. Examples of Simulated Images

The experimental results alone cannot demonstrate the effectiveness of the SR reconstruction algorithms qualitatively because of the lack of the ideal HR image. Therefore, the design of a simulation experiment is necessary, applying three different SR approaches and comparing the difference between the reconstructed images and the original image by the peak signal-to-noise ratio (PSNR) and the mean square error (MSE) [39].

In this section, two series of comparative experiments are designed to evaluate the correctness of the deduced formula presenting the relation of the PSF Gaussian function parameter before and after SR reconstruction, as well as to compare the differences among the three reconstruction methods, namely, the proposed method, the blind SR method, and the bicubic interpolation method, on the original HR image. Moreover, given PSNR and MSE as the performance evaluation indicators, the experiment verifies the efficiency of the SR image reconstructed by the modified algorithm. The experimental process is designed as follows:

1. The experimental image with some knife-edge areas is selected from the ADS 40 remote sensing image, with a size of 314 × 314, as shown in Figure 10a.
2. Referencing the SR observation model, the original image, blurred with the given low-pass filter and downsampled with a factor of 2, generates four LR images with a size of 157 × 157, as shown in Figure 10b. One of the LR simulated images is shown in Figure 10c. These four images correspond to the actual transformation parameters of the reference image, as shown in Table 3, where Dx represents the actual offset in the horizontal direction of the simulated LR image and Dy is the actual offset in the vertical direction. Dθ is the rotation angle; because this experiment mainly considered translation, the rotation angle is 0. The default units are pixels and degrees.
3. The knife-edge areas of the four LR images are estimated using the PSF estimation based on the slant knife-edge method, which is an accurate method. The PSFs can then be obtained separately.
4. According to the derived formula (20), the PSF must be multiplied by the downsampling factor 2. Afterward, the POCS method with the estimated 2·PSF is used to reconstruct the HR image from these four downsampled LR images.
5. The blind SR and bicubic interpolation methods are used for the comparative experiments. The similarity between the experimental image and the resulting images of the three methods is observed.
6. Evaluation of the experimental results.

Figure 10. Simulation results. (a) Original image; (b) four simulated LR images; (c) one of the LR interpolated images, with the same size as that of the original image.

Table 3. Transformation parameters of the simulation image.

    Dx    Dy    Dθ
1   0.5   0.5   0
2   1.2   1.2   0
3   0.4   0.4   0
4   1.8   1.8   0

No significant detailed difference between the original image and the modified reconstructed image in Figure 11 can be observed. The results testify that the algorithm can effectively reconstruct images. However, the details of the resulting images of the blind SR reconstruction algorithm and the bicubic interpolation algorithm are inconsistent with the original image in terms of heavy noise, aliasing, excessive sharpening, and so on. Nevertheless, both algorithms can sharpen edges and recover large amounts of information.

Figure 11. Simulation results of experiments with the three methods. The images enclosed in red boxes are the details of the three different algorithms. (a) Original image; (b) image based on the proposed algorithm; (c) image based on the blind SR reconstruction algorithm; and (d) image based on the bicubic interpolation algorithm.

PSF can improve the recovery image information and achieve a good performance of SR reconstruction. Table 4. Evaluation results of the experiment.

Table 4. Evaluation results of the experiment.

                                                        PSNR      MSE
Image based on the proposed algorithm                   96.7904   1.3616 × 10⁻⁵
Image based on the blind SR reconstruction algorithm    54.5366   2.2884 × 10⁻¹
Image based on the bicubic interpolation algorithm      81.8529   4.2441 × 10⁻⁴
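For reference, the two full-reference indicators used in Table 4 can be computed as follows. This is a minimal Python sketch; the peak value and the [0, 1] image range are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def mse(reference, reconstructed):
    """Mean square error between the reference HR image and a reconstruction."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstructed, dtype=float)
    return np.mean((ref - rec) ** 2)

def psnr(reference, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB (peak = maximum possible pixel value)."""
    err = mse(reference, reconstructed)
    return np.inf if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```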

3.2. Examples of Real Images

In this section, the proposed algorithm is applied in practice. The experiments are designed to investigate the effect of SR reconstruction of the POCS method with the estimated PSF in the agricultural application. Given that data sources in agriculture use HR remote sensing images, the agricultural region of the GF-2 image is selected as the experimental data, with an image size of 400 × 400. The set of LR inputs is the overlapped area of two adjacent images. The other data are UAV images with a size of 500 × 500, used to prove the stability of the proposed algorithm. The experimental results are compared with the blind SR reconstruction and the bicubic interpolation reconstruction to verify the effectiveness. Because the reference image is unavailable, the quality metrics PSNR and MSE cannot be used to compare the advantages of the three algorithms. Therefore, we choose a no-reference metric Q, which reacts in a natural way to the presence of noise and blur, to provide a quantitative measure of the true image; a drop in its value means that the variance of the noise rises and the image becomes blurry [40]. Simultaneously, manual visual interpretation is also used as a criterion.

In the first set of experiments, the knife-edge areas are extracted using the novel slant knife-edge method. The experimental data and details are shown in Figure 12. However, the edge of the agricultural land is unclear, and the house is fuzzy, thereby preventing researchers from surveying the area accurately.


Figure 12. (a,b) are the experimental data; (c) detail of the road; (d) detail of the house; (e) detail of the paddy fields.

The knife-edge areas are shown in Figure 13. The knife-edge region selection is based on the relevant center coordinate of the knife-edge area. Two knife-edge areas are selected (c) from a total of six knife-edge areas (b), as shown in Table 5. The experimental data show that the region centered at [36, 320] is the clearest knife-edge area. Therefore, with this point as the center, a square with a radius of 7 pixels is drawn to select the area, which is shown in (d). Given these data, the PSF is calculated for the subsequent reconstruction experiments; a sketch of this selection step is given after Table 5.

Figure 13. Process of extracting the edge region by using the novel slant knife-edge method. (a) Edge detection by the Canny algorithm; (b) all knife-edge areas are described in white boxes; (c) preferred knife-edge areas; (d) selected knife-edge area.

Table 5. List of the center coordinates of knife-edge areas.

Number    [X, Y]        Relevance
1         [36, 320]     0.9825
2         [249, 207]    0.9763
3         [144, 146]    0.9748
4         [180, 157]    0.9722
5         [261, 338]    0.9557
6         [306, 28]     0.9187
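The selection step can be sketched as follows: Canny edge detection produces candidate knife-edge regions, and a square window of radius 7 pixels is cut around the chosen center, here [36, 320]. The Canny thresholds, the (x, y) indexing convention, and the file name are illustrative assumptions, not values reported by the authors:

```python
import cv2
import numpy as np

def cut_knife_edge(image, center_xy, radius=7):
    """Cut a (2 * radius + 1)-pixel square knife-edge window around center_xy = (x, y)."""
    x, y = center_xy
    return image[y - radius:y + radius + 1, x - radius:x + radius + 1]

# Hypothetical input file; in the paper this is the GF-2 overlap area.
gray = cv2.imread("gf2_overlap_area.tif", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)          # candidate knife-edge pixels (cf. Figure 13a)
roi = cut_knife_edge(gray, (36, 320))     # clearest knife-edge area from Table 5
```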

Subsequently, the two selected PSFs are taken into the POCS algorithm to obtain the reconstructed image. The nominal values of metric Q of the three reconstructed images are shown in Table 6. The metric Q of the image based on the proposed algorithm is the maximum, which shows that the algorithm has good visual performance and detail preservation. Moreover, Figure 14 shows the results and details of the HR image reconstruction. In region A, the edges of the building are clear and distinct in (e) and (f). The bicubic interpolation method can only interpolate, and the interpolated result is unclear. Moreover, the images reconstructed by the blind SR reconstruction method present jagged edges. Therefore, the proposed method and the blind SR reconstruction method can effectively reconstruct the details of the building. In summary, the results indicate that all these algorithms can produce robust SR images. However, the proposed method demonstrated a better effect than the other algorithms.

Table 6. The nominal values of metric Q of the three reconstructed images in the first set of experiments.

Image Reconstructed by Different Algorithms        The Value of Q
the proposed algorithm                             24.2285
the blind SR reconstruction algorithm              21.6771
the bicubic interpolation algorithm                17.3591
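How the estimated PSF enters the reconstruction can be illustrated with a single POCS data-consistency pass: each LR pixel is simulated by weighting the HR estimate with the PSF, and the residual is distributed back over the PSF support. This is a minimal sketch under simplifying assumptions (LR frames already registered to the HR grid, a symmetric threshold delta, borders skipped), not the authors' exact implementation:

```python
import numpy as np

def pocs_pass(hr, lr, psf, scale=2, delta=1.0):
    """One projection onto the data-consistency set of a single registered LR frame."""
    hr = hr.astype(np.float64).copy()
    k = psf / psf.sum()                       # normalized PSF estimated for the HR image
    r = k.shape[0] // 2
    for i in range(lr.shape[0]):
        for j in range(lr.shape[1]):
            ci, cj = i * scale, j * scale     # HR pixel observed by LR pixel (i, j)
            window = hr[ci - r:ci + r + 1, cj - r:cj + r + 1]
            if window.shape != k.shape:
                continue                      # skip image borders in this sketch
            residual = lr[i, j] - np.sum(k * window)
            if abs(residual) > delta:         # outside the consistency band: correct
                hr[ci - r:ci + r + 1, cj - r:cj + r + 1] += residual * k / np.sum(k * k)
    return hr
```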

Figure 14. Actual results of the experiments using the three methods. The images enclosed in a red box are the details of the three different algorithms. (a) Original LR image; (b) image based on the proposed algorithm; (c) image based on the blind SR reconstruction algorithm; (d) image based on the bicubic interpolation algorithm; (e) is the detail of reconstruction in region A by the proposed algorithm; (f) is the detail of reconstruction in region A by the blind SR reconstruction algorithm; and (g) is the detail of reconstruction in region A by the bicubic interpolation algorithm.

In the second set of experiments, we choose the clearest knife-edge area, centered at [114, 329], in the real images according to the previous experimental procedure. The UAV images, as well as the results and details of the HR image reconstruction, are shown in Figure 15, and the values of metric Q are listed in Table 7. The reconstruction results of the proposed method are the most natural; the image becomes clearer as the amount of recovered image information increases. Although the blind SR algorithm can achieve a certain reconstruction effect, the edges of its reconstructed image are not as sharp as those of the image obtained using the proposed method.


Figure 15. Actual results of the second set of experiments using the three methods. The images enclosed in a red box are the details of the three different algorithms. (a) Original LR image; (b) image based on the proposed algorithm; (c) image based on the blind SR reconstruction algorithm; (d) image based on the bicubic interpolation algorithm; (e) is the detail of reconstruction in region A by the proposed algorithm; (f) is the detail of reconstruction in region A by the blind SR reconstruction algorithm; (g) is the detail of reconstruction in region A by the bicubic interpolation algorithm; (h) is the detail of reconstruction in region B by the proposed algorithm; (i) is the detail of reconstruction in region B by the blind SR reconstruction algorithm; and (j) is the detail of reconstruction in region B by the bicubic interpolation algorithm.

Table 7. The nominal values of metric Q of the three reconstructed images in the second set of experiments.

Image Reconstructed by Different Algorithms        The Value of Q
the proposed algorithm                             40.1082
the blind SR reconstruction algorithm              34.8491
the bicubic interpolation algorithm                28.3006

4. Conclusions

In summary, the POCS method with the estimated PSF based on multiple LR images describes a number of key initiatives, including the improvement of the accuracy of the PSF of LR images by a novel slant knife-edge method. The validity and reliability of the formula, which derives the relationship between the images before and after downsampling, have been proven: the downsampling multiplied by the PSF of the LR images is equal to the estimated PSF of the HR image. The formula can be applied in image restoration and SR reconstruction, and it can also enhance the clarity of agricultural remote sensing images. Finally, the estimated PSF was combined with the POCS method to improve the accuracy of the SR reconstruction process. Our experimental results show that the deduced formula of the PSF is accurate and significant in the development of the restoration and reconstruction processes. However, some problems remain to be solved. The quality of the knife-edge area significantly influences the estimation accuracy of the PSF, leading to some errors.

Acknowledgments: This work was supported by the Major State Basic Research Development Program of China (No. 2012CB719904 of 973 Program).

Author Contributions: Chong Fan and Chaoyun Wu conceived and designed the experiments; Chaoyun Wu performed the experiments; Chong Fan and Grand Li analyzed the data; Grand Li contributed analysis tools; Chaoyun Wu wrote the paper.


Conflicts of Interest: The authors declare no conflict of interest.

References

1. Lu, Z.W.; Wu, C.D.; Chen, D.; Qi, Y.; Wei, C. Overview on image super resolution reconstruction. In Proceedings of the 26th Chinese Control and Decision Conference (2014 CCDC), Changsha, China, 31 May–2 June 2014; IEEE: New York, NY, USA, 2014; pp. 2009–2014.
2. Park, S.C.; Park, M.K.; Kang, M.G. Super-Resolution Image Reconstruction: A Technical Overview. IEEE Signal Process. Mag. 2003, 20, 21–36. [CrossRef]
3. Kasturiwala, S.B.; Siddharth, A. Adaptive Image Superresolution for Agrobased Application. In Proceedings of the 2015 International Conference on Industrial Instrumentation and Control (ICIC), College of Engineering Pune, Pune, India, 28–30 May 2015; pp. 650–655.
4. Ling, F.; Foody, G.M.; Ge, Y.; Li, X.; Du, Y. An Iterative Interpolation Deconvolution Algorithm for Superresolution Land Cover Mapping. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7210–7222. [CrossRef]
5. Duveiller, G.; Defourny, P.; Gerard, B. A Method to Determine the Appropriate Spatial Resolution Required for Monitoring Crop Growth in a Given Agricultural Landscape. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008; pp. 562–565.
6. Harris, J.L. Diffraction and resolving power. JOSA 1964, 54, 931–936. [CrossRef]
7. Goodman, J.W. Introduction to Fourier Optics; Roberts and Company Publishers: New York, NY, USA, 2005.
8. Sen, P.; Darabi, S. Compressive Image Super-resolution. In Proceedings of the 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–4 November 2009.
9. Hardie, R.C.; Barnard, K.J.; Armstrong, E.E. Joint MAP Registration and High-Resolution Image Estimation Using a Sequence of Undersampled Images. IEEE Trans. Image Process. 1997, 6, 1621–1632. [CrossRef] [PubMed]
10. Wheeler, F.W.; Liu, X.; Tu, P.H. Multi-frame super-resolution for face recognition. In Proceedings of the First IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS 2007), Crystal City, VA, USA, 27–29 September 2007; IEEE: New York, NY, USA, 2007; pp. 1–6.
11. Irani, M.; Peleg, S. Image sequence enhancement using multiple motions analysis. In Proceedings of the 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, IL, USA, 15–18 June 1992; pp. 216–221.
12. Irani, M.; Peleg, S. Super resolution from image sequences. In Proceedings of the 10th International Conference on Pattern Recognition, Atlantic City, NJ, USA, 16–21 June 1990; Volume 2, pp. 115–120.
13. Stark, H.; Oskoui, P. High resolution image recovery from image plane arrays using convex projections. J. Opt. Soc. Am. 1989, 6, 1715–1726. [CrossRef]
14. Alam, M.; Jamil, K. Maximum likelihood (ML) based localization algorithm for multi-static passive radar using range-only measurements. In Proceedings of the 2015 IEEE Radar Conference, Sandton, Johannesburg, South Africa, 27–30 October 2015; IEEE: New York, NY, USA, 2015; pp. 180–184.
15. Hardie, R.C.; Barnard, K.J.; Armstrong, E.E. Joint MAP registration and high-resolution image estimation using a sequence of undersampled images. IEEE Trans. Image Process. 1997, 6, 1621–1633. [CrossRef] [PubMed]
16. Tom, B.C.; Katsaggelos, A.K. Reconstruction of a high-resolution image by simultaneous registration, restoration, and interpolation of low-resolution images. In Proceedings of the 2nd IEEE Conference on Image Processing, Los Alamitos, CA, USA, 23–26 October 1995; pp. 539–542.
17. Elad, M.; Feuer, A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 1997, 6, 1646–1658. [CrossRef] [PubMed]
18. Ogawa, T.; Haseyama, M. Missing intensity interpolation using a kernel PCA-based POCS algorithm and its applications. IEEE Trans. Image Process. 2011, 20, 417–432. [CrossRef] [PubMed]
19. Xi, H.; Xiao, C.; Bian, C. Edge halo reduction for projections onto convex sets super resolution image reconstruction. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), Fremantle, Australia, 3–5 December 2012; IEEE: New York, NY, USA, 2012; pp. 1–7.
20. Liang, F.; Xu, Y.; Zhang, M.; Zhang, L. A POCS Algorithm Based on Text Features for the Reconstruction of Document Images at Super-Resolution. Symmetry 2016, 8, 102. [CrossRef]
21. Cristóbal, G. Multiframe blind deconvolution coupled with frame registration and resolution enhancement. In Blind Image Deconvolution: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2007; Volume 3, p. 317.
22. Nakazawa, S.; Iwasaki, A. Blind super-resolution considering a point spread function of a pushbroom satellite imaging system. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 4443–4446.
23. Xie, W.; Zhang, F.; Chen, H.; Qin, Q. Blind super-resolution image reconstruction based on POCS model. In Proceedings of the 2009 International Conference on Measuring Technology and Mechatronics Automation, Washington, DC, USA, 11–12 April 2009; pp. 437–440.
24. Schulz, R.R.; Stevenson, R.L. Extraction of high-resolution frames from video sequences. IEEE Trans. Image Process. 1996, 5, 996–1011. [CrossRef] [PubMed]
25. Campisi, P.; Egiazarian, K. Blind Image Deconvolution: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2016.
26. Fan, C. Super-Resolution Reconstruction of Three-Line-Scanner Images. Ph.D. Thesis, Central South University, Changsha, China, 2007.
27. Ding, Z.H.; Guo, H.M.; Gao, X.M.; Lan, J.H.; Weng, X.Y.; Man, Z.S.; Zhuang, S.L. The blind restoration of Gaussian blurred image. Opt. Instrum. 2011, 33, 334–338.
28. Capel, D. Image Mosaicing. In Image Mosaicing and Super-Resolution; Springer: London, UK, 2004; pp. 47–79.
29. Reichenbach, S.E.; Park, S.K.; Narayanswamy, R. Characterizing digital image acquisition devices. Opt. Eng. 1991, 30, 170–177. [CrossRef]
30. Yang, J.C.; Wang, Z.W.; Lin, Z.; Cohen, S.; Huang, T. Coupled Dictionary Training for Image Super-Resolution. IEEE Trans. Image Process. 2012, 21, 3467–3478. [CrossRef] [PubMed]
31. Liu, Z.J.; Wang, C.Y.; Luo, C.F. Estimation of CBERS-1 Point Spread Function and image restoration. J. Remote Sens. 2004, 8, 234–238.
32. Choi, T. IKONOS Satellite on Orbit Modulation Transfer Function (MTF) Measurement Using Edge and Pulse Method; Engineering, South Dakota State University: Brookings, SD, USA, 2002; p. 19.
33. Deshpande, A.M.; Patnaik, S. Single image motion deblurring: An accurate PSF estimation and ringing reduction. Int. J. Light Electron Opt. 2014, 125, 3612–3618. [CrossRef]
34. Leger, D.; Duffaut, J.; Robinet, F. MTF measurement using spotlight. In Proceedings of IGARSS '94—Surface and Atmospheric Remote Sensing: Technologies, Data Analysis and Interpretation, International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994; IEEE: New York, NY, USA, 1994; Volume 4, pp. 2010–2012.
35. Fan, C.; Li, G.; Tao, C. Slant edge method for point spread function estimation. Opt. Soc. Am. 2015, 54, 1–5. [CrossRef]
36. Wang, Q.J.; Xu, Y.J.; Yuan, Q.Q.; Wang, R.B.; Shen, H.F.; Wang, Y. Restoration of CBERS-02B Remote Sensing Image Based on Knife-edge PSF Estimation and Regularization Reconstruction Model. In Proceedings of the 2011 International Conference on Intelligent Computation Technology and Automation (ICICTA), Shenzhen, China, 28–29 March 2011; pp. 687–690.
37. Qin, R.; Gong, J.; Fan, C. Multi-frame image super-resolution based on knife-edges. In Proceedings of the IEEE 10th International Conference on Signal Processing, Beijing, China, 24–28 October 2010; IEEE: New York, NY, USA, 2010; pp. 972–975.
38. Qin, R.J.; Gong, J.Y. A robust method of calculating point spread function from knife-edge without angular constraint in remote sensing images. Yaogan Xuebao (J. Remote Sens.) 2011, 15, 895–907.
39. Van Ouwerkerk, J.D. Image super-resolution survey. Image Vision Comput. 2006, 24, 1039–1052. [CrossRef]
40. Zhu, X.; Milanfar, P. Automatic parameter selection for denoising algorithms using a no-reference measure of image content. IEEE Trans. Image Process. 2010, 19, 3116–3132. [PubMed]

© 2017 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
