For infrared imaging systems of high sampling bit width whose output must feed a traditional display device or a real-time processing system with 8-bit data width, this paper presents a new high dynamic range compression and detail enhancement (DRCDDE) algorithm for infrared images. First, a bilateral filter is adopted to separate the original image into two parts: a base component that contains large-scale signal variations, and a detail component that contains high-frequency information. Then, an operator model for DRC with local-contrast preservation is established, along with a newly proposed nonlinear intensity transfer function (ITF), to implement adaptive DRC of the base component. For the detail component, depending on the local statistical characteristics, we set up suitable intensity level extension criteria to enhance low-contrast details and suppress noise. Finally, the results of the two components are recombined with a weighted coefficient. Experimental results on real infrared data, and quantitative comparison with other well-established methods, show the better performance of the proposed algorithm. Furthermore, the technique effectively brings out a dim target while suppressing noise, which is beneficial to image display and target detection. © 2014 Optical Society of America
OCIS codes: (100.0100) Image processing; (100.2980) Image enhancement; (110.0110) Imaging systems; (110.3080) Infrared imaging. http://dx.doi.org/10.1364/AO.53.006013

1. Introduction

High-performance infrared imaging cameras adopt an analog-to-digital converter of 14-bit or larger width to sample the detection signal, which improves both the quantization precision over the large dynamic range of an infrared scene and the detection sensitivity for low-contrast details. But traditional display devices and monitors support only 8-bit data width, and even some image processors, taking real-time processing and other factors into account, need to limit the dynamic range to 256 gray levels. In addition, owing to the limitations of the human visual system, observers can identify only about 128 gray levels (7 bits) in an image [1], and cannot perceive the texture generated by a small temperature difference in the high dynamic range (HDR) image. Therefore, a reasonable compression technique is

required to map the raw data of high bit width to 8-bit width while preserving the low-contrast details as much as possible. This paper addresses the technique of detail enhancement while compressing the HDR of an infrared image, which differs from generic 8-bit image enhancement. Among the conventional dynamic range compression (DRC) and contrast enhancement algorithms, automatic gain control (AGC) and histogram equalization (HE) are the most widely used [2,3]. As a linear mapping, AGC applies the same compression ratio to the whole image: either the texture of small temperature differences merges into adjacent intensity levels and detail is lost while the large structures are displayed reasonably, or the former is adjusted to display well but the latter saturates or is compressed to zero in the 8-bit data. HE is a simple and effective nonlinear mapping, in which the intensity levels


occupying few pixels are merged, while those occupying many pixels are spread farther apart after equalization. However, HE usually suffers from detail loss, noise overenhancement, and a stiff visual effect. To overcome these drawbacks, many improved algorithms based on the local histogram have been proposed, such as adaptive histogram equalization (AHE) and contrast limited adaptive histogram equalization (CLAHE) [4,5]. CLAHE limits the slope of the equalization function, offering more flexibility in choosing the local mapping function and suppressing noise, and also effectively enhances the contrast of large homogeneous areas [6]. Methods based on the platform histogram include platform histogram equalization (PHE) [7], adaptive platform histogram equalization (APHE) [8], and adaptive dual platform histogram equalization (ADPHE) [9]. The histogram extremes can be limited with a platform threshold, which controls the contrast enhancement and achieves better detail preservation and noise suppression. In conclusion, relying only on histogram information, HE-based techniques and their variants lack the flexibility to process signals of small temperature difference in an HDR image, and in particular they treat signal and noise indiscriminately. Moreover, they cannot fundamentally solve the problem of simultaneous HDR compression and low-contrast detail expansion; when a large homogeneous area exists in the image, they produce a blocky visual effect and noise overenhancement. Recently, many advanced methods have been presented in the literature. Fattal et al. [10] proposed a gradient domain HDR compression technique: by attenuating the magnitudes of large gradients, a low dynamic range (LDR) image is obtained by solving a Poisson equation. This method preserves details and effectively avoids the introduction of halo artifacts.
Aiming at application in digital video cameras, Monobe [11] developed a local-contrast range transform (LCRT) operator, based on the assumption that if the local contrast (LC) at every intensity level is unchanged, the visual contrast is also preserved. The author used an approximate knee curve as a tone-mapping function to preserve highlight contrast without lowering the shadow-to-middle-range luminance. But the knee curve compresses only the highlight range above the knee, so it cannot achieve good enhancement over the entire dynamic range. A general form of simultaneous DRC and local-contrast enhancement algorithm (SDRCLCE) was derived by Tsai [12], which is compatible with many intensity transfer functions (ITFs). Combined with a hyperbolic tangent mapping function, it provides adjustable control of the overall lightness and contrast of the output image. But the expression of SDRCLCE is complicated, especially the calculation of the first-order derivative of the mapping function, which limits the practicability of the algorithm. Moreover, the parameters determining the curvature of the ITF must be set manually. Thus, the author developed a fast version, the FDRCLEP technique,


for real-time processing [13]. The method adopts a linearly approximated ratio function associated only with the ITF and the local mean value, which evidently improves the real-time performance; even with a complex ITF, it reduces the computational load effectively. However, because of the linearized proportionality factor and the strong dependence on the local image mean, edges are overenhanced and the capability to eliminate noise interference is poor. Following the Weber curve, Liu et al. designed a local adaptive gamma correction (LAGC) technique [14]. Different from traditional gamma correction, LAGC adaptively updates the gamma value with respect to local regions: the light region above a threshold is compressed, and the dark region below it is extended. However, since the median of the HDR gray scale is used as the threshold of the gamma value, the separation of dark and light regions appears unnatural, and the result of detail enhancement is not satisfying. As a nonlinear smoothing technique that respects strong edges, the bilateral filter (BF) was first introduced by Tomasi and Manduchi in 1998 [15] and has been widely used for denoising, tone management, and so on [16]. Bilateral filtering combines the spatial geometric closeness and the photometric similarity of neighboring pixels, using the gray value differences within the local neighborhood as weighting factors, so as to protect high-frequency information. The method is easy to use because it depends on only two parameters, which indicate the size and contrast of the features to preserve. Fast versions of BF [17,18] further enable real-time processing of large images. Moreover, multiscale decomposition of the original image avoids introducing halo artifacts. Based on these advantages, Branchitta et al. described a new approach to HDR compression and detail enhancement of infrared images [19].
This method separates the original image into a base component and a detail component by BF, after which the two parts are compressed or expanded independently; the output image is finally obtained by amalgamating the two components. In order to eliminate the gradient reversal artifacts around strong edges, Zuo et al. [20] used an adaptive Gaussian filter, whose standard deviation is determined by the local statistical characteristics of the image, to smooth the bilateral filtering result. But none of these methods takes into account the problem of noise overenhancement or provides an effective DRC technique.

2. Proposed Dynamic Range Compression and Detail Enhancement Algorithm

Based on the separate processing of large scale and small scale within an HDR image, this paper proposes a novel algorithm called dynamic range compression and detail enhancement (DRCDDE). Aiming at the engineering application of high-performance thermal cameras, it performs the compression of a 14-bit HDR infrared image to an 8-bit LDR image for display.

A. Principle of the Proposed DRCDDE

While the thermal imager stares at an actual infrared scene, the gray value of each pixel shows a different variation curve over time; that is, different types of pixels in an infrared scene have individual characteristics in the time and frequency domains. These features are the basis for image detail separation. A real infrared scene containing a moving target is shown in Fig. 1, together with a time-frequency analysis of several typical pixels in the sequence. In Fig. 1, a and b are the real target and the edge of a cloud, while c and d are the homogeneous sky background and ground clutter, respectively. From the time domain characteristic curves of the four kinds of pixels, illustrated in Fig. 2(a), it can be seen that the pixel gray values in the sky and clutter backgrounds fluctuate slightly about a constant, the cloud edge pixel presents a larger fluctuation because the cloud moves slowly, and the real target pixel shows a significant spike in its time domain curve. For further analysis, the variation of these pixels in the frequency domain

is shown in Fig. 2(b). The following conclusions can be drawn: the frequency spectrum of the pixels in the sky and clutter backgrounds has a uniform distribution from low to high frequency, which is mainly produced by noise; compared with the cloud edge pixel, the frequency spectrum of the real target has a larger amplitude and bandwidth in the mid-frequency range; and besides the greater energy at mid-frequency, the spectral amplitude of the real target at both low and high frequency is still obvious. According to the preceding analysis, a traditional linear spatial filter cannot effectively distinguish the various changes in a large dynamic range. We need a separation method for the detail and base components that is capable of following rapid variations of the intensity levels while avoiding the introduction of halo artifacts, flexibly controlling the detail scale without affecting strong edges, and effectively protecting the high-frequency information. The BF, as a weighted Gaussian smoothing filter [15], meets this demand. The proposed DRCDDE algorithm is illustrated in Fig. 3. Using a BF, the HDR image is separated into a base component including large-scale variations and high-contrast textures, and a detail component including small-scale variations and low-contrast textures. The two are then processed independently: the base component undergoes HDR compression with local-contrast preservation by a new adaptive ITF, and the detail component is enhanced by a newly designed discrimination criterion between small-scale texture and noise. The DRCDDE output is finally obtained by recombining and rearranging the two components. The core idea of BF is to replace each pixel by a weighted average of its neighborhood, where the weight depends on both the spatial distance and the gray value difference of the neighbors. BF is defined by

f_{BF}(x,y) = \frac{1}{W(x,y)} \sum_{(x_l, y_l) \in S_{x,y}} G_{\sigma_s}(x - x_l, y - y_l) \cdot G_{\sigma_r}(f_{in}(x,y) - f_{in}(x_l, y_l)) \cdot f_{in}(x_l, y_l). \quad (1)

Fig. 1. Four kinds of typical pixels in the real infrared scene.


Fig. 2. Characteristic curves for different kinds of pixels in the scene sequence: (a) the gray value characteristic in the time domain and (b) the frequency spectrum of the discrete Fourier transform.



Fig. 3. Framework of the proposed DRCDDE algorithm.

As a normalization factor, W(x,y) is described as follows:

W(x,y) = \sum_{(x_l, y_l) \in S_{x,y}} G_{\sigma_s}(x - x_l, y - y_l) \cdot G_{\sigma_r}(f_{in}(x,y) - f_{in}(x_l, y_l)), \quad (2)

where f_in(x,y) denotes the original input image, f_BF(x,y) is the result of bilateral filtering, and S_{x,y} represents the neighborhood of pixel (x,y). The spatial Gaussian low-pass filter G_{σ_s}, with standard deviation σ_s, weights nearby pixels by spatial distance, while the range Gaussian filter G_{σ_r}, with standard deviation σ_r, weights them by gray value difference. Bilateral filtering is thus controlled by two parameters: the spatial parameter σ_s determines the degree of smoothing, increasing as the features to be smoothed grow larger, and the range parameter σ_r determines the minimum amplitude of preserved variations, decreasing with the amplitude of the small-scale component to be removed. Through the combined weight of the spatial Gaussian and range Gaussian, only nearby and similar pixels contribute to the smoothing of the central pixel. In order to prevent detector circuit noise from leaking into the detail image and being further enhanced, this work preprocesses the original image with an adaptive median filter before the BF:

f_M(x,y) = M[f_{in}(x,y)], \quad (3)

where M[I] denotes the operation of adaptive median filtering on I. The preprocessing eliminates salt-and-pepper or impulse noise and also smooths nonimpulse noise in the original image. The mollified image suffers less of the distortion that thickens or thins edges, which reduces the influence of bad pixels and noise on the enhanced result. We then obtain the base image of the HDR data, denoted by f_b(x,y), and further the detail image as

f_d(x,y) = f_M(x,y) - f_b(x,y). \quad (4)
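As a concrete sketch of Eqs. (1)–(4), the NumPy code below implements a brute-force bilateral filter and the base/detail separation. It is an illustration rather than the paper's implementation: the fixed 3×3 median window stands in for the adaptive median filter of Eq. (3), and the window radius and parameter defaults are illustrative choices.

```python
import numpy as np
from scipy.ndimage import median_filter

def bilateral_filter(f_in, sigma_s=3.0, sigma_r=0.5, radius=4):
    """Brute-force bilateral filter, Eqs. (1)-(2)."""
    H, W = f_in.shape
    out = np.zeros((H, W), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))  # spatial kernel
    pad = np.pad(f_in.astype(np.float64), radius, mode="reflect")
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel around the center pixel's gray value
            g_r = np.exp(-(patch - f_in[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            w = g_s * g_r                               # combined weight
            out[i, j] = np.sum(w * patch) / np.sum(w)   # Eq. (1) with W of Eq. (2)
    return out

def decompose(f_in, sigma_s=3.0, sigma_r=0.5):
    """Eqs. (3)-(4): median prefilter, then base and detail components."""
    f_m = median_filter(f_in.astype(np.float64), size=3)  # stand-in for Eq. (3)
    f_b = bilateral_filter(f_m, sigma_s, sigma_r)         # base image
    f_d = f_m - f_b                                       # Eq. (4): detail image
    return f_b, f_d
```

By construction, f_b + f_d reproduces the median-filtered input exactly, and a strong step edge passes through the bilateral filter nearly unchanged when the step height is large relative to σ_r.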

As an example, the result of each step of applying the DRCDDE technique to Fig. 1 is illustrated in Fig. 4. Once the base component and detail component have been separated from the HDR infrared image, they are processed independently; the specific discussion is presented in Sections 2.B and 2.C, respectively. The processed base and detail components are denoted by g_hdr(x,y) and g_dde(x,y), which are recombined and quantized to obtain the 8-bit output image of DRCDDE. To take full advantage of the entire gray scale, the active intensity levels are extended linearly before the fusion:

G(x,y) = \frac{g(x,y) - \min g(x,y)}{\max g(x,y) - \min g(x,y)}. \quad (5)

With Eq. (5), the extended results G_hdr(x,y) and G_dde(x,y) corresponding to g_hdr(x,y) and g_dde(x,y), respectively, are obtained. The final dynamic range partition and fusion formula is

G_{DRCDDE}(x,y) = (1 - p) \cdot G_{hdr}(x,y) + p \cdot G_{dde}(x,y), \quad (6)

where the partition coefficient p (0 < p < 1) controls the number of digital levels assigned to the base and detail components. In the M-bit output (M = 8), the gray levels allocated to the detail and base components are p · 2^M and (1 - p) · 2^M, respectively.

B. High Dynamic Range Compression with Local-Contrast Preservation
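The stretch-and-fuse step of Eqs. (5)–(6) is a few lines of NumPy; the value p = 0.2 below is purely an illustrative default, not a setting taken from the paper.

```python
import numpy as np

def stretch(g):
    """Eq. (5): linear extension of the active intensity levels to [0, 1]."""
    g = g.astype(np.float64)
    return (g - g.min()) / (g.max() - g.min())

def fuse(g_hdr, g_dde, p=0.2, M=8):
    """Eq. (6): weighted recombination, then quantization to M bits."""
    G = (1.0 - p) * stretch(g_hdr) + p * stretch(g_dde)
    return np.round(G * (2 ** M - 1)).astype(np.uint8)
```

When both inputs share their extremes, the output spans the full 8-bit gray scale, which is exactly the "take full advantage of the entire gray scale" property above.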

The base image corresponds to the large-scale intensity variations of the raw data, which must be mapped to 8-bit data while preserving local image contrast, in order to improve the visual quality. Maintaining LC while compressing thus becomes the key issue. Investigation finds that the LCRT operator introduced by Monobe [11] automatically enhances LC and keeps it close to the original at any intensity level. Using this operator, the author obtained fine experimental results with HDR color images captured by digital video cameras. However, when we test it on real infrared images, the results are not satisfying, as will be shown in the following sections. Moreover, its complicated form, and in particular the requirement to calculate the first-order derivative of the ITF, greatly limits the operator's application. This work improves the derivation in [11] and designs a fast local-contrast preserving (FLCP) operator applicable to infrared images, which accomplishes local-contrast preserving HDR compression concisely and effectively. We first introduce the concept of Weber contrast [21] and give the mathematical condition of contrast preservation:

\frac{g_{out}(x,y) - g_{avg}(x,y)}{g_{avg}(x,y)} = \frac{f_{in}(x,y) - f_{avg}(x,y)}{f_{avg}(x,y)}, \quad (7)

where f_in(x,y) denotes the original gray value and f_avg(x,y) its neighborhood average, while g_out(x,y) denotes the output gray value and g_avg(x,y) its neighborhood average at each pixel (x,y). To facilitate the description, all numerical operations are transformed to the logarithmic domain; for instance, F_in(x,y) = log f_in(x,y). Suppose T(·): R → R denotes any monotonically increasing and continuously differentiable ITF; we then have

G_T(x,y) = T(F_{in}(x,y)). \quad (8)

Here, Ref. [11] obtained an approximate estimate of G_avg(x,y) through the first-order Taylor expansion of T(·) near F_in(x,y), based on the condition in Eq. (7). That is,

T(F_{avg}(x,y)) \cong T(F_{in}(x,y)) + \frac{dT(F_{in}(x,y))}{dF_{in}(x,y)} \left( F_{avg}(x,y) - F_{in}(x,y) \right). \quad (9)

The LCRT operator was further defined as

G_{out}(x,y) = T(F_{in}(x,y)) + \left( 1 - \frac{dT(F_{in}(x,y))}{dF_{in}(x,y)} \right) \left( F_{in}(x,y) - F_{avg}(x,y) \right). \quad (10)

However, analysis finds that using the first-order Taylor expansion to estimate T(F_avg(x,y)) introduces a derivative into the operator. As a consequence, it not only increases the computational load, but also strengthens the constraints on the ITF, which must be continuous and differentiable. Although searching for such a transfer function is possible in theory, it limits the application flexibility. For a digital image with a discrete gray scale, calculating the differential result dT(F_in(x,y))/dF_in(x,y) is very difficult (consider, for example, differentiating an image convolution result), and it can only be approximately estimated in most cases. Therefore, we modify Eq. (9), substituting a constant α for the differential coefficient:

T(F_{avg}(x,y)) \cong T(F_{in}(x,y)) + \alpha \cdot \left( F_{avg}(x,y) - F_{in}(x,y) \right). \quad (11)

Although this is a rough estimate, and the accumulation of approximation error may affect the ultimate performance of the operator, useful results can still be obtained for discrete image derivation. Moreover, the computation of the operator is greatly simplified, so the trade-off is acceptable. The FLCP operator can then be expressed as

G_{out}(x,y) = T(F_{in}(x,y)) + (1 - \alpha) \cdot \left( F_{in}(x,y) - F_{avg}(x,y) \right). \quad (12)

Rewritten in the gray domain, this equation becomes

g_{out}(x,y) = T(f_{in}(x,y)) \cdot \left( \frac{f_{in}(x,y)}{f_{avg}(x,y)} \right)^{1 - \alpha}. \quad (13)

For flexible control of the contrast increase, we simplify further. Let β = 1 − α with β ∈ [0, ∞); β then becomes a parameter controlling the image enhancement effect: the enhancement is weak for β < 1 and strong for β ≥ 1. The FLCP operator model designed in this paper is given in its final form as

g_{out}(x,y) = T(f_{in}(x,y)) \cdot \left( \frac{f_{in}(x,y)}{f_{avg}(x,y)} \right)^{\beta}. \quad (14)
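Equation (14) is compact enough to state in a few lines. The sketch below uses a Gaussian-filtered local average for f_avg and a square-root mapping purely as an example of a monotonically increasing T(·); both choices, and the parameter defaults, are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flcp(f_in, itf=np.sqrt, beta=0.8, sigma=5.0, eps=1e-6):
    """FLCP operator, Eq. (14): g_out = T(f_in) * (f_in / f_avg)**beta.

    f_in is assumed normalized to (0, 1]; itf may be any monotonically
    increasing ITF (np.sqrt is just an example)."""
    f = np.clip(f_in.astype(np.float64), eps, 1.0)
    f_avg = gaussian_filter(f, sigma)                # local average
    return itf(f) * (f / (f_avg + eps)) ** beta      # Eq. (14)
```

On a flat region the ratio term is ~1, so the operator reduces to the plain ITF; on edges the ratio term boosts or attenuates the pixel relative to its neighborhood, which is exactly the local-contrast preservation of Eq. (7).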

The operator is derived from Eq. (7), so it fulfills the requirement of preserving LC. Compared with LCRT, it is applicable to any monotonically increasing ITF, giving more flexibility; more importantly, it greatly reduces the computational load and improves real-time performance. Based on the designed FLCP operator, an appropriate ITF is needed to properly perform HDR compression of the base image. Taking into account the features of real data acquired by an HDR detector, the proposed DRCDDE algorithm requires a DRC technique that obtains high-quality 8-bit output and is capable of the following:

Fig. 4. Results at different steps of DRCDDE for the infrared scene in Fig. 1: (a) base component, (b) detail component, and (c) fusion output.


① Preserving high visual contrast over the entire gray scale, along with the detail features in both light and dark regions.
② Transferring the low intensity levels of large structures toward the high end of the gray scale, so as to enhance texture in the dark background.
③ Effectively preventing intensity levels over a wide range from saturating while enhancing the visual contrast in light regions.
④ Avoiding gray value reversal, i.e., providing a monotonically increasing mapping curve.

Considering these demands, we present a new nonlinear ITF that uses an adaptive compression ratio according to the local mean of each pixel. Gamma correction is a simple nonlinear intensity transform that implements intensity level expansion (γ < 1) or compression (γ > 1) flexibly. Based on gamma correction, the proposed adaptive γ is described as

\gamma_b = \exp\left( - \frac{f_b(x,y) - \mu(x,y)}{\mu(x,y)} \right), \quad (15)

where f_b(x,y) denotes the input base image and μ(x,y) denotes the neighborhood average at pixel (x,y). As shown in Figs. 5 and 6, the variation speed of the γ value depends on the local statistical mean μ. For a small mean, γ drops quickly to a low level, corresponding to the convex curve of large curvature in Fig. 5; as a result, the low intensity levels are enhanced greatly. Conversely, for a large mean, γ decreases slowly to a higher level, corresponding to the concave curve of small curvature; the high intensity levels are then compressed greatly. It can also be noticed from Fig. 5 that, for a fixed local mean μ_l, the γ value decreases as the pixel intensity increases, which ensures that the gamma-correction mapping function is monotonically increasing when the input image is normalized to the interval [0, 1].

Testing with real data, we find that when a large concentration of pixels lies at the low end of the gray scale in the normalized histogram, the adaptive γ described in Eq. (15) cannot achieve satisfying enhancement results. This is because, in the lower intensity regions, the natural exponent may produce a large γ value, even bigger than 1, as can be seen from Fig. 5; compressing those levels through gamma correction then leads to a darker output. So Eq. (15) is modified:

\gamma_b = \begin{cases} \exp\left( - \frac{f_b(x,y) - \mu(x,y)}{\mu(x,y)} \right), & \text{hist} \le th_1 \\ \frac{1}{k} \exp\left( - \frac{f_b(x,y) - \mu(x,y)}{\mu(x,y)} \right), & \text{hist} > th_1 \end{cases} \quad (16)

where the normalization factor k is expressed by

k = \exp\left( - \min \frac{f_b(x,y) - \mu(x,y)}{\mu(x,y)} \right). \quad (17)

The variable hist is described as follows: take the intensity level r_k that has the maximum probability in the normalized histogram as the center and l as the width of the interval [r_k − l/2, r_k + l/2]; the probability sum of the intensity levels within that interval is then defined as hist. The threshold th_1 represents the pixel concentration in the interval. The normalized adaptive γ curves based on local image statistical characteristics are shown in Fig. 6. With the newly designed adaptive γ value, our DRCDDE method accomplishes satisfying DRC for various infrared scenes, including effective processing of images with a large concentration of pixels in the histogram. The adaptive ITF is given by

g_T = T(f_b(x,y)) = f_b(x,y)^{\gamma_b}. \quad (18)

Fig. 5. Gamma value for DRC: (a) conventional gamma correction curves and (b) adaptive gamma value for DRC.

Fig. 6. Normalized adaptive gamma value.

Applying the aforementioned FLCP operator to the newly designed ITF and substituting Eq. (18) into Eq. (14) yields the formula of local-contrast preserving DRC:

g_{hdr}(x,y) = f_b(x,y)^{\frac{1}{k} \exp\left( - \frac{f_b(x,y) - \mu(x,y)}{\mu(x,y)} \right)} \cdot \left( \frac{f_b(x,y)}{f_{avg}(x,y)} \right)^{\beta}, \quad (19)

where f_b(x,y) denotes the input base image and β controls the degree of contrast enhancement. The local average f_avg(x,y) is calculated by convolution with the spatial Gaussian filter. With this expression, the proposed DRCDDE approach provides the desired large DRC with local-contrast preservation for the base image. Figure 7 illustrates the intensity level mapping curves for two fixed parameters. As shown, the curvature of the designed mapping curve decreases with increasing local mean μ, which means that the dark region with low gray values is enhanced markedly while the light region with high gray values basically preserves its original contrast; as a result, large numbers of pixels are prevented from saturating during the processing. In addition, comparing the two plots, we see that as β increases, the slope of the mapping curve for pixels in the dark region becomes larger, yielding weaker DRC; at the same time, the slope for pixels in the light region becomes smaller, yielding more dynamic range preservation. That is to say, the overall enhancement effect depends on only one parameter, β. Compared with other algorithms, the advantages of the proposed intensity level mapping curves are summarized as follows: different from [12,13], the ITF adjusts adaptively for each pixel based on local statistical characteristics without any parameter; avoiding threshold constraints, DRC with contrast preservation operates over the whole gray scale and reduces the number of enhancement control parameters; and the expression is simple to compute and easy to realize in engineering.
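A compact sketch of the base-image DRC of Eq. (19), built from the adaptive gamma of Eqs. (16)–(17), is given below. For brevity it always uses the k-normalized branch of Eq. (16); the 7×7 mean window, the Gaussian width, and β = 0.8 are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def drc_base(f_b, beta=0.8, sigma=5.0, eps=1e-6):
    """Local-contrast preserving DRC of the base image, Eq. (19).

    f_b is assumed normalized to (0, 1]. Only the hist > th1 branch of
    Eq. (16) is implemented here, as a simplification."""
    f = np.clip(f_b.astype(np.float64), eps, 1.0)
    mu = uniform_filter(f, size=7)             # local mean mu(x, y)
    ratio = (f - mu) / (mu + eps)
    k = np.exp(-ratio.min())                   # Eq. (17): normalization factor
    gamma_b = np.exp(-ratio) / k               # Eq. (16), hist > th1 branch
    f_avg = gaussian_filter(f, sigma)          # Gaussian local average
    return f ** gamma_b * (f / (f_avg + eps)) ** beta   # Eq. (19)
```

On a flat base image gamma_b and the contrast term both reduce to 1, so the input passes through essentially unchanged, which is consistent with the mapping curves of Fig. 7 at μ equal to the input level.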

C. Detail Enhancement

The detail image extracted by the BF corresponds to small-scale variations caused by weak radiation differences. This component requires expansion so as to enhance details. Analyzing infrared data actually acquired by the detector, we found that the nonuniformity correction of the detector is imperfect, which usually leaves tiny gray value differences in the local neighborhood. Weighted by the combination of spatial distance and gray range in the bilateral filtering, these can leak into the detail image together with the noise and the real target. The problem is particularly serious in images with a flat background. Figure 8 illustrates the problem using 14-bit real infrared data. The HDR original image, acquired by a mid-wavelength infrared (MWIR) thermal imager, is shown in Fig. 8(a), where the dim target of a real airplane is marked with the rectangle. Figure 8(b) shows the residual image extracted by BF, with standard deviations of the spatial Gaussian and the range Gaussian of σ_s = 3 and σ_r = 0.5, respectively. The enhancement result of the detail image using a uniform gamma correction value γ_d = 0.4 is given in Fig. 8(c). It is seen that a lot of residual background and random noise (for convenience, the two go by the general name of noise points in the following sections) are enhanced together with the real target, which seriously degrades the visual quality of the subsequent fusion result.

Fig. 7. Intensity level mapping curves: (a) β = 0.25 and (b) β = 1.25.

Fig. 8. Example of detail enhancement comparison on a flat HDR image: (a) AGC image, (b) detail image of BF, (c) expansion by gamma correction, and (d) expansion by the proposed method.

Therefore, we design a discriminant criterion and an adaptive gray-level expansion formula according to the difference in local statistical characteristics between noise points and a real dim target. First, we define the concept of a dim target, which has two attributes: "weak" indicates poor gray value contrast between the target and its surrounding background, and "small" implies a lack of shape and texture information, with only a few pixels in the image. This article defines a target with signal-to-noise ratio (SNR) less than 4 and image size less than 6 × 6 as a dim target. In an actual infrared system, the nonideal optical imaging system and the diffraction effect of atmospheric transmission cause diffused imaging of the target, which can be treated as equivalent to a distant point source. The shape of the target can generally be approximated by a 2D discrete Gaussian pulse [22]:

f_T(x,y) = \tau \cdot \exp\left( - \frac{1}{2} \left[ \left( \frac{x}{\delta_x} \right)^2 + \left( \frac{y}{\delta_y} \right)^2 \right] \right), \quad (20)
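Equation (20) is straightforward to generate synthetically, which is convenient for testing detail-enhancement rules. In the sketch below the 7×7 support and the 1.5-pixel spreads are illustrative values, not parameters taken from the paper.

```python
import numpy as np

def dim_target(size=7, tau=1.0, dx=1.5, dy=1.5):
    """Eq. (20): 2D discrete Gaussian pulse modeling a dim point target.

    tau is the peak gray value; dx, dy are the horizontal and vertical
    size parameters delta_x, delta_y."""
    c = (size - 1) / 2.0                       # target center
    y, x = np.mgrid[0:size, 0:size]
    return tau * np.exp(-0.5 * (((x - c) / dx) ** 2 + ((y - c) / dy) ** 2))
```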

where f_T(x,y) is the gray value of the small target with peak value τ; x and y denote the distance from the target center, while δ_x and δ_y denote the size parameters in the horizontal and vertical directions, respectively. On the other hand, noise and local gray residuals present high or low gray value singularities between a single pixel and its surrounding background. Based on this analysis of the local characteristics of noise points and the small target, local rules for noise suppression can be set up. For the input detail image f_d(x,y), the global average μ and variance σ are calculated first. Taking any pixel f_{x,y} as the center of a 3 × 3 neighborhood, the local average is μ^l_{x,y} and the local variance is σ^l_{x,y}. Considering the large variance between different backgrounds in real infrared scenes, the concept of variance-weighted entropy is introduced to estimate the overall gray-level distribution:

\text{wenp} = - \sum_{i=1}^{r} (i - \bar{i})^2 p(i) \log p(i), \quad (21)
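The variance-weighted entropy of Eq. (21) can be computed directly from a normalized histogram. In the sketch below, the input is assumed normalized to [0, 1], and quantizing to r = 256 bins to form the histogram is our assumption about how the normalized histogram is built.

```python
import numpy as np

def wenp(img, r=256):
    """Eq. (21): variance-weighted entropy of the gray-level histogram.

    img is assumed normalized to [0, 1]; quantization to r levels is an
    illustrative choice."""
    levels = np.clip((img * (r - 1)).astype(int), 0, r - 1)
    p = np.bincount(levels.ravel(), minlength=r) / levels.size
    i = np.arange(r, dtype=np.float64)
    i_bar = np.sum(i * p)                    # mean gray level, i-bar
    nz = p > 0                               # skip empty bins (log 0)
    return -np.sum((i[nz] - i_bar) ** 2 * p[nz] * np.log(p[nz]))
```

A perfectly flat image yields zero weighted entropy, while an image whose histogram spreads across well-separated levels yields a positive value, matching the "flat or not" decision described above.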

where r ∈ N with 1 ≤ r ≤ 256, and p(i) is the probability of gray level i. The variance-weighted entropy not only depicts the spread of the gray levels, but also reflects the intensity of their fluctuation, and thus decides whether the image is flat or not. Local rule 1: suppose the gray value of pixel f_{x,y} is less than the threshold th_2. (The global threshold is th_2 = μ + a · σ, with a ∈ {1, 3}: when wenp < 1/(σ − μ), we choose a = 3; otherwise a = 1.) If there are at least five pixels with gray values less than the local

average μlx;y within its neighboring eight pixels, the pixel f x;y can be identified as the noise point and requires suppression. Otherwise, its property is unknown and needs to enhance. Local rule 2: suppose the gray value of pixel f x;y is not less than the threshold th2. If there are at least four pixels with gray values less than local average μlx;y within its neighboring eight pixels, the pixel f x;y can be identified as the noise point and requires suppression. Otherwise, the pixel goes to local rule 3 for further discrimination. Local rule 3: for the pixel f x;y that went into this rule, if its neighborhood meets μlx;y ∕σ lx;y > th3 (setting of threshold th3 is according to the global mean μ and variance σ ratio), the pixel can be considered as belonging to the flat area with high luminance. To prevent this area from overenhancement, its expansion is limited to a constant. Otherwise, the pixel implements the process of detail enhancement within the DRCDDE algorithm. Based on the above rules, the pixels representing residual background and random noise are identified and suppressed quickly. Remaining pixels f 0x;y that represent the details in the original HDR data accomplish adaptive intensity level expansion and enhancement still based on gamma correction in the proposed DRCDDE approach: gdde f d x; yγd :

(22)
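As a concreteness check, the variance-weighted entropy of Eq. (21) and local rules 1–2 can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: the function names, the 256-bin histogram, the guard for σ ≤ μ, and the loop-based neighborhood scan are our own illustrative choices, and rule 3 is omitted.

```python
import numpy as np

def weighted_entropy(img, bins=256):
    """Variance-weighted entropy, Eq. (21): w_enp = -sum_i (i - ibar)^2 p_i log p_i."""
    hist, _ = np.histogram(img, bins=bins, range=(img.min(), img.max() + 1e-9))
    p = hist / hist.sum()
    levels = np.arange(bins, dtype=float)
    ibar = (levels * p).sum()                      # mean gray level
    nz = p > 0                                     # skip empty bins to avoid log(0)
    return -np.sum((levels[nz] - ibar) ** 2 * p[nz] * np.log(p[nz]))

def noise_mask(fd):
    """Flag noise points of a detail image by local rules 1 and 2 (rule 3 omitted)."""
    mu, sigma = fd.mean(), fd.std()
    wenp = weighted_entropy(fd)
    a = 3 if sigma > mu and wenp < 1.0 / (sigma - mu) else 1
    th2 = mu + a * sigma                           # global threshold of rules 1-2
    noise = np.zeros(fd.shape, dtype=bool)
    for x in range(1, fd.shape[0] - 1):
        for y in range(1, fd.shape[1] - 1):
            nb = fd[x - 1:x + 2, y - 1:y + 2]      # 3x3 neighborhood
            mu_l = nb.mean()                       # local average
            below = (nb < mu_l).sum() - (fd[x, y] < mu_l)  # count the 8 neighbors only
            # rule 1 (pixel below th2) needs >= 5 low neighbors; rule 2 needs >= 4
            noise[x, y] = below >= (5 if fd[x, y] < th2 else 4)
    return noise
```

A single bright spike in an otherwise flat patch is caught by rule 2, while its dark neighbors (whose local mean is pulled up by the spike) fall under rule 1.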

Different from the aforementioned γ_b value used for the base-image DRC, the key idea in setting up γ_d for expanding the small intensity-level differences of the detail image is as follows. ① Small targets or edges occupy a certain area and produce a relatively flat neighborhood, which yields a large local mean μ_l(x,y) and a small local variance σ_l(x,y) for the pixel f′(x,y). Therefore, the gray-value expansion is in direct relation to μ_l(x,y) and in inverse relation to σ_l(x,y). ② When the overall distribution of intensity levels in the original image is flat or rarely fluctuant, BF smoothing retains many noise points in the detail image f_d(x, y). In this case, the real small target should be enhanced as strongly as possible. In summary, the adaptive γ_d can be expressed as

γ_d = exp[−Q · (μ_l(x,y)/σ_l(x,y))²],   w_enp < 1/(σ − μ),
γ_d = exp[−0.5 · μ_l(x,y)/σ_l(x,y)],    w_enp ≥ 1/(σ − μ),   (23)

whose value is plotted in Fig. 9. The coefficient Q is introduced because the small local mean μ_l(x,y) of the input f_d(x, y), and the resulting near-zero value of the squared term, require adjusting the range of the γ_d value (actual setting Q = 20). This is illustrated in Fig. 10, which shows an HDR infrared image [see Fig. 10(a)] with abundant scene information. The extracted detail image [see Fig. 10(b)] contains many edge components, where the real airplane target is marked with the rectangle. The result of plain gamma correction, given in Fig. 10(c), clearly shows that many noise points were enhanced together with the detail structures. The result of the proposed technique [see Fig. 10(d)] not only increases edge contrast but also enhances the real dim target while suppressing noise points effectively.

D. Implementation Remarks: the Parameter Settings

The performance of DRCDDE depends to a certain extent on the parameter settings; the most important of them are summarized as follows. ① Two standard deviations σ_s and σ_r in BF. In practice, the spatial-domain filter G_σs uses a Gaussian low-pass filter, and the intensity-domain weighting G_σr uses a Gaussian function. Taking the computational cost into account, the standard deviation σ_s cannot be too large. The range parameter σ_r is chosen as [19]

σ_r = [max f_in(x, y) − min f_in(x, y)] / Q,   (24)
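Under the same notation, the two adaptive parameters γ_d of Eq. (23) and σ_r of Eq. (24) reduce to a few lines. This is a hedged sketch: scalar inputs, the function names, and the guard for σ ≤ μ are our assumptions.

```python
import numpy as np

def gamma_d(mu_l, sigma_l, wenp, mu, sigma, Q=20.0):
    """Adaptive detail exponent, Eq. (23); the text sets Q = 20 (15 < Q < 20 reported)."""
    if sigma > mu and wenp < 1.0 / (sigma - mu):   # flat, rarely fluctuant scene
        return np.exp(-Q * (mu_l / sigma_l) ** 2)
    return np.exp(-0.5 * mu_l / sigma_l)           # fluctuant scene

def sigma_r(f_in, Q=20.0):
    """Range parameter of the bilateral filter, Eq. (24)."""
    return (f_in.max() - f_in.min()) / Q
```

Both branches give γ_d in (0, 1], so raising the normalized detail image to γ_d expands small intensity differences; a flatter neighborhood (large μ_l/σ_l) drives γ_d further from 1 and expands more strongly.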


Fig. 9. Gamma value for DDE: (a) mapping curves for the flat background and (b) mapping curves for the fluctuant background.

10 September 2014 / Vol. 53, No. 26 / APPLIED OPTICS


Fig. 10. Example of detail enhancement comparison on a fluctuant HDR image: (a) AGC image, (b) detail component of BF, (c) expansion by gamma correction, and (d) expansion by the proposed method.
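The full chain that Fig. 10 walks through (BF separation, base compression, detail expansion, weighted recombination) can be sketched end to end. This is a toy illustration, not the authors' implementation: it uses fixed exponents instead of the adaptive γ_b and γ_d, a brute-force bilateral filter, and a simple weighted sum for the partition coefficient p.

```python
import numpy as np

def bilateral(img, sigma_s=3.0, sigma_r=0.1, radius=4):
    """Brute-force bilateral filter (fine for small sketches, O(N * radius^2))."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gs = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))   # spatial kernel
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gr = np.exp(-(win - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))  # range kernel
            w = gs * gr
            out[i, j] = (w * win).sum() / w.sum()
    return out

def drcdde_sketch(img, p=0.2, gamma_b=0.5, gamma_d=0.7):
    """Toy DRCDDE pipeline: separate, compress base, expand detail, recombine."""
    f = (img - img.min()) / (img.max() - img.min() + 1e-12)    # normalize HDR input
    base = bilateral(f)                                        # large-scale component
    detail = f - base                                          # small-scale component
    base_out = base ** gamma_b                                 # base DRC
    det = (detail - detail.min()) / (detail.max() - detail.min() + 1e-12)
    det_out = det ** gamma_d                                   # detail expansion
    out = (1.0 - p) * base_out + p * det_out                   # weighted recombination
    return np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)   # 8-bit display range
```

Note that a constant image passes through the bilateral filter unchanged, so the extracted detail component is identically zero there, as expected from the base/detail decomposition.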

The coefficient Q in Eq. (24) lies in the range 15 < Q < 20 in our experiments. ② Parameter β in HDR compression. The parameter β controls contrast preservation in HDR compression. The value of β must strike a compromise between local-contrast enhancement and global visual effect: β < 1 means weaker enhancement and β ≥ 1 means stronger enhancement. For example, a value of β = 0.25 is recommended for the simulated image shown in Fig. 11, because the contrast between the two sides of the edge is already obvious, whereas a choice of β = 1.25 yields good local-contrast enhancement for the real images shown in Figs. 12–15. ③ Partition coefficient p in dynamic-range assignment. The parameter p (0 < p < 1) controls the partition of the dynamic range assigned to the detail image and the base image in the final output. The value of p must be chosen as a trade-off between detail enhancement and overall noise suppression. Increasing p causes more obvious detail enhancement but leaves less base component, meaning more edge information but a large dark background area in the result. A value in the range 0.1 ≤ p ≤ 0.4 fits our requirements best.

Fig. 11. Experimental results of simulation drone (β = 0.25, p = 0.2): (a) original LDR image, (b) HE, (c) CLAHE, (d) BF&DRP, (e) SDRCLCE, and (f) proposed DRCDDE.

3. Experimental Results

To validate the performance of DRCDDE, we experiment with HDR data acquired by a newer-generation MWIR thermal imager, whose main technical specifications are listed in Table 1. Figures 12–15 represent different infrared scenes, containing surface or sky backgrounds and small- or large-scale targets, respectively. The

Fig. 12. Experimental results of dim target with homogeneous background (β = 1.25, p = 0.2): (a) AGC, (b) HE, (c) CLAHE, (d) BF&DRP, (e) SDRCLCE, and (f) the proposed DRCDDE.

Fig. 13. Experimental results of dim target with complex background (β = 1.25, p = 0.1): (a) AGC, (b) HE, (c) CLAHE, (d) BF&DRP, (e) SDRCLCE, and (f) the proposed DRCDDE.

default parameter settings for the experiments are as follows: threshold of pixel concentration th1 = 0.6, threshold for avoiding over-enhancement of light areas th3 = 2.0, and BF spatial-Gaussian standard deviation σ_s = 3. The proposed algorithm is compared with several well-established methods, including linear AGC, HE, CLAHE, BF&DRP [19], and SDRCLCE [12]. These five techniques are typical representatives of the most effective DRC and enhancement methods.

A. Visual Effect Comparison with Other Methods

The basic strategy of the DRCDDE algorithm is to separate the low-contrast detail from the original HDR data and then perform DRC or detail enhancement independently. To illustrate this in theory, Fig. 11 simulates a frame of an 8-bit infrared image containing five theoretical drones. Each drone lies in a different temperature background, with a difference of only one to two intensity levels from that background. As shown in Fig. 11(a), all drones are invisible in the original image. Figure 11(b) gives the result of the HE technique, where only the center drone, within the major dynamic range, emerges, while the other four remain indiscernible within the minor dynamic range. Based on local HE, CLAHE is not able to distinguish such a small temperature difference, making all of the

Fig. 14. Experimental results of target with complex background (β = 1.25, p = 0.1): (a) AGC, (b) HE, (c) CLAHE, (d) BF&DRP, (e) SDRCLCE, and (f) the proposed DRCDDE.

drones unseen in Fig. 11(c). It can be seen in Fig. 11(e) that the SDRCLCE approach also fails. But the techniques based on BF separation, such as BF&DRP and the proposed algorithm, both successfully display all five drones. Comparing the enhancement result of Fig. 11(d) with Fig. 11(f), the DRCDDE technique yields better performance in improving the details' visibility. Next, we use real 14-bit HDR infrared data for testing. Figure 12 represents an infrared scene with a dim airplane against a sky background, which has weak background fluctuation and a uniform distribution of intensity levels, while the small target has very low contrast. The linear extension by AGC does not improve the perceptibility of the details, as can be observed in Fig. 12(a). Using HE, Fig. 12(b) achieves overall visual-quality enhancement, but the perceptibility of the small details does not increase. CLAHE enhances local image contrast effectively, as can easily be seen in Fig. 12(c), but an obvious "blocky effect" degrades the quality of the output. The BF&DRP and SDRCLCE techniques, as illustrated in Figs. 12(d) and 12(e), cannot effectively enhance image details. The proposed DRCDDE algorithm yields satisfying results of HDR compression with

Fig. 15. Experimental results of big target with cloud background (β = 1.25, p = 0.1): (a) original LDR image, (b) HE, (c) CLAHE, (d) BF&DRP, (e) SDRCLCE, and (f) the proposed DRCDDE.

local detail enhancement. As marked with the rectangle in Fig. 12(f), the LC of the airborne infrared target increases dramatically. Figure 13 illustrates a dim-airplane infrared scene with a complex surface background, consisting mainly of a forest region and a sky region, which causes a bimodal histogram. AGC gives Fig. 13(a) a dark appearance, and the detail structures are not clear-cut enough. Through intensity-level combination, HE enhances the image contrast evidently but has a stiff visual effect, as shown in Fig. 13(b); the real target contrast is also not improved. Using the local histogram, the CLAHE technique achieves a good detail-enhancement result in Fig. 13(c), where the branches and the point object are clearly visible. However, the "blocky effect" appears again in the flat sky area. The worst enhancement result is produced by SDRCLCE, which not only decreases the contrast of the real target but also gives the image a washed-out appearance in Fig. 13(e), because the intensity levels shift to the high end of the gray scale. The average intensity is significantly enhanced by the BF&DRP approach, but when we compare local detail-enhancement results such as the treetop, Fig. 13(d) is much worse than Fig. 13(f).

Table 1. Technical Specifications of the Infrared Thermal Imager

Name                    Value
Devices                 Cooled MWIR thermal imager, mercury cadmium telluride (MCT)
Spectral range          MWIR
Number of elements      640 × 512
Field of view (FOV)     W 15.3° × 12.5° / M 4.7° × 3.8° / M 1.2° × 0.95°
Size of element         15 μm
F#                      4
NETD                    25 mK

The previous two sets of images demonstrate that the proposed approach obtains satisfying contrast enhancement and noise suppression when processing infrared scenes with small, dim targets. Furthermore, the DRCDDE technique also yields satisfying enhancement results for larger details. Compared with the original image in Fig. 14(a), HE increases the overall luminance and image contrast significantly in Fig. 14(b), where the details in the dark region are visualized clearly, but it suffers from small-structure saturation in the light region, where the target is not clear. Both the CLAHE and BF&DRP methods accomplish local-contrast improvement: they represent the branch structures, as shown in Figs. 14(c) and 14(d), but improve the target area poorly. The SDRCLCE technique yields a visual appearance similar to our method, but comparing Fig. 14(e) with Fig. 14(f), the latter renders the detail of the tree better. To further test the effectiveness of the method, Fig. 15 shows an infrared scene of a big airplane target with a certain shape and texture, a complex cloud background, and a histogram characterized by a large concentration of pixels. After AGC processing, the steep histogram is broadened, which means more effective use of the gray scale. As a consequence, it yields a better-improved image in Fig. 15(a), although the pixels at high intensity values are saturated, leaving the airplane's detail structure invisible. Due to the combination of adjacent intensity levels, HE causes not only the loss of details but also the "blocky effect," which can be clearly seen in Fig. 15(b). The results shown in Figs. 15(c) and 15(d) both realize local image-contrast preservation, where the engines on the two sides are visualized; but the overall luminance is not enhanced evidently, giving a darker appearance and poor visibility. Comparing Fig. 15(e) with Fig. 15(f), we conclude that both the SDRCLCE technique and our approach achieve the desired processing, clearly representing the airplane contour; the latter also yields better contrast improvement between the dark and light areas. Through subjective visual-quality evaluation of the five sets of images, we conclude that our algorithm is effective for processing various kinds of infrared image data, objects, and scenes. Compared with existing methods, the DRCDDE technique yields the best performance in terms of HDR compression, perceptibility of details, dim-target enhancement, and noise suppression.

B. Quantitative Criteria Comparison

To perform an objective evaluation, this section quantitatively compares the proposed algorithm with the other five methods. We set up a joint evaluation scheme with several criteria to make the comparison more accurate. ① Linear index of fuzziness (fuzzy) [22]: based on spatial-domain analysis, this is useful for quantitatively comparing the enhancement performance of different methods. It is defined by

φ_f = (2 / MN) ∑_m ∑_n min{p_mn, 1 − p_mn},   (25)

and the fuzzy property is described as

p_mn = sin[(π/2) · (1 − f_mn / f_max)],   (26)

where f_mn and f_max denote the gray value of pixel (m, n) and the maximum gray value of an M × N input image. A smaller φ_f means better image-enhancement performance. ② Local SNR: this reflects the relative brightness between the small target and the clutter fluctuation in the local background. A larger SNR value means better perceptibility of details. The SNR for a small target is defined as

SNR = (f_T − f_B) / σ.   (27)

Local SNR is explained by a tricyclic structure: the inner ring is the target area, with f_T denoting the target intensity value; the middle ring is the transition area; and the outer ring is the background area, with f_B denoting the clutter intensity value and σ denoting its variance. ③ Local contrast (LC): this describes the difference in intensity value between the target and the surrounding background, which directly evaluates the image-enhancement quality. LC adopts the following definition:

L_C = (f_T − f_B) / f_B,   (28)

where f_T denotes the mean intensity value of the target area and f_B denotes the mean intensity value of the adjacent background area. ④ Detail variance (DV): this gives the variance of all pixels in the detail area; its definition is similar to that of the SNR. Variance statistically characterizes the overall deviation of pixel values from the average, reflecting the intensity of value variation. A larger DV value indicates greater
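The first three criteria reduce to a few lines; the following is a minimal sketch under our own naming, with DV taken as plain variance over the detail area (np.var), which is why it is not written out separately.

```python
import numpy as np

def fuzziness(img):
    """Linear index of fuzziness, Eqs. (25)-(26); smaller means crisper membership."""
    p = np.sin(np.pi / 2.0 * (1.0 - img / img.max()))
    return 2.0 / img.size * np.minimum(p, 1.0 - p).sum()

def local_snr(f_t, f_b, sigma_b):
    """Eq. (27): target intensity vs. background clutter intensity and deviation."""
    return (f_t - f_b) / sigma_b

def local_contrast(f_t, f_b):
    """Eq. (28): relative target/background intensity difference."""
    return (f_t - f_b) / f_b
```

A pure black-and-white image has zero fuzziness, since every pixel's membership p_mn is exactly 0 or 1.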

Table 2. Quantitative Comparison

Image              AGC      HE       CLAHE    BF&DRP   SDRCLCE  DRCDDE
Fig. 12  Fuzzy     0.35     0.40     0.42     0.46     0.49     0.35
         SNR       2.06     2.67     4.77     3.12     1.05     10.75
         LC        0.034    0.021    0.096    0.025    0.004    0.12
         DV        2.32     2.31     7.9      3.42     0.44     15.77
Fig. 13  Fuzzy     0.29     0.40     0.31     0.38     0.47     0.23
         SNR       3.96     3.72     5.56     6.0      1.57     16.81
         LC        0.092    0.074    0.219    0.09     0.022    0.316
         DV        2.97     4.81     8.64     5.70     0.80     11.35
Fig. 14  Fuzzy     0.28     0.40     0.29     0.34     0.47     0.26
         SNR       17.1     16.4     10.48    17.26    15.51    31.33
         LC        0.384    0.236    0.630    0.326    0.2      0.649
         DV        17.19    18.47    18.18    17.93    34.64    40.13
Fig. 15  Fuzzy     0.19     0.40     0.14     0.05     0.28     0.22
         SNR       0.62     0.45     0.92     0.98     0.91     1.06
         LC        0.18     0.033    0.299    0.411    0.251    0.315
         DV        0        0.27     19.54    26.09    25.80    27.30
Avg      Fuzzy     0.278    0.40     0.290    0.308    0.428    0.265
         SNR       5.935    5.810    5.433    6.840    4.760    14.988
         LC        0.173    0.091    0.311    0.213    0.1193   0.350
         DV        5.620    6.465    13.565   13.285   15.420   23.638

fluctuation in the detail area, which means more detail structures or more obvious contrast. Using the above objective evaluation criteria, the quantitative comparison results of the six methods for Figs. 12–15 are listed in Table 2, where the best value for each criterion marks the best-performing method. From Table 2 we can draw the following conclusions: ① the DRCDDE technique achieves the largest SNR value, outclassing the others, which indicates that the proposed algorithm has the best ability to enhance details and project a small target during DRC. ② According to the comparison results for fuzzy and LC, DRCDDE outperforms the other methods in improving detail richness and LC. Although CLAHE occasionally scores better than DRCDDE, taking the visual quality of all sets of images into account, our approach is clearly better. ③ The DRCDDE algorithm also obtains the largest DV value, more than three times that of the original image. This result validates that the proposed method implements DRC while preserving local detail and local texture contrast most effectively. These methods have been tested in a Windows XP environment on a PC (CPU: Intel Core i5, 2.4 GHz; memory: 3 GB) to compare computational cost. A group of real infrared images of size 640 × 512 is used for each method, and the average calculation time in MATLAB 2010 is listed in Table 3. The comparison results indicate that the computational load of the proposed algorithm increases dramatically. Therefore, it is necessary to adopt

Table 3. Time Comparison of Different Methods (s)

AGC      HE       CLAHE    BF&DRP   SDRCLCE  DRCDDE
0.075    0.067    0.106    8.822    0.754    17.381


effective optimization strategies for real engineering applications. Regarding the time profile of each step of the algorithm flow, we would first adopt a rapid implementation of the most time-consuming step, the bilateral filtering [17,18]. Porikli designed the most efficient constant-time O(1) BF to date; its processing time is under 0.3 s for a 1 MB gray-level image, independent of the filter size. Using such fast methods, the computational load of the bilateral filtering can be greatly reduced. Then, a data-flow parallel-processing model is established based on the framework in Fig. 3: the base compression and the detail enhancement are segmented as two parallel subtasks of the largest granularity, building an optimal parallel-processing mechanism. Finally, using high-performance parallel-processing hardware designed with a multicore DSP, real-time realization can be achieved.

4. Conclusion

To make the high-data-width infrared thermal imager compatible with low-data-width devices for display and processing, this article has presented a new HDR compression and detail-enhancement algorithm for infrared images, named DRCDDE. The core idea is to separate the HDR data into a large-scale component and a small-scale component, and then to compress or expand the dynamic range of each independently. Experimental results show that this idea is necessary and effective for implementing HDR compression together with small-temperature-difference signal enhancement. For the extracted base image, building on existing methods, the paper has set up an FLCP operator model according to the characteristics of infrared images, and used a newly designed adaptive ITF to accomplish adaptive HDR data compression with contrast preservation. This approach not only

simplifies the computation of the model greatly, but also enables the compressed image to match the original overall dynamic range, representing the details in both dark and light areas across the whole gray scale. Because the high-frequency detail protection of bilateral filtering lets high-frequency noise leak into the detail image, this paper has designed an adaptive intensity-level extension criterion to enhance the low-contrast details and suppress noise simultaneously. We adopt real infrared images and a set of quantitative evaluation criteria to demonstrate the advantages of DRCDDE. In summary, the proposed algorithm can implement HDR compression with local-contrast preservation in different temperature backgrounds for various infrared scenes, achieve excellent detail enhancement and noise suppression for infrared data of different dynamic ranges, and, in particular, project a small target of low SNR, which is highly advantageous in practical applications. The statistical processing of each pixel within its neighborhood increases the computational load, which requires further optimization and improvement. Although the established FLCP operator can flexibly control the contrast enhancement, it introduces a white halo around edges, which is particularly evident in images with a quite flat background, as in Fig. 7. We will focus on these drawbacks in future work.

References

1. J. Silverman, "Display and enhancement of infrared images," in International Conference on Image Processing and its Applications (1992), pp. 345–348.
2. Digital detail enhancement technical note [online], http://www.flir.com (2005).
3. R. C. Gonzalez and Q. Q. Ruan, Digital Image Processing, 2nd ed. (PHEI, 2007).
4. S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. H. Romeny, J. B. Zimmerman, and K. Zuiderveld, "Adaptive histogram equalization and its variations," Comput. Vis. Graph. Image Process. 39, 355–368 (1987).
5. K. Zuiderveld, "Contrast limited adaptive histogram equalization," in Graphics Gems IV (Academic, 1994), pp. 474–485.
6. F. Branchitta, M. Diani, and G. Corsini, "Dynamic-range compression and contrast enhancement in infrared imaging systems," Opt. Eng. 47, 076401 (2008).
7. V. E. Vickers, "Plateau equalization algorithm for real-time display of high quality infrared imagery," Opt. Eng. 35, 1921–1926 (1996).
8. B. J. Wang, "Research on enhancement algorithm for low SNR IR target images," Ph.D. dissertation (Xidian University, 2005).
9. K. Liang, "A new adaptive contrast enhancement algorithm for infrared images based on double plateaus histogram equalization," Infrared Phys. Technol. 55, 309–315 (2012).
10. R. Fattal, D. Lischinski, and M. Werman, "Gradient domain high dynamic range compression," in Proceedings of the 29th Conference on Computer Graphics and Interactive Techniques (2002), pp. 249–256.
11. Y. Monobe, "Dynamic range compression preserving local image contrast for digital video camera," IEEE Trans. Consum. Electron. 51, 1–10 (2005).
12. C. Y. Tsai, "A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras," EURASIP J. Image Video Process. 2011, 1–19 (2011).
13. C. Y. Tsai, "A fast dynamic range compression with LC preservation algorithm and its application to real-time video enhancement," IEEE Trans. Multimedia 14, 1140–1152 (2012).
14. B. Liu, X. Wang, W. Q. Jin, Y. Chen, C. L. Liu, and X. Liu, "Infrared image detail enhancement based on local adaptive gamma correction," Chin. Opt. Lett. 10, 021002 (2012).
15. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in 6th International Conference on Computer Vision (1998), pp. 839–846.
16. S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, "Bilateral filtering: theory and applications," Found. Trends Comput. Graph. Vis. 4, 1–73 (2008).
17. B. Weiss, "Fast median and bilateral filtering," ACM Trans. Graph. 25, 519–526 (2006).
18. F. Porikli, "Constant time O(1) bilateral filtering," in IEEE Conference on Computer Vision and Pattern Recognition (2008).
19. F. Branchitta, M. Diani, and G. Corsini, "New technique for the visualization of high dynamic range infrared images," Opt. Eng. 48, 096401 (2009).
20. C. Zuo, Q. Chen, N. Liu, J. L. Ren, and X. B. Sui, "Display and detail enhancement for high-dynamic-range infrared images," Opt. Eng. 50, 127401 (2011).
21. E. Peli, "Contrast in complex images," J. Opt. Soc. Am. A 7, 2032–2040 (1990).
22. R. Lai, Y. T. Yang, B. J. Wang, and H. X. Zhou, "A quantitative measure based infrared image enhancement algorithm using plateau histogram," Opt. Commun. 283, 4283–4288 (2010).
