An improved image fusion algorithm
I. Introduction
Image fusion synthesizes the image information obtained by two or more sensors to form a more accurate and information-rich image. Image fusion is divided into three levels, from low to high: pixel-level fusion, feature-level fusion, and decision-level fusion. The wavelet transform is a commonly used method in image fusion, but research shows that it is not the optimal function representation in the high-dimensional case. To represent and process high-dimensional spatial data such as images more effectively, researchers have proposed a series of multi-scale geometric analysis tools such as the Ridgelet, Curvelet, Bandelet, Contourlet, and wavelet-Contourlet transforms. These methods lay the foundation for constructing ever more efficient fusion methods in the field of image fusion.
The wavelet transform and the Contourlet transform are the two main tools used for image fusion. The wavelet transform is mature in both theory and practice and has long been a standard tool for image fusion, but it can only capture information in three directions: horizontal, vertical, and diagonal. The Contourlet transform compensates for this directional limitation of the wavelet transform and can capture information in any direction of the image. At the same time, the two transforms process an image in a similar way: the image is decomposed into low-frequency and high-frequency parts, and fusion is performed on each part separately. The wavelet transform and the Contourlet transform can therefore be combined to improve the quality of the fused image.
Research on image fusion covers not only algorithms but also fusion rules. Current fusion rules generally operate on regions (windows) rather than single pixels: the low-frequency part mainly reflects the energy information of the image, while the high-frequency part reflects its boundary information, that is, the degree of change, and the variance reflects the overall variation between image pixels. Therefore, this paper adopts a fusion rule for the high-frequency part that selects or weight-averages coefficients according to the regional variance.
II. The Wavelet-Contourlet Transform
1. Wavelet transform
In 1989, inspired by Burt and Adelson's image decomposition and reconstruction pyramid algorithm (the Gauss-Laplace pyramid), Mallat introduced the multi-scale analysis ideas of computer vision into wavelet analysis, giving the concept of multiresolution analysis and the Mallat fast algorithm.
The two-dimensional Mallat algorithm can be expressed as:

$$A_{j+1} = H_r H_c A_j, \quad D^1_{j+1} = H_r G_c A_j, \quad D^2_{j+1} = G_r H_c A_j, \quad D^3_{j+1} = G_r G_c A_j \qquad (1)$$

In the formula, $A_j$ is the low-frequency component at spatial resolution $2^{-j}$; $D^1_{j+1}$, $D^2_{j+1}$, and $D^3_{j+1}$ are the high-frequency components in the horizontal, vertical, and diagonal directions at that resolution; $j$ ranges over the integers; and $H$ and $G$ are a pair of conjugate mirror filters applied along the rows (subscript $r$) and columns (subscript $c$).
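As an illustration, the separable filter-then-downsample structure of one level of the 2D Mallat algorithm can be sketched with the Haar filter pair (a minimal example for clarity; any conjugate mirror filter pair could be substituted):

```python
import numpy as np

def haar_analysis_1d(x):
    """One level of 1D Haar analysis along the last axis:
    returns (low-pass, high-pass) outputs, each downsampled by 2."""
    lo = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)  # H: averaging filter
    hi = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)  # G: differencing filter
    return lo, hi

def mallat_2d(image):
    """One level of the separable 2D Mallat algorithm: filter the rows,
    then the columns, yielding the approximation A and the three detail
    subbands D1 (horizontal), D2 (vertical), D3 (diagonal)."""
    lo_r, hi_r = haar_analysis_1d(image)               # filter rows
    A,  D1 = haar_analysis_1d(lo_r.swapaxes(-1, -2))   # columns of row low-pass
    D2, D3 = haar_analysis_1d(hi_r.swapaxes(-1, -2))   # columns of row high-pass
    return (A.swapaxes(-1, -2), D1.swapaxes(-1, -2),
            D2.swapaxes(-1, -2), D3.swapaxes(-1, -2))

img = np.arange(16, dtype=float).reshape(4, 4)
A, D1, D2, D3 = mallat_2d(img)
```

Each subband is half the original size in both dimensions, and because the Haar pair is orthonormal the total energy of the four subbands equals that of the input image.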
2. Contourlet transform
Because the wavelet transform does not represent the directional information of an image well, Do and Vetterli proposed the Contourlet transform in 2002. The Contourlet transform is a multi-resolution, local, and directional image representation method; it can provide information in any direction and yields a sparse representation of a two-dimensional image. The Contourlet transform uses a double filter bank that processes the image in two steps:
2.1 The Laplacian pyramid (LP) decomposes the input image. An LP decomposition consists of four steps: low-pass filtering, down-sampling, interpolation (up-sampling), and band-pass filtering. Repeatedly applying the LP decomposition to the low-frequency sub-band yields low-frequency and high-frequency sub-bands at a range of scales.
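The four LP steps can be sketched for a 1D signal (the 2D case applies the same steps separably); the kernel here is an illustrative low-pass filter, not the one used in this paper's experiments:

```python
import numpy as np

KERNEL = np.array([0.25, 0.5, 0.25])  # illustrative low-pass kernel (assumed)

def lp_level(x, kernel=KERNEL):
    """One Laplacian-pyramid level: 1) low-pass filter, 2) downsample by 2,
    3) interpolate back up, 4) subtract to obtain the band-pass residual."""
    blurred = np.convolve(x, kernel, mode="same")       # 1) low-pass filtering
    low = blurred[::2]                                  # 2) down-sampling
    up = np.zeros_like(x)
    up[::2] = low
    interp = 2 * np.convolve(up, kernel, mode="same")   # 3) interpolation amplification
    band = x - interp                                   # 4) band-pass residual
    return low, band

x = np.sin(np.linspace(0, 4 * np.pi, 16))
low, band = lp_level(x)
```

Because the band-pass signal is defined as the residual, adding the interpolated low-pass signal back to it reconstructs the input exactly; this is the property the Contourlet transform relies on when inverting the pyramid.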
2.2 A directional filter bank performs directional analysis on the high-frequency sub-bands obtained from the LP decomposition. Its purpose is to capture the directional high-frequency information of the image and to combine singular points lying along the same direction into a single coefficient.
The directional filter bank decomposes the image in a tree-structured fashion, partitioning the frequency domain into wedge-shaped sub-bands. The method first uses the fan filter and quincunx sampling shown in Figure 1 to split the input image into horizontal and vertical sub-bands, and then introduces a shearing resampling operator to obtain finer directional sub-bands.
Figure 1 Filter bank (fan filter and quincunx sampling)
In Figure 1, the sampling matrices are quincunx matrices, and the black sector areas represent the ideal frequency-domain decomposition of each filter.
3. Wavelet-Contourlet transform
The LP decomposition used in the first stage of the Contourlet transform is redundant, giving the Contourlet transform a redundancy of 4/3. In addition, the de-correlation of the LP decomposition is not as good as that of the wavelet transform. To address these problems, Eslami and Radha proposed the wavelet-Contourlet transform, which is composed of a two-stage filter bank. The first stage uses the wavelet transform to obtain the high-frequency components of the image, reducing the correlation of the detail information; the second stage applies the directional filter bank to those high-frequency components, yielding sub-bands in all directions, as shown in Figure 2.
III. Fusion Algorithm Based on Variance-Select Averaging
The steps of the fusion algorithm given in this paper are as follows:
1. Perform the wavelet-Contourlet transform on the left-blurred and right-blurred source images respectively, obtaining the low-frequency and high-frequency parts of each;
2. Fuse the low-frequency part with a simple weighted-average rule.
After the wavelet-Contourlet transform, the low-frequency coefficients concentrate most of the energy of the original image and determine its overall appearance. This paper therefore uses a weighted-average fusion rule for the low-frequency sub-band coefficients.
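The low-frequency rule amounts to a single weighted sum; a minimal sketch:

```python
import numpy as np

def fuse_lowpass(cA, cB, w=0.5):
    """Weighted-average fusion of the low-frequency sub-band coefficients
    of source images A and B; w is the weight on A (w=0.5 gives the mean)."""
    return w * cA + (1 - w) * cB
```

With the default `w=0.5` both sources contribute equally, which is the usual choice when neither source image is known to be more reliable.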
3. Fuse the high-frequency part with the variance-select weighted-average rule.
After the wavelet-Contourlet transform, the high-frequency coefficients mainly contain the detail and edge information of the image, and each high-frequency sub-band embodies a directional characteristic. This paper adopts a fusion rule based on variance-select averaging, as follows:
First, for each decomposition layer $l$ and direction $k$ of the high-frequency sub-images of the left-blurred image $A$ and the right-blurred image $B$, compute the local variance around each pixel over an $M \times N$ window, denoted $\sigma_A^{l,k}(i,j)$ and $\sigma_B^{l,k}(i,j)$. See formula (2):

$$\sigma^{l,k}(i,j) = \frac{1}{M N} \sum_{(m,n) \in W(i,j)} \left[ C^{l,k}(m,n) - \mu^{l,k}(i,j) \right]^2 \qquad (2)$$

where $W(i,j)$ is the $M \times N$ window centered at $(i,j)$, $C^{l,k}$ is the high-frequency coefficient, and $\mu^{l,k}(i,j)$ is the mean of the coefficients in the window.

Then normalize the two variances:

$$\bar{\sigma}_A(i,j) = \frac{\sigma_A^{l,k}(i,j)}{\sigma_A^{l,k}(i,j) + \sigma_B^{l,k}(i,j)}, \qquad \bar{\sigma}_B(i,j) = \frac{\sigma_B^{l,k}(i,j)}{\sigma_A^{l,k}(i,j) + \sigma_B^{l,k}(i,j)} \qquad (3)(4)$$

Define a threshold $T$ (the simulations in this paper take $T = 0.6$). Then:

$$F^{l,k}(i,j) = \begin{cases} A^{l,k}(i,j), & \bar{\sigma}_A(i,j) - \bar{\sigma}_B(i,j) > T \\ B^{l,k}(i,j), & \bar{\sigma}_B(i,j) - \bar{\sigma}_A(i,j) > T \end{cases} \qquad (5)$$

$$F^{l,k}(i,j) = \bar{\sigma}_A(i,j)\, A^{l,k}(i,j) + \bar{\sigma}_B(i,j)\, B^{l,k}(i,j), \quad \left| \bar{\sigma}_A(i,j) - \bar{\sigma}_B(i,j) \right| \le T \qquad (6)$$

Here $F^{l,k}(i,j)$, $A^{l,k}(i,j)$, and $B^{l,k}(i,j)$ denote the coefficients of the fused image, image $A$, and image $B$ in decomposition layer $l$ and direction $k$. Equations (5) and (6) show that when the normalized local variances of images $A$ and $B$ differ greatly, one image contains rich edge-detail information while the other contains little, so the coefficient with the larger local variance is selected as the fused coefficient; when the normalized local variances are close, both images contain rich edge-detail information, and the weighted-average rule determines the fused coefficient. This preserves the detailed features of the images while avoiding information loss.
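Under the assumptions of a square local-variance window and the select-or-average behavior described for equations (5) and (6), the high-frequency rule for one sub-band can be sketched as:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_variance(c, r=1):
    """Local variance of each coefficient over a (2r+1)x(2r+1) window,
    computed with edge padding."""
    p = np.pad(c, r, mode="edge")
    win = sliding_window_view(p, (2 * r + 1, 2 * r + 1))
    return win.var(axis=(-1, -2))

def fuse_highpass(cA, cB, T=0.6, r=1):
    """Variance-select / weighted-average rule for one high-frequency
    sub-band: normalise the two local-variance maps; where they differ by
    more than the threshold T, keep the coefficient with the larger local
    variance, otherwise take the variance-weighted average."""
    vA, vB = local_variance(cA, r), local_variance(cB, r)
    s = vA + vB + 1e-12                      # guard against division by zero
    nA, nB = vA / s, vB / s                  # normalised local variances
    return np.where(nA - nB > T, cA,
            np.where(nB - nA > T, cB,
                     nA * cA + nB * cB))     # close variances: weighted average
```

For example, where image A has a strong edge and image B is flat, the rule keeps A's coefficients wherever A's local variance dominates.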
4. Perform the inverse wavelet-Contourlet transform to obtain the final fused image.
The specific process is shown in Figure 3.
Figure 3 Wavelet-Contourlet transform image fusion block diagram
IV. Evaluation Metrics and Analysis of Experimental Results
To verify the effectiveness of the proposed algorithm, it is compared, under the same fusion rules, with the wavelet transform and the Contourlet transform. The images are decomposed and reconstructed using the db5 wavelet, and the threshold is set to 0.6.
To evaluate the fused image objectively, a clear grayscale image was selected as the reference image. The cross entropy, deviation, and spatial frequency of the fused image with respect to the reference image are used as objective criteria for evaluating the fusion quality.
1. Cross entropy
The cross entropy of the fused image $F$ and the standard reference image $R$ is defined as:

$$CE(R, F) = \sum_{i=0}^{L-1} p_{R,i} \log_2 \frac{p_{R,i}}{p_{F,i}} \qquad (7)$$

In equation (7), $p_{R,i}$ and $p_{F,i}$ are the probabilities that a pixel of $R$ and of $F$, respectively, takes the gray value $i$, and $L$ is the total number of gray levels of the image. Cross entropy, also known as relative entropy, is a key indicator of the difference between two images: the smaller the cross entropy, the smaller the difference between $R$ and $F$, the more information has been extracted into the fused image, and the better the fusion effect.
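A minimal histogram-based computation of the cross entropy (empty bins are skipped to avoid log(0), a common convention rather than part of the original definition):

```python
import numpy as np

def cross_entropy(ref, fused, levels=256):
    """Cross entropy between the gray-level distributions of the reference
    and fused images; smaller values indicate a better fusion."""
    pR, _ = np.histogram(ref, bins=levels, range=(0, levels), density=True)
    pF, _ = np.histogram(fused, bins=levels, range=(0, levels), density=True)
    m = (pR > 0) & (pF > 0)               # skip empty bins to avoid log(0)
    return float(np.sum(pR[m] * np.log2(pR[m] / pF[m])))
```

An image compared with itself has identical histograms, so its cross entropy is zero.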
2. Deviation
The deviation of $F$ from $R$ is defined as:

$$D = \frac{1}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N} \frac{\left| F(i,j) - R(i,j) \right|}{R(i,j)} \qquad (8)$$

In formula (8), $F(i,j)$ and $R(i,j)$ denote the gray values of the fused image and the reference image at pixel $(i,j)$, and $M$ and $N$ are the image dimensions. The deviation measures the difference between the gray values of the two images; the smaller the deviation, the better the fusion effect.
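The deviation can be computed directly as the mean relative gray-value difference (the exact normalisation of formula (8) is assumed here; a small epsilon guards against zero-valued reference pixels):

```python
import numpy as np

def deviation(ref, fused):
    """Mean relative deviation of the fused image's gray values from the
    reference image's; zero means the images are identical."""
    ref = ref.astype(float)
    return float(np.mean(np.abs(fused - ref) / (ref + 1e-12)))
```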
3. Spatial frequency
The spatial frequency of an image is:

$$SF = \sqrt{RF^2 + CF^2} \qquad (9)$$

$RF$ in (9) is the row frequency of the image, calculated as shown in (10):

$$RF = \sqrt{\frac{1}{M N} \sum_{i=1}^{M} \sum_{j=2}^{N} \left[ F(i,j) - F(i,j-1) \right]^2 } \qquad (10)$$

$CF$ in (9) is the column frequency of the image, calculated as shown in (11):

$$CF = \sqrt{\frac{1}{M N} \sum_{i=2}^{M} \sum_{j=1}^{N} \left[ F(i,j) - F(i-1,j) \right]^2 } \qquad (11)$$

In the above formulas, $F(i,j)$ is the gray value at pixel $(i,j)$, and $M$ and $N$ are the numbers of rows and columns of the image, respectively.
The spatial frequency of an image is an indicator related to the variance and reflects the overall activity of the image in the spatial domain. The larger the spatial frequency, the more active and the clearer the image, and the better the fusion effect.
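The spatial frequency follows directly from the difference definitions (this sketch normalises by the number of differences rather than exactly $MN$, a negligible distinction for comparative evaluation):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF^2 + CF^2), with RF/CF the RMS
    horizontal/vertical first differences of the gray values."""
    img = img.astype(float)
    rf2 = np.mean(np.diff(img, axis=1) ** 2)  # row frequency: horizontal diffs
    cf2 = np.mean(np.diff(img, axis=0) ** 2)  # column frequency: vertical diffs
    return float(np.sqrt(rf2 + cf2))
```

A constant image has zero spatial frequency, while a checkerboard (the most "active" pattern) maximizes it.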
Figure 4 shows the results of the fusion experiment. Figure 4(a) is the standard reference image; Figures 4(b) and 4(c) are the left-blurred and right-blurred images to be fused, obtained with a Gaussian filter; Figure 4(d) is the fused image after the wavelet transform; Figure 4(e) is the fused image after the Contourlet transform; and Figure 4(f) is the fused image after the wavelet-Contourlet transform.
(a) Standard reference image (b) Blurred image to be fused on the left side (c) Blurred image to be fused on the right side
(d) Wavelet selection average (e) Contourlet selection average (f) Wavelet-Contourlet selection average
The performance evaluation data of the three transformations for fusion is shown in Table 1.
Table 1 Performance comparison data analysis of three fusion transformations
As Table 1 shows, compared with the wavelet transform, the wavelet-Contourlet transform increases the spatial frequency by 11.27%, lowers the deviation by 10.84%, and lowers the cross entropy by 23.81%.
Compared with the Contourlet transform, the wavelet-Contourlet transform has almost the same spatial frequency, while its deviation is 1.91% lower and its cross entropy is 11.11% lower. Considering all the evaluation indicators together, the effectiveness of the proposed algorithm is thus verified.
V. Conclusion
This paper proposes an improved algorithm that applies a variance-select weighted-average rule to the high-frequency coefficients and weighted-average processing to the low-frequency coefficients. Experiments show that the proposed algorithm captures the detail and edge information of the image more effectively. According to the experimental data, the deviation and cross entropy of the wavelet-Contourlet transform are lower than those of the wavelet transform and the Contourlet transform.
Therefore, under the same fusion rules, the image fusion algorithm based on the wavelet-Contourlet transform and the regional-variance weighted average obtains better results than the wavelet-transform and Contourlet-transform fusion algorithms.

