Article

SAR Image Fusion Classification Based on the Decision-Level Combination of Multi-Band Information

1 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
2 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(9), 2243; https://doi.org/10.3390/rs14092243
Submission received: 22 April 2022 / Accepted: 5 May 2022 / Published: 7 May 2022
(This article belongs to the Special Issue Recent Progress and Applications on Multi-Dimensional SAR)

Abstract: Synthetic aperture radar (SAR) is an active coherent microwave remote sensing system. SAR systems operating in different bands produce different imaging results for the same area, giving each band distinct advantages and limitations for SAR image classification. Therefore, to synthesize the classification information of SAR images across bands, a SAR image fusion classification method based on the decision-level combination of multi-band information is proposed in this paper. Within the proposed method, Dempster–Shafer evidence theory is introduced to model the uncertainty of each pixel's classification result and to combine the classification results of multiple band SAR images. A convolutional neural network is used to classify the single-band SAR images. The belief entropy of each pixel is calculated to measure the uncertainty of the single-band classification, and a basic probability assignment (BPA) function is generated. The idea of term frequency-inverse document frequency from natural language processing is combined with the conflict coefficient to obtain the weight of each band. Meanwhile, the neighborhood classification of each pixel in each band is considered to obtain the total weight of each band sensor, generate the weighted average BPA, and obtain the final ground object classification result after fusion. The validity of the proposed method is verified in two groups of multi-band SAR image classification experiments, where it effectively improves accuracy compared to the modified average approach.


1. Introduction

Remote sensing image processing has been a hot research topic in recent years [1,2,3]. Synthetic aperture radar (SAR) is a high-resolution imaging radar that operates regardless of weather and daylight conditions, which gives it great application value. SAR image classification assigns each pixel of a SAR image to its corresponding category [4,5,6,7,8]. Single-band SAR images provide limited target information, while multi-band SAR systems can simultaneously perform high-resolution imaging in multiple bands [9,10] and thus describe the characteristics of the surface more comprehensively. By fusing the classification results of multi-band SAR images, a more accurate and reliable classification can be obtained than from single-band information alone.
In recent decades, the classification of SAR images has flourished [11,12,13,14,15]. According to whether labeled data participate in the training process, existing algorithms can be roughly divided into three categories: unsupervised, supervised, and semi-supervised. Zhao et al. [16] proposed a discriminative deep belief network that builds multiple weak classifiers to make decisions and inputs the decision features into a deep belief network to learn deep features for high-resolution SAR image classification. Hou et al. [17] proposed an algorithm combining superpixel segmentation with stacked autoencoders for polarimetric SAR image classification: a multilayer autoencoder extracts deep features, and superpixel segmentation refines the classification results. A superpixel is a small region composed of a series of adjacent pixels with similar color, brightness, texture, and other characteristics; most such regions retain information useful for further image segmentation and generally do not destroy the boundaries of objects in the image. Zhang et al. [18] proposed a deep ensemble model combining gradient boosting and a convolutional neural network (CNN), which performs well on optical remote sensing images. Zhou et al. [19] proposed a deep convolutional neural network that automatically learns hierarchical polarization-spatial features from the data to classify SAR images. Shang et al. [20] proposed a densely connected, depthwise separable convolutional neural network that replaces standard convolutions with depthwise separable convolutions and extracts features independently on each channel of the polarimetric image. Ni et al. [21] used long short-term memory networks for recurrent learning, random adjacent pixel blocks for data augmentation, and conditional random fields for post-processing, which performs well.
In general, SAR image classification technology has matured considerably, but how to comprehensively exploit the features of images from multiple bands and accurately classify SAR images remains an open problem. In this paper, in order to fully exploit the complementarity of multi-band classification information for SAR classification of the same scene, a new decision fusion method, the SAR image classification method based on the decision-level combination of multi-band information, is proposed. Within the proposed method, Dempster–Shafer theory [22,23,24] is employed to model the uncertainty of each pixel's classification result and to combine the classification results of multiple band SAR images. Firstly, multi-band SAR image data collected by the sensors are input to a CNN to obtain single-band classification results. Secondly, the belief entropy [25] of each pixel's classification is calculated to measure the uncertainty of the classification, and a basic probability assignment (BPA) is generated after normalization for each band. Then, based on the idea of term frequency-inverse document frequency (TF-IDF) [26,27] and the neighborhood influence, the total weight of every band for each pixel is calculated to implement a weighted average combination of the BPAs coming from the multiple band images. Finally, the classification result is obtained from the combined BPA. Our method uses decision fusion under the framework of evidence theory to measure the uncertainty of the classification results of different bands. Evidence combination fuses the classification results of the different bands, which reduces their uncertainty and improves classification accuracy. The difficulty in decision fusion is how to measure the complementarity between pieces of evidence. By introducing the TF-IDF idea from text mining into the conflict coefficient, we propose a new method to measure the similarity of evidence. This new measure, combined with neighborhood information, captures the complementarity between pixels well and thus yields more accurate decision fusion results.
The rest of the paper is organized as follows: Section 2 introduces single-band SAR image classification based on convolutional neural networks. Section 3 proposes the SAR image classification method based on the decision-level combination of multi-band information. In Section 4, two sets of multi-band SAR images are used to verify the effectiveness of the proposed method. Section 5 summarizes the work of this paper.

2. Single-Band SAR Image Classification Based on Convolutional Neural Networks

2.1. Convolutional Neural Networks (CNN)

CNN adopts a weight-sharing network structure to reduce the number of weights and the connections between layers of the network, and has been widely used in image processing [28], facial recognition [29,30], natural language processing [31,32], and other fields [33,34,35,36,37,38,39,40].

2.1.1. Convolutional Layers

Convolutional layers of a CNN mainly conduct convolution operations, through which image features are extracted. The convolution operation slides convolution kernels over the input matrix and computes the dot product with the current region; repeating this operation over the whole input yields the convolutional feature maps.

2.1.2. Pooling Layers

Pooling layers are connected after the convolutional layers to compress the extracted features, which highlights the effective information. Maximum pooling and average pooling are the variants generally applied: maximum pooling takes the maximum value of the current scan area, and average pooling takes its average value.

2.1.3. Fully Connected Layer

The fully connected layer integrates the features extracted by the previous layers and can be used for classification. The number of outputs equals the number of categories, and all nodes of the fully connected layer are connected to the previous layer.

2.2. Single-Band SAR Image Classification

In our work, the CNN structure for single-band SAR image classification is shown in Figure 1. The network is composed of three convolutional modules and three fully connected (FC) layers.
Each convolutional module includes a convolutional layer with a $3 \times 3$ kernel, a BatchNorm layer, and a ReLU layer. The outputs of the convolutional modules are:

$Out_1 = ReLU(BN(f_1(x)))$
$Out_2 = ReLU(BN(f_2(Out_1)))$
$Out_3 = ReLU(BN(f_3(Out_2)))$

where $x$ is the input, $Out_k$ is the output of the $k$-th convolutional module, $k = 1, 2, 3$, $f_k$ denotes the convolutional operation, $BN$ is the BatchNorm function, and $ReLU$ is the activation function.
The fully connected layers are used for classification. The first FC layer has 3872 input nodes and 4096 output nodes, the second has 4096 input nodes and 1024 output nodes, and the third has 1024 input nodes and as many output nodes as there are categories. The output of the three FC layers is:

$Out = FC_3(FC_2(FC_1(Out_3)))$

where $FC_k$ is the $k$-th fully connected layer, $k = 1, 2, 3$.
Patches of the single-band SAR image are imported to the network, and pixel-level classification results are output from the last fully connected layer.
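As an illustration, a minimal PyTorch sketch of this network is given below. The convolution channel widths (8, 16, 32) and the 11 × 11 input patch size are assumptions, chosen only so that the flattened feature size matches the 3872 input nodes of the first FC layer; the paper does not report these values.

```python
import torch
import torch.nn as nn

class SingleBandCNN(nn.Module):
    """Sketch of the single-band classifier of Section 2.2: three conv
    modules (3x3 conv -> BatchNorm -> ReLU) followed by three FC layers
    (3872 -> 4096 -> 1024 -> num_classes). Channel widths and the 11x11
    patch are assumptions: 32 * 11 * 11 = 3872."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3872, 4096), nn.ReLU(),
            nn.Linear(4096, 1024), nn.ReLU(),
            nn.Linear(1024, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One 11x11 single-channel patch per pixel to be classified.
logits = SingleBandCNN(num_classes=3)(torch.randn(1, 1, 11, 11))
probs = torch.softmax(logits, dim=1)  # per-class probabilities q_ab
```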

3. SAR Image Classification Method Based on the Decision-Level Combination of Multi-Band Information

Assume there are $n$ types of sensors with different wavebands, denoted as $X = \{x_1, x_2, \ldots, x_n\}$. Once the image data obtained by the sensors are classified, $h$ categories for each pixel $u_{ij}$ are generated, denoted as $\Theta = \{\theta_1, \theta_2, \ldots, \theta_h\}$. The flowchart of the SAR image classification method based on the decision-level combination of multi-band information is shown in Figure 2.
As can be seen from Figure 2, the classification results of each pixel in a single-band SAR image, obtained as in Section 2.2, are expressed as a probability matrix. The belief entropy, based on Shannon entropy, measures the reliability of different pieces of evidence. The belief entropy of each pixel's classification in the probability matrix is calculated to measure the uncertainty of the classification, and a BPA is generated for each band. Based on the idea of TF-IDF, the weight of each sensor is calculated. Then, considering the influence of the classification results in the neighborhood of each pixel in each band, the neighborhood influence weight is calculated. The two weights are multiplied, and the total weight is obtained after normalization. The weighted average of the BPAs of the different bands yields the average BPA, which is combined to obtain the final classification result.

3.1. Construction of Basic Probability Assignment (BPA) Functions of the Sensors’ Classification

3.1.1. Construct the Probability Matrix

By using the method in Section 2.2 to classify each pixel $u_{ij}$ in the image, a probability matrix $Q$ can be obtained:

$Q = \begin{bmatrix} q_{11} & \cdots & q_{1b} & \cdots & q_{1n} \\ \vdots & & \vdots & & \vdots \\ q_{a1} & \cdots & q_{ab} & \cdots & q_{an} \\ \vdots & & \vdots & & \vdots \\ q_{h1} & \cdots & q_{hb} & \cdots & q_{hn} \end{bmatrix}$

where $q_{ab}$ represents the probability that sensor $x_b$ recognizes pixel $u_{ij}$ as feature category $\theta_a$, $a = 1, 2, \ldots, h$, $b = 1, 2, \ldots, n$, and $i, j$ are the x and y coordinates of the pixel.

3.1.2. Construct the BPAs

Suppose $\Theta = \{\theta_1, \theta_2, \ldots, \theta_h\}$ is a frame of discernment with cardinality $h$; then the power set $2^\Theta$ of $\Theta$ contains $2^h$ elements, and a basic probability assignment (BPA) function $m$ must satisfy $m(\varnothing) = 0$ and $\sum_{A \subseteq \Theta} m(A) = 1$, where $\varnothing$ is the empty set, $A$ is a subset of $\Theta$, and $m(A)$ is the basic probability number of proposition $A$. For the probabilities of pixel $u_{ij}$'s classification result generated by each sensor $x_b$, an unnormalized BPA can be generated as $\hat{m}_b(\{\theta_a\}) = q_{ab}$. Then, the belief entropy is used to calculate the uncertainty of $\hat{m}_b$ and assign it to the mass of $\Theta$:

$\hat{m}_b(\Theta) = -\sum_{\theta \in \Theta} m(\{\theta\}) \ln \dfrac{m(\{\theta\})}{2^{|\theta|} - 1}$

where $|\theta|$ represents the cardinality of proposition $\theta$. Finally, $\hat{m}_b$ is normalized to obtain

$m_b(\{\theta_a\}) = \dfrac{\hat{m}_b(\{\theta_a\})}{\sum_{a=1}^{h} \hat{m}_b(\{\theta_a\}) + \hat{m}_b(\Theta)}$ for any $\theta_a \in \Theta$, and $m_b(\Theta) = \dfrac{\hat{m}_b(\Theta)}{\sum_{a=1}^{h} \hat{m}_b(\{\theta_a\}) + \hat{m}_b(\Theta)}$.

Calculating the belief entropy [41,42,43] of each pixel's classification result measures the uncertainty of the SAR image classification. Since evidence theory can effectively fuse uncertain information, evidence fusion achieves more accurate classification while reducing uncertainty.
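A minimal Python sketch of this BPA construction is given below. It assumes, as in the paper, that the focal elements are the $h$ singletons plus the full set $\Theta$, so $2^{|\theta|} - 1 = 1$ for every singleton and the belief entropy reduces to the Shannon entropy of the class probabilities; the function name and the small numerical guard are illustrative.

```python
import numpy as np

def build_bpa(q: np.ndarray) -> tuple[np.ndarray, float]:
    """Turn one band's softmax output q (length h, summing to 1) into a BPA.
    Singleton masses start from q; the belief entropy of the classification,
    -sum q_a ln(q_a / (2^1 - 1)), is assigned to the full set Θ, and the
    result is normalised so all masses sum to 1."""
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(q * np.log(q / (2 ** 1 - 1) + eps))
    total = q.sum() + entropy
    m_singletons = q / total        # m_b({θ_a})
    m_theta = entropy / total       # m_b(Θ)
    return m_singletons, m_theta

q = np.array([0.7, 0.2, 0.1])       # CNN probabilities for one pixel, one band
m, m_full = build_bpa(q)
print(m, m_full, m.sum() + m_full)  # masses sum to 1
```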

3.2. Calculate the Weights of Sensors in Different Bands

Here, we calculate the conflict coefficients between the sensors' classification results in each band to obtain the degree of conflict between the sensors. If the sum of the conflict coefficients between a certain sensor and the remaining sensors is large, the result obtained by this sensor is inconsistent with the judgments of the other sensors. By combining the conflict coefficient with the idea of TF-IDF, the weights of the different sensors are obtained.

3.2.1. Calculate the Conflict Coefficient between the BPA of Sensors in Different Bands

For two BPAs, $m_b$ and $m_c$, where $b$ and $c$ are the serial numbers of sensors in different bands, the conflict coefficient [44] measures the degree of conflict between $m_b$ and $m_c$; the greater the conflict, the greater its value:

$K_{bc} = \dfrac{1}{2}\left[ k_{bc} + d_{BPA}(m_b, m_c) \right]$

where $k_{bc}$ is the classic conflict coefficient,

$k_{bc} = \sum_{B \cap D = \varnothing} m_b(B)\, m_c(D)$

and $d_{BPA}(m_b, m_c)$ is the Jousselme evidence distance:

$d_{BPA}(m_b, m_c) = \sqrt{\dfrac{1}{2} (m_b - m_c)^T \underline{\underline{D}} (m_b - m_c)}$

where $\underline{\underline{D}}$ is a $2^h \times 2^h$ matrix with elements $\underline{\underline{D}}(B, D) = \dfrac{|B \cap D|}{|B \cup D|}$, and $B$ and $D$ are subsets of the frame of discernment.
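A possible Python sketch of the conflict coefficient is shown below. It restricts the BPAs to the focal elements actually used in this method, the $h$ singletons followed by $\Theta$, so the Jousselme matrix reduces to an $(h+1) \times (h+1)$ block rather than the full $2^h \times 2^h$ matrix; the helper name is illustrative.

```python
import numpy as np

def conflict_coefficient(mb: np.ndarray, mc: np.ndarray, h: int) -> float:
    """K_bc = (k_bc + d_BPA(m_b, m_c)) / 2 for BPAs restricted to the
    focal elements used here: mb, mc are length-(h+1) mass vectors
    [m({θ_1}), ..., m({θ_h}), m(Θ)]."""
    # Classic conflict: mass on pairs of disjoint focal elements, i.e.
    # distinct singletons (a singleton never conflicts with Θ).
    k = sum(mb[i] * mc[j] for i in range(h) for j in range(h) if i != j)
    # Jousselme similarity matrix D[B, D] = |B ∩ D| / |B ∪ D| on these sets.
    D = np.eye(h + 1)
    D[:h, h] = D[h, :h] = 1.0 / h    # singleton vs Θ: 1 / h
    diff = mb - mc
    d = np.sqrt(0.5 * diff @ D @ diff)
    return 0.5 * (k + d)
```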

3.2.2. Calculate the TF-IDF Weights of Sensors in Different Bands

TF-IDF is a keyword extraction method: TF-IDF = TF × IDF, where $TF$ is the number of occurrences of a term in an article, and $IDF$ weights $TF$ according to the importance of the term in the corpus:

$IDF = \log \left( \dfrac{C_{total}}{C_{number} + 1} \right)$

where $C_{total}$ is the total number of articles in the corpus and $C_{number}$ is the number of articles containing the term. Here, following IDF's weighting idea, suppose there are $n$ BPAs. If a certain BPA $m_b$, $b = 1, 2, \ldots, n$, is assumed to conflict completely with the remaining $n-1$ BPAs, that is, every conflict coefficient equals 1, then the counterpart of $C_{total}$ is $n - 1$. In reality, the sum of the conflict coefficients of $m_b$ with the remaining $n-1$ BPAs is $\sum_{c=1, c \neq b}^{n} K_{bc}$. Therefore, the weights of sensors in different bands can be obtained: for sensor $b$, an unnormalized weight of $m_b$ is

$\tilde{w}_b^1 = \log \left( \dfrac{n-1}{\sum_{c=1, c \neq b}^{n} K_{bc}} \right), \quad b = 1, 2, \ldots, n$

where $K_{bc}$ is the conflict coefficient between $m_b$ and $m_c$. The $n$ unnormalized weights are then normalized:

$w_b^1 = \dfrac{\tilde{w}_b^1}{\sum_{b=1}^{n} \tilde{w}_b^1}, \quad b = 1, 2, \ldots, n$
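A short Python sketch of this weighting, under the assumption that the pairwise conflict coefficients have already been collected into an $n \times n$ matrix $K$, might look as follows; the small epsilon guard against a zero conflict sum is an addition not discussed in the paper.

```python
import numpy as np

def tfidf_band_weights(K: np.ndarray) -> np.ndarray:
    """IDF-style band weights from the pairwise conflict matrix K (n x n,
    zero diagonal). A band whose summed conflict with the other n-1 bands
    approaches the worst case n-1 gets a weight near log(1) = 0; bands with
    low conflict get larger weights. Normalised to sum to 1."""
    n = K.shape[0]
    conflict_sums = K.sum(axis=1) - np.diag(K)   # sum over c != b of K_bc
    w = np.log((n - 1) / (conflict_sums + 1e-12))  # unnormalised w_b^1
    return w / w.sum()
```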

3.3. Calculate the Neighborhood Influence Weight and Total Weight

Given the classification result of a sensor $x_b$ for each pixel $u_{ij}$, the weight of the influence of $u_{ij}$'s $\delta \times \delta$ neighborhood block on $u_{ij}$'s classification result is calculated as

$w_b^2 = \dfrac{Num}{\delta^2 - 1}, \quad b = 1, 2, \ldots, n$

where $Num$ is the number of pixels in $u_{ij}$'s $\delta \times \delta$ neighborhood block whose classification results are the same as $u_{ij}$'s. Finally, by synthesizing the two weights $w_b^1$ and $w_b^2$, a total weight $w_b$ (for $u_{ij}$ in sensor $x_b$'s classification results) is obtained:

$w_b = \dfrac{w_b^1 w_b^2}{\sum_{b=1}^{n} w_b^1 w_b^2}$
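A Python sketch combining the two weights for one pixel could look like the following. It assumes each band's single-band class map is available as an integer array and, following Section 4, a default 9 × 9 neighborhood; handling image borders by clipping the block is our choice, as the paper does not specify it.

```python
import numpy as np

def total_weights(w1: np.ndarray, labels: np.ndarray, i: int, j: int,
                  delta: int = 9) -> np.ndarray:
    """Combine the TF-IDF weights w1 (length n) with each band's
    neighbourhood-consistency weight at pixel (i, j). labels has shape
    (n, H, W) and holds every band's single-band class map; w_b^2 is the
    fraction of the delta x delta neighbours that agree with the centre
    pixel. The product w_b^1 * w_b^2 is renormalised to sum to 1."""
    n, H, W = labels.shape
    r = delta // 2
    w2 = np.empty(n)
    for b in range(n):
        block = labels[b, max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
        agree = np.sum(block == labels[b, i, j]) - 1  # exclude the pixel itself
        w2[b] = agree / (delta ** 2 - 1)
    w = w1 * w2
    return w / w.sum()
```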

3.4. Fusion

For every pixel $u_{ij}$, its classification results from the $n$ sensors, $m_b$, $b = 1, \ldots, n$, are used to generate a weighted average BPA with the weights $w_b$, $b = 1, \ldots, n$:

$\bar{m}(A) = \sum_{b=1}^{n} w_b\, m_b(A)$

for each focal element $A$ (the singletons $\{\theta_a\}$ and $\Theta$). Then, Dempster's rule is used to fuse $\bar{m}$ with itself $n-1$ times, $m = \bar{m} \oplus \bar{m} \oplus \cdots \oplus \bar{m}$, where $\oplus$ denotes Dempster's rule for combining two BPAs:

$m(E) = \dfrac{1}{1 - k_{bc}} \sum_{B \cap D = E} m_b(B)\, m_c(D), \quad E \subseteq \Theta,\ E \neq \varnothing; \qquad m(\varnothing) = 0$

After fusion, the final classification result for pixel $u_{ij}$ is obtained.
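A minimal Python sketch of this fusion step, again restricted to singleton focal elements plus $\Theta$, is given below; taking the class with the largest fused singleton mass as the final decision is our reading of the pipeline.

```python
import numpy as np

def dempster_combine(m1: np.ndarray, m2: np.ndarray, h: int) -> np.ndarray:
    """Dempster's rule for two mass vectors [m({θ_1}),...,m({θ_h}), m(Θ)].
    {θ_a} ∩ {θ_a} = {θ_a}, {θ_a} ∩ Θ = {θ_a}, Θ ∩ Θ = Θ; distinct
    singletons intersect to ∅ and contribute to the conflict k."""
    m = np.zeros(h + 1)
    for a in range(h):
        m[a] = m1[a] * m2[a] + m1[a] * m2[h] + m1[h] * m2[a]
    m[h] = m1[h] * m2[h]
    k = 1.0 - m.sum()        # total mass assigned to the empty set
    return m / (1.0 - k)

def fuse_pixel(avg_bpa: np.ndarray, n: int, h: int) -> int:
    """Combine the weighted-average BPA with itself n-1 times and return
    the class index with the largest fused singleton mass."""
    m = avg_bpa.copy()
    for _ in range(n - 1):
        m = dempster_combine(m, avg_bpa, h)
    return int(np.argmax(m[:h]))
```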

4. Results

Two sets of multi-band SAR image data are used to verify the effectiveness of the proposed method in terms of classification accuracy, and the modified average approach [45] is used for quantitative and qualitative comparison. The modified average approach is a classic evidence combination method that, like our proposed method, weights decisions based on the distance of evidence. The intermediate visual effects of the two methods are shown in Figure 3 and Figure 4. For the proposed method, the computational complexity of constructing the BPAs is $O(h)$, that of calculating the conflict coefficients is $O(2^h \times 2^h)$, that of calculating the Jousselme evidence distance is $O(2^h \times 2^h)$, and that of the fusion step is $O(2^h \times 2^h)$. For the modified average approach, the computational complexity of constructing the BPAs is $O(h)$, that of calculating the Jousselme evidence distance is $O(2^h \times 2^h)$, and that of the fusion step is $O(2^h \times 2^h)$. The single-band SAR image classification experiments are implemented in the PyTorch framework with Adam as the optimizer and CrossEntropyLoss as the loss function; the learning rate is 0.001 and the number of training epochs is 100. All decision-level combination experiments are run in MATLAB R2021a. The size of the $\delta \times \delta$ neighborhood block is $9 \times 9$. For a single pixel, the times required to fuse the first dataset using the proposed algorithm and the comparison algorithm are 0.0201 and 0.0193 s, respectively, and the corresponding times for the second dataset are 0.0510 and 0.0384 s.

4.1. Preprocessing

After the multi-band data are collected by the sensors, the image sizes and coordinates are not uniform. Therefore, for joint classification of the multi-band images, preprocessing including mosaicking, registration, and cropping is required. Taking the images acquired in Dongying City as an example, Figure 5, Figure 6 and Figure 7 show the preprocessing steps, performed in ENVI 5.1.

4.2. Experiments with the Dongying City Dataset

To verify the effectiveness of the method, the first set of multi-band SAR images was captured by C-band, L-band, and P-band SAR sensors at the Yellow River estuary in Dongying City, Shandong Province. The C-band SAR data were acquired on 25 November 2019 with a resolution of 0.5 m, the L-band data on 26 November 2019 with a resolution of 3.0 m, and the P-band data on 25 November 2019 with a resolution of 3.0 m. Figure 8 shows the images from these sensors.
The labels of these images are manually annotated and contain three categories: farmland, buildings, and roads. The label map is shown in Figure 9.
At first, we use the single-band SAR image classification method in Section 2.2 to classify the SAR images of the C, L, and P bands, respectively. Then, we use the method mentioned in Section 3 to achieve decision fusion of multi-band SAR images to obtain the final result. The single band classification result and the decision fusion result are obtained, as shown in Figure 10 and Figure 11. The accuracy of single-band and multi-band fusion is shown in Table 1.
It can be seen from Figure 10 and Figure 11 and Table 1 that the classification accuracies of the C, L, and P bands are 66.32%, 64.08%, and 63.17%, respectively, while the classification accuracies of the modified average approach and our decision fusion method are 68.67% and 70.34%. The highest accuracy is achieved by the decision fusion method proposed in this paper: its accuracy is 7.17% higher than the lowest single-band accuracy, 4.02% higher than the highest single-band accuracy, and 1.67% higher than that of the modified average approach. In the C-band classification results, a large amount of farmland is classified as roads, while in the P-band results the roads are not effectively identified. As can be seen from Figure 11a, the modified average approach also incorrectly classifies a large amount of farmland as roads, which significantly reduces its classification accuracy. By contrast, the proposed method classifies the three categories more accurately.

4.3. Experiments with the Baotou City Dataset

The second group of multi-band SAR images was captured by C-band, L-band, P-band, and X-band SAR sensors at the Baotou calibration field in Baotou, Inner Mongolia Autonomous Region, with an acquisition date of 2 November 2019. The resolution of the C-band is 0.5 m, and the resolution of the L, P, and X bands is 3.0 m. Figure 12 shows the images.
Similarly, all images are manually labeled, as shown in Figure 13, with four categories: pits, bare soil, buildings, and roads. For these images, the single-band classification results and the results of the proposed fusion method are shown in Figure 14 and Figure 15 and Table 2.
It can be seen from Figure 14 and Figure 15 and Table 2 that the classification accuracies of the C, L, P, and X bands are 59.85%, 62.39%, 62.07%, and 67.75%, respectively, while the classification accuracies of the modified average approach and our decision fusion method are 68.97% and 69.26%. The highest accuracy is again achieved by the decision fusion method proposed in this paper: its accuracy is 9.41% higher than the lowest single-band accuracy, 1.51% higher than the highest single-band accuracy, and 0.29% higher than that of the modified average approach. Analysis of the single-band classification maps shows that most of the bare soil in the C-band SAR image is identified as buildings, and the P-band classification does not effectively identify the roads. As can be seen from Figure 15, much of the bare soil in (a) is classified as pits, a phenomenon our method greatly alleviates. In general, the proposed method classifies the four categories more accurately.

5. Conclusions

In this paper, a SAR image fusion classification method based on the decision-level combination of multi-band information is proposed. The idea of evidence theory for decision fusion is introduced into the SAR image classification process, and the classification results of sensors in different bands are merged to improve the final accuracy. Belief entropy measures the uncertainty of each pixel's single-band classification result, and the uncertainty value is assigned to the full set to obtain the basic probability assignment function. Based on the idea of TF-IDF from natural language processing and the conflict coefficients, the weights of sensors in different bands are obtained. At the same time, considering the neighborhood classification of each pixel in each band, the total weight of each sensor is obtained and a weighted average BPA is generated. After fusion, the final classification result is obtained. Experimental results on two groups of multi-band SAR images demonstrate the effectiveness of our fusion classification method.

Author Contributions

Conceptualization, J.Z. and J.P.; Methodology, J.Z. and J.P.; Formal analysis, X.Y. and W.J.; Investigation, X.Y.; Project administration, J.P.; Validation, W.J.; Writing—original draft, J.Z.; Writing—review & editing, P.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Application Verification of China High-resolution Airborne Earth Observation System, grant number 30-H30C01-9004-19/21, and the APC was funded by Application Verification of China High-resolution Airborne Earth Observation System.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Acknowledgments

The multi-band SAR data used in this paper, covering the P, L, C, and X bands, were obtained with the support of the Aeronautic Remote Sensing System, one of China's large research infrastructures. We would like to express our thanks to this infrastructure.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sun, X.; Wang, B.; Wang, Z.; Li, H.; Li, H.; Fu, K. Research Progress on Few-Shot Learning for Remote Sensing Image Interpretation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2387–2402.
  2. He, Q.; Sun, X.; Yan, Z.; Fu, K. DABNet: Deformable Contextual and Boundary-Weighted Network for Cloud Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16.
  3. He, Q.; Sun, X.; Yan, Z.; Li, B.; Fu, K. Multi-Object Tracking in Satellite Videos with Graph-Based Multitask Modeling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
  4. Cerentini, A.; Welfer, D.; D’Ornellas, M.C.; Haygert, C.J.P.; Dotto, G.N. Automatic identification of glaucoma using deep learning methods. Stud. Health Technol. Inform. 2017, 245, 318–321.
  5. Tombak, A.; Turkmenli, I.; Aptoula, E.; Kayabol, K. Pixel-Based Classification of SAR Images Using Feature Attribute Profiles. IEEE Geosci. Remote Sens. Lett. 2018, 16, 564–567.
  6. Sun, Z.; Li, J.; Liu, P.; Cao, W.; Yu, T.; Gu, X. SAR Image Classification Using Greedy Hierarchical Learning with Unsupervised Stacked CAEs. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5721–5739.
  7. Wang, J.; Hou, B.; Jiao, L.; Wang, S. POL-SAR Image Classification Based on Modified Stacked Autoencoder Network and Data Distribution. IEEE Trans. Geosci. Remote Sens. 2019, 58, 1678–1695.
  8. Zhao, Z.; Jia, M.; Wang, L. High-Resolution SAR Image Classification via Multiscale Local Fisher Patterns. IEEE Trans. Geosci. Remote Sens. 2020, 59, 10161–10178.
  9. Singha, S.; Johansson, M.; Hughes, N.; Hvidegaard, S.M.; Skourup, H. Arctic Sea Ice Characterization Using Spaceborne Fully Polarimetric L-, C-, and X-Band SAR with Validation by Airborne Measurements. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3715–3734.
  10. Del Frate, F.; Latini, D.; Scappiti, V. On neural networks algorithms for oil spill detection when applied to C- and X-band SAR. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 5249–5251.
  11. Huang, Z.; Dumitru, C.O.; Pan, Z.; Lei, B.; Datcu, M. Classification of Large-Scale High-Resolution SAR Images with Deep Transfer Learning. IEEE Geosci. Remote Sens. Lett. 2020, 18, 107–111.
  12. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Gill, E.; Molinier, M. A new fully convolutional neural network for semantic segmentation of polarimetric SAR imagery in complex land cover ecosystem. ISPRS J. Photogramm. Remote Sens. 2019, 151, 223–236.
  13. Yue, Z.; Gao, F.; Xiong, Q.; Wang, J.; Huang, T.; Yang, E.; Zhou, H. A Novel Semi-Supervised Convolutional Neural Network Method for Synthetic Aperture Radar Image Recognition. Cogn. Comput. 2019, 13, 795–806.
  14. Hong, D.; Yokoya, N.; Xia, G.S.; Chanussot, J.; Zhu, X.X. X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data. ISPRS J. Photogramm. Remote Sens. 2020, 167, 12–23.
  15. Rostami, M.; Kolouri, S.; Eaton, E.; Kim, K. Deep Transfer Learning for Few-Shot SAR Image Classification. Remote Sens. 2019, 11, 1374.
  16. Zhao, Z.; Jiao, L.; Zhao, J.; Gu, J.; Zhao, J. Discriminant deep belief network for high-resolution SAR image classification. Pattern Recognit. 2017, 61, 686–701.
  17. Hou, B.; Kou, H.; Jiao, L. Classification of Polarimetric SAR Images Using Multilayer Autoencoders and Superpixels. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3072–3081.
  18. Zhang, F.; Du, B.; Zhang, L. Scene Classification via a Gradient Boosting Random Convolutional Network Framework. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1793–1802.
  19. Zhou, Y.; Wang, H.; Xu, F.; Jin, Y.-Q. Polarimetric SAR Image Classification Using Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1935–1939.
  20. Shang, R.; He, J.; Wang, J.; Xu, K.; Jiao, L.; Stolkin, R. Dense connection and depthwise separable convolution based CNN for polarimetric SAR image classification. Knowl.-Based Syst. 2020, 194, 105542.
  21. Ni, J.; Zhang, F.; Yin, Q.; Zhou, Y.; Li, H.-C.; Hong, W. Random Neighbor Pixel-Block-Based Deep Recurrent Learning for Polarimetric SAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7557–7569.
  22. Deng, J.; Deng, Y.; Cheong, K.H. Combining conflicting evidence based on Pearson correlation coefficient and weighted graph. Int. J. Intell. Syst. 2021, 36, 7443–7460.
  23. Zhao, J.; Deng, Y. Complex Network Modeling of Evidence Theory. IEEE Trans. Fuzzy Syst. 2020, 29, 3470–3480.
  24. Li, R.; Chen, Z.; Li, H.; Tang, Y. A new distance-based total uncertainty measure in Dempster-Shafer evidence theory. Appl. Intell. 2021, 52, 1209–1237.
  25. Deng, Y. Deng entropy. Chaos Solitons Fractals 2016, 91, 549–553.
  26. Christian, H.; Agus, M.P.; Suhartono, D. Single Document Automatic Text Summarization using Term Frequency-Inverse Document Frequency (TF-IDF). ComTech Comput. Math. Eng. Appl. 2016, 7, 285–294.
  27. Havrlant, L.; Kreinovich, V. A simple probabilistic explanation of term frequency-inverse document frequency (TF-IDF) heuristic (and variations motivated by this explanation). Int. J. Gen. Syst. 2017, 46, 27–36.
  28. Bernal, J.; Kushibar, K.; Asfaw, D.S.; Valverde, S.; Oliver, A.; Martí, R.; Lladó, X. Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: A review. Artif. Intell. Med. 2018, 95, 64–81.
  29. Melinte, D.O.; Vladareanu, L. Facial Expressions Recognition for Human–Robot Interaction Using Deep Convolutional Neural Networks with Rectified Adam Optimizer. Sensors 2020, 20, 2393.
  30. Agrawal, A.; Mittal, N. Using CNN for facial expression recognition: A study of the effects of kernel size and number of filters on accuracy. Vis. Comput. 2019, 36, 405–412.
  31. Liu, S.; Tang, B.; Chen, Q.; Wang, X. Drug-Drug Interaction Extraction via Convolutional Neural Networks. Comput. Math. Methods Med. 2016, 2016, 6918381.
  32. Olthof, A.W.; van Ooijen, P.M.A.; Cornelissen, L.J. Deep Learning-Based Natural Language Processing in Radiology: The Impact of Report Complexity, Disease Prevalence, Dataset Size, and Algorithm Type on Model Performance. J. Med. Syst. 2021, 45, 1–16.
  33. Dong, P.; Zhang, H.; Li, G.Y.; Gaspar, I.S.; Naderializadeh, N. Deep CNN-Based Channel Estimation for mmWave Massive MIMO Systems. IEEE J. Sel. Top. Signal Process. 2019, 13, 989–1000.
  34. Wei, Y.; Zhao, Y.; Lu, C.; Wei, S.; Liu, L.; Zhu, Z.; Yan, S. Cross-Modal Retrieval with CNN Visual Features: A New Baseline. IEEE Trans. Cybern. 2016, 47, 449–460.
  35. Wei, Y.; Xia, W.; Lin, M.; Huang, J.; Ni, B.; Dong, J.; Zhao, Y.; Yan, S. HCP: A Flexible CNN Framework for Multi-Label Image Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 1901–1907.
  36. Zhang, Z.; Wang, H.; Xu, F.; Jin, Y.-Q. Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188.
  37. Tang, H.; Xiao, B.; Li, W.; Wang, G. Pixel convolutional neural network for multi-focus image fusion. Inf. Sci. 2018, 433, 125–141.
  38. Liu, Q.; Xiao, L.; Yang, J.; Wei, Z. CNN-Enhanced Graph Convolutional Network with Pixel- and Superpixel-Level Feature Fusion for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 8657–8671.
  39. Zhao, X.; Tao, R.; Li, W.; Li, H.-C.; Du, Q.; Liao, W.; Philips, W. Joint Classification of Hyperspectral and LiDAR Data Using Hierarchical Random Walk and Deep CNN Architecture. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7355–7370.
  40. Cheng, G.; Yan, B.; Shi, P.; Li, K.; Yao, X.; Guo, L.; Han, J. Prototype-CNN for Few-Shot Object Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–10.
  41. Xu, S.; Hou, Y.; Deng, X.; Ouyang, K.; Zhang, Y.; Zhou, S. Conflict Management for Target Recognition Based on PPT Entropy and Entropy Distance. Energies 2021, 14, 1143.
  42. Xue, Y.; Deng, Y. Interval-valued belief entropies for Dempster–Shafer structures. Soft Comput. 2021, 25, 8063–8071.
  43. Zhou, M.; Zhu, S.-S.; Chen, Y.-W.; Wu, J.; Herrera-Viedma, E. A Generalized Belief Entropy with Nonspecificity and Structural Conflict. IEEE Trans. Syst. Man Cybern. Syst. 2021, 1–14.
  44. Jiang, W.; Peng, J.; Deng, Y. New representation method of evidential conflict. Syst. Eng. Electron. 2010, 32, 562–565.
  45. Yong, D.; WenKang, S.; ZhenFu, Z.; Qi, L. Combining belief functions based on distance of evidence. Decis. Support Syst. 2004, 38, 489–493.
Figure 1. The network structure for single-band SAR classification.
Figure 2. The flowchart of the SAR image classification method based on the decision-level combination of multi-band information.
Figure 3. Intermediate visual effects of the proposed decision fusion method.
Figure 4. Intermediate visual effects of the modified average approach.
Figure 5. (a) Two L-band SAR images. (b) Two P-band SAR images. (c) Three C-band SAR images.
Figure 6. Image mosaicking. (a) L-band SAR images. (b) P-band SAR images. (c) C-band SAR images.
Figure 7. Image registration and cropping. (a) L-band SAR images. (b) P-band SAR images. (c) C-band SAR images.
Figure 8. (a) C-band SAR. (b) L-band SAR. (c) P-band SAR.
Figure 9. Ground truth map of the Dongying City dataset.
Figure 10. (a) C-band SAR classification results. (b) L-band SAR classification results. (c) P-band SAR classification results.
Figure 11. (a) Modified average approach result on the Dongying City dataset. (b) Our method's result on the Dongying City dataset.
Figure 12. (a) C-band SAR. (b) L-band SAR. (c) P-band SAR. (d) X-band SAR.
Figure 13. Ground truth map of the Baotou City dataset.
Figure 14. (a) C-band SAR classification results. (b) L-band SAR classification results. (c) P-band SAR classification results. (d) X-band SAR classification results.
Figure 15. (a) Modified average approach result on the Baotou City dataset. (b) Our method's result on the Baotou City dataset.
Table 1. Classification accuracy.

            C-Band SAR    L-Band SAR    P-Band SAR    Modified Average Approach    Our Method
Accuracy    66.32%        64.08%        63.17%        68.67%                       70.34%

Table 2. Classification accuracy.

            C-Band SAR    L-Band SAR    P-Band SAR    X-Band SAR    Modified Average Approach    Fusion Result
Accuracy    59.85%        62.39%        62.07%        67.75%        68.97%                       69.26%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
