Data fusion provides a 55% performance increase over training on SAR data alone.

Jacob Shermeyer, Daniel Hogan, Jason Brown, Adam Van Etten, Nicholas Weir, Fabio Pacifici, Ronny Haensch, Alexei Bastidas, Scott Soenen, Todd Bacastow, Ryan Lewis
“We find that state-of-the-art segmentation models trained with multiple modalities outperform those trained with only a single type of data. Our experiments indicate that pre-training on optical data and using a transfer learning approach can provide a 55% increase in performance over training on SAR data alone.”

Within the remote sensing domain, a diverse set of acquisition modalities exists, each with its own unique strengths and weaknesses. Yet most of the current literature and open datasets deal only with electro-optical (optical) data for detection and segmentation tasks at high spatial resolutions. Optical data are often the preferred choice for geospatial applications, but require clear skies and little cloud cover to work well. Conversely, Synthetic Aperture Radar (SAR) sensors have the unique capability to penetrate clouds and to collect in all weather conditions, day or night. Consequently, SAR data are particularly valuable for disaster response, when weather and cloud cover can obstruct traditional optical sensors. Despite these advantages, there is little open data available to researchers to explore the effectiveness of SAR for such applications, particularly at very high spatial resolutions, i.e., <1 m Ground Sample Distance (GSD).
To address this problem, we present the open Multi-Sensor All Weather Mapping (MSAW) dataset and challenge, which features two collection modalities (SAR and optical). The dataset and challenge focus on mapping and building footprint extraction using a combination of these data sources. MSAW covers 120 km^2 across multiple overlapping collects and is annotated with over 48,000 unique building footprint labels, enabling the creation and evaluation of mapping algorithms for multi-modal data. We present a baseline and benchmark for building footprint extraction with SAR data and find that state-of-the-art segmentation models pre-trained on optical data and then trained on SAR (F1 score of 0.21) outperform those trained on SAR data alone (F1 score of 0.135).
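As a quick sanity check, the headline 55% figure follows directly from the two reported F1 scores; a minimal sketch of the arithmetic:

```python
# F1 scores reported for the MSAW baseline experiments.
f1_sar_only = 0.135   # model trained on SAR data alone
f1_transfer = 0.21    # model pre-trained on optical data, then trained on SAR

# Relative improvement of the transfer-learning model over the SAR-only baseline.
improvement = (f1_transfer - f1_sar_only) / f1_sar_only
print(f"{improvement:.1%}")  # -> 55.6%, which the paper rounds to ~55%
```

Note that the 55% is a relative improvement in F1 score, not an absolute gain in accuracy.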

Comments: To appear in CVPR EarthVision Proceedings, 10 pages, 7 figures
Subjects: Image and Video Processing (eess.IV); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2004.06500 [eess.IV]
(or arXiv:2004.06500v1 [eess.IV] for this version)
