Wildfire Detection From Multisensor Satellite Imagery Using Deep Semantic Segmentation

Abstract:

Deriving the extent of areas affected by wildfires is critical to fire management, protection of the population, damage assessment, and a better understanding of the consequences of fires. In the past two decades, several algorithms utilizing data from Earth observation satellites have been developed to detect fire-affected areas. However, most of these methods require the establishment of complex functional relationships between numerous remote sensing data parameters. More recently, deep learning has entered this application domain, with the advantage of detecting patterns in complex data automatically by learning from examples. In this article, a workflow for the detection of fire-affected areas from satellite imagery acquired in the visible, infrared, and microwave domains is described. Using this workflow, the fire detection potential of four sources of freely available satellite imagery was investigated: the C-SAR instrument on board Sentinel-1, the MultiSpectral Instrument on board Sentinel-2, the Sea and Land Surface Temperature Radiometer on board Sentinel-3, and the MODIS instrument on board Terra and Aqua. For each of them, a single-input convolutional neural network based on the well-known U-Net architecture was trained on a newly created dataset. The performance of the resulting four single-instrument models was evaluated in the presence of clouds and in clear conditions. In addition, the potential of combining the predictions of pairs of single-instrument models was investigated. The results show that the fusion of Sentinel-2 and Sentinel-3 data provides the best detection rate in clear conditions, whereas the fusion of Sentinel-1 and Sentinel-2 data shows a significant benefit in cloudy weather.
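
To illustrate the pairwise combination step mentioned above, the following Python listing gives a minimal sketch of a late-fusion scheme, assuming that each single-instrument U-Net outputs a per-pixel fire probability map and that the two maps are combined by simple averaging followed by thresholding. The function name, array shapes, and the averaging rule are illustrative assumptions; the fusion strategy actually used in the article is described in the body of the paper, not in the abstract.

import numpy as np

def fuse_predictions(prob_a: np.ndarray, prob_b: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Late fusion of two single-instrument fire-probability maps.

    prob_a, prob_b: per-pixel fire probabilities in [0, 1] produced by two
    independently trained U-Net models (e.g., one for Sentinel-1 and one
    for Sentinel-2). Returns a binary fire / no-fire mask.
    """
    # Average the per-pixel probabilities from the two models,
    # then apply a fixed decision threshold to obtain the final mask.
    fused = (prob_a + prob_b) / 2.0
    return fused >= threshold

# Hypothetical usage with two 256 x 256 probability maps (placeholder data):
p_s1 = np.random.rand(256, 256)   # stand-in for a Sentinel-1 model output
p_s2 = np.random.rand(256, 256)   # stand-in for a Sentinel-2 model output
fire_mask = fuse_predictions(p_s1, p_s2)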