Updated on Sep 7, 2023| By Qunying Huang
Remote sensing imagery and pattern recognition algorithms, such as support vector machines and deep learning models, are increasingly used to map flood extent. However, current models rely on massive human-annotated training datasets to train a flood classification model, an approach termed "supervised" learning. The need for real-time flood extent mapping and the high cost of data annotation point to a need for "unsupervised" and "weakly supervised" feature extraction methods that require as little human effort as possible. Accordingly, our lab has committed to developing unsupervised and weakly supervised frameworks for real-time flood extent mapping and damage assessment, building on the state of the art in remote sensing and deep learning. Through a series of research activities and scientific exploration, we are steadily approaching this goal.
RA#1. Development of a patch similarity convolutional neural network (PSNet) for flood extent mapping with a small number of training samples
Traditional remote sensing (RS) models rely heavily on manual feature engineering for image analysis. For instance, conventional machine learning models for urban flood mapping require handcrafted features associated with floodwaters (e.g., strong absorption in the near-infrared band). Unfortunately, such features may not capture the underlying patterns of floods due to inconsistent illumination conditions, strong image background heterogeneity, and errors from geometric and radiometric rectification. In response, we have been exploring cutting-edge deep learning and computer vision algorithms to automate RS image processing for disaster damage assessment. We developed a patch similarity convolutional neural network (PSNet) that uses bi-temporal (pre- and post-flood) satellite multispectral imagery to delineate flooded areas in heterogeneous coastal cities (Figure 1). PSNet is an object-based deep learning model that requires no human-designed features and enables automatic object-based feature extraction for flooded-object identification. It paves the way to accurately mapping floods over densely populated urban areas with only a small number of training samples, labeled in less than one hour by a geospatial expert.
Figure 1. Architectures, case studies, and performance of the proposed PSNet models
For details about this work, please refer to our paper:
- Peng B., Meng Z., Huang Q., Wang C., 2019. Patch Similarity Convolutional Neural Network for Urban Flood Extent Mapping Using Bi-Temporal Satellite Multispectral Imagery. Remote Sensing, 11(21), 2492. DOI: 10.3390/rs11212492.
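To illustrate the bi-temporal patch-comparison idea behind PSNet, the minimal sketch below flags co-located pre-/post-flood patches whose similarity drops after the event. It uses a simple cosine similarity on raw pixel values, whereas PSNet learns this comparison with convolutional layers; the patch size and threshold here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def patch_similarity(pre_img, post_img, size=8, threshold=0.9):
    """Slide a non-overlapping window over co-registered pre- and
    post-flood images and flag patches whose bi-temporal similarity
    falls below `threshold` (i.e., the patch changed).

    Illustrative only: PSNet replaces this hand-coded cosine
    similarity with learned convolutional feature comparison.
    """
    h, w = pre_img.shape[:2]
    flood_mask = np.zeros((h // size, w // size), dtype=bool)
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            a = pre_img[i:i + size, j:j + size].ravel().astype(float)
            b = post_img[i:i + size, j:j + size].ravel().astype(float)
            # cosine similarity between the flattened patch vectors
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            sim = (a @ b) / denom if denom > 0 else 1.0
            flood_mask[i // size, j // size] = sim < threshold
    return flood_mask
```

With synthetic 16x16 single-band images, only the patch altered between the two dates is flagged as changed.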
RA#2. Development of a Self-supervised Learning Framework for Urban Flood Mapping Without Laborious Training Data
Next, we enhanced the PSNet models with a self-supervised learning framework for object-based urban flood mapping without laborious training data collection (Figure 2). This framework was evaluated on two flood events in the United States: Hurricane Harvey (2017) in Houston, Texas, and Hurricane Florence (2018) in Lumberton, North Carolina. Results show that the method provides strong performance and robust generalizability, especially the model with both spectral filtering and spatial filtering. The F1 and IoU (Intersection over Union, also known as the Jaccard index) scores for the Florence flood were lower than those for the Harvey flood. These differences were due mainly to different degrees of non-flooded change between the bi-temporal data over the two study areas. The Florence final flood maps contain a higher rate of false positives (FPs) than the Harvey final flood maps. Potential causes include: (1) many FPs in the Florence final flood maps consist of pixels with spectra similar to those of floodwaters, such as patches that are wet but not flooded, leading to misclassification during spectral filtering; and (2) quite a few FPs in the Florence final flood maps are mixed and connected with true positives (TPs) and were therefore not removed by spatial filtering.
Figure 2. The proposed self-supervised learning framework and model performance tested on Hurricanes Harvey and Florence
In summary, this work maps heterogeneous urban floods at high resolution through self-supervised object-based image analysis, enabling real-time flood map production without time-consuming annotation of training data. It was published in IEEE JSTARS:
- Peng B., Huang Q., Vongkusolkit J., Gao S., Wright D., Fang Z. and Qiang Y., 2021. Urban Flood Mapping with Bi-temporal Multispectral Imagery via a Self-supervised Learning Framework. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. doi: 10.1109/JSTARS.2020.3047677.
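The spectral and spatial filtering steps discussed above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the published implementation: the spectral filter keeps candidate change pixels whose normalized difference water index (NDWI) suggests water (floodwater absorbs strongly in the NIR band), and the spatial filter discards isolated detections via a neighbor count; the index choice, cutoff, and neighborhood rule are all assumptions for illustration.

```python
import numpy as np

def spectral_filter(candidate_mask, green, nir, ndwi_min=0.0):
    """Keep only candidate pixels whose spectra resemble open water,
    using NDWI = (G - NIR) / (G + NIR); NDWI > 0 is a common water
    heuristic. The index and cutoff in the published framework may differ.
    """
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return candidate_mask & (ndwi > ndwi_min)

def spatial_filter(mask, min_neighbors=2):
    """Discard isolated detections: keep a pixel only if at least
    `min_neighbors` of its 8 neighbors are also flagged. A simple
    stand-in for connected-component style spatial filtering.
    """
    padded = np.pad(mask.astype(int), 1)
    h, w = mask.shape
    # count flagged pixels in each 8-neighborhood via shifted slices
    neighbors = sum(padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
    return mask & (neighbors >= min_neighbors)
```

In the framework, filters like these prune false positives from the initial change candidates before the final flood map is produced, which is exactly where the wet-but-not-flooded and TP-connected FP cases described above can slip through.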
RA#3. Near Real-Time Flood Mapping with Weakly Supervised Machine Learning
Moving forward, we developed approaches for real-time, pixel-level flood extent mapping through weakly supervised learning techniques that require no human labeling effort. Specifically, we developed a weakly supervised pixel-wise flood mapping method that leverages multi-temporal remote sensing imagery and image processing techniques (e.g., thresholding, k-means clustering, and edge detection) to create weakly labeled data for training a bi-temporal U-Net model for flood detection. This eliminates the time-consuming and labor-intensive task of human annotation.
The research outcomes were published in Remote Sensing:
- Vongkusolkit, J., Peng, B., Wu, M., Huang, Q. and Andresen, C.G., 2023. Near Real-Time Flood Mapping with Weakly Supervised Machine Learning. Remote Sensing, 15(13), p.3263.
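As a rough illustration of how weak labels can be generated without human annotation, the sketch below clusters the bi-temporal change magnitude into two classes with a tiny hand-rolled k-means and treats the higher-change cluster as candidate flood pixels. It is a simplified stand-in for the thresholding / k-means / edge-detection pipeline described above; in the actual method, such weak labels would then train the bi-temporal U-Net.

```python
import numpy as np

def weak_flood_labels(pre_img, post_img, iters=10, seed=0):
    """Produce weak pixel labels by 2-class k-means on the bi-temporal
    change magnitude; the cluster with the larger centroid is treated
    as 'changed' (candidate flood). Illustrative sketch only.
    """
    change = np.abs(post_img.astype(float) - pre_img.astype(float)).ravel()
    rng = np.random.default_rng(seed)
    # initialize the two centroids from observed change values
    centers = rng.choice(change, size=2, replace=False)
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        assign = np.abs(change[:, None] - centers[None, :]).argmin(axis=1)
        # update each centroid to the mean of its members
        for k in range(2):
            if (assign == k).any():
                centers[k] = change[assign == k].mean()
    flood = assign == centers.argmax()
    return flood.reshape(pre_img.shape)
```

On a synthetic pair where only the top half of the scene changes, the high-change cluster recovers exactly that region, giving noisy-but-free training labels of the kind a weakly supervised model can consume.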
The road is long, and there is much more work to do. Next, we will focus on self-supervised remote sensing image representation learning informed by geospatial principles for object-based image analysis, and we will explore pixel-wise remote sensing image segmentation models to achieve better accuracy and efficiency in real-time flood mapping.