Deep learning for single-image super-resolution in remote sensing: A review


Remote sensing image super-resolution has become a critical task for enhancing spatial detail in downstream applications such as land cover mapping, environmental monitoring, and precision agriculture. However, the unique characteristics of remote sensing data, including limited training samples, complex sensor degradations, and spectral diversity, pose significant challenges to conventional super-resolution pipelines. In this paper, we present a comprehensive review of recent advances in remote sensing single-image super-resolution (RSSISR), spanning supervised, self-supervised, unsupervised, training-free, generative adversarial network-based, diffusion-based, and physically/statistically guided frameworks. We introduce a structured taxonomy that organizes these approaches and analyzes their strengths, limitations, and application domains. In addition, we identify key challenges in the field, including (1) data scarcity, (2) generalization gaps, (3) high computational cost, (4) lack of task-oriented evaluation, and (5) limited physical integration in degradation-process modelling. Building on these challenges, we outline promising research directions, including (1) data augmentation and enhancement, (2) generative priors and large foundation models, (3) learning paradigms less reliant on supervision, (4) application-aware frameworks, (5) physics-guided modelling, and (6) multi-task learning. Case studies across real-world tasks such as crop monitoring, spatio-temporal image upscaling, flood mapping, and object detection further illustrate the practical utility of different RSSISR strategies. This review not only summarizes the current landscape but also provides a forward-looking perspective on the future of RSSISR.