Cloud cover experiences rapid fluctuations, significantly impacting the irradiance reaching the ground and causing frequent variations in photovoltaic power output. Accurate detection of thin and fragmented clouds is crucial for reliable photovoltaic power generation forecasting. In this paper, we introduce a novel cloud detection method, termed Adaptive Laplacian Coordination Enhanced Cross-Feature U-Net (ALCU-Net). This method augments the traditional U-Net architecture with three innovative components: an Adaptive Feature Coordination (AFC) module, an Adaptive Laplacian Cross-Feature U-Net with a Multi-Grained Laplacian-Enhanced (MLE) feature module, and a Criss-Cross Feature Fused Detection (CCFE) module. The AFC module enhances spatial coherence and bridges semantic gaps across multi-channel images. The Adaptive Laplacian Cross-Feature U-Net integrates features from adjacent hierarchical levels, using the MLE module to refine cloud characteristics and edge details over time. The CCFE module, embedded in the U-Net decoder, leverages criss-cross features to improve detection accuracy. Experimental evaluations show that ALCU-Net consistently outperforms existing cloud detection methods, demonstrating superior accuracy in identifying both thick and thin clouds and in mapping fragmented cloud patches across various environments, including oceans, polar regions, and complex ocean-land mixtures.
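The abstract does not specify the internal design of the Multi-Grained Laplacian-Enhanced (MLE) module, but the general idea of Laplacian-based edge enhancement on a feature map can be sketched as follows. The 3x3 kernel, the enhancement weight `alpha`, and the single-channel input are illustrative assumptions, not details from the paper.

```python
# Laplacian-based edge enhancement on a 2D feature map (pure-Python sketch).
# Kernel, padding, and `alpha` are assumptions for illustration only.

LAPLACIAN = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]

def laplacian(feature):
    """Apply a 3x3 Laplacian kernel (zero padding) to a 2D feature map."""
    h, w = len(feature), len(feature[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        acc += LAPLACIAN[di + 1][dj + 1] * feature[ni][nj]
            out[i][j] = acc
    return out

def enhance(feature, alpha=0.5):
    """Sharpen edges by subtracting a fraction of the Laplacian response."""
    lap = laplacian(feature)
    return [[feature[i][j] - alpha * lap[i][j]
             for j in range(len(feature[0]))]
            for i in range(len(feature))]
```

On a flat (cloud-free) region the Laplacian response is zero and the feature map is left unchanged, while a step edge such as a cloud boundary produces a strong response, which is why this operator is a natural choice for refining thin-cloud edge details.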
Cloud detection in remote sensing images is a crucial task in applications such as meteorological disaster prediction and earth resource exploration, which require accurate cloud identification. This work proposes a cloud detection model based on the Cloud Detection Network (CDNet), incorporating a fused channel and spatial attention mechanism. Depthwise separable convolution is adopted to obtain a lightweight network model and to improve the efficiency of training and detection. In addition, the Convolutional Block Attention Module (CBAM) is integrated into the network so that the cloud detection model is trained with attention features in both the channel and spatial dimensions. Experiments were conducted on Landsat 8 imagery to validate the proposed improved CDNet. Averaged over all test images, the overall accuracy (OA), mean pixel accuracy (mPA), Kappa coefficient, and mean intersection over union (MIoU) of the improved CDNet were 96.38%, 81.18%, 96.05%, and 84.69%, respectively, surpassing both the original CDNet and DeepLabV3+. These results show that the improved CDNet is effective and robust for cloud detection in remote sensing images.
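The lightweighting claim rests on a standard parameter-count argument: a depthwise separable convolution factors a k x k convolution into a per-channel depthwise step plus a 1x1 pointwise step. The arithmetic can be checked directly; the layer sizes below are illustrative, not taken from the CDNet paper.

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1x1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 convolution, 256 -> 256 channels.
std = standard_conv_params(3, 256, 256)        # 589,824 parameters
sep = depthwise_separable_params(3, 256, 256)  # 2,304 + 65,536 = 67,840
ratio = std / sep                              # roughly 8.7x fewer parameters
```

For large channel counts the saving approaches a factor of k*k, which is why this substitution is a common way to shrink a segmentation network without changing its receptive field.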
The Day/Night Band (DNB) low-light visible sensor aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite measures visible radiances from the earth and atmosphere (solar/lunar reflection and natural/anthropogenic nighttime light emissions) during both day and night. In particular, it achieves unprecedented nighttime low-light imaging thanks to its accurate radiometric calibration and excellent spatio-temporal resolution. Building on these characteristics of the DNB, a multi-channel threshold algorithm combining the DNB with other VIIRS channels was proposed to monitor nighttime fog/low stratus. By progressively separating the underlying surface (land, vegetation, water bodies), snow, and medium/high clouds, the algorithm ultimately extracts the fog/low stratus region. Its feasibility was then verified on a typical case of heavy fog/low stratus in China in 2012. The experimental results demonstrate that the algorithm's output agrees closely with ground-based measurements.
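A stepwise multi-channel threshold scheme of this kind can be sketched as a cascade of per-pixel tests, each test removing one class before the next is applied. The channel names loosely mirror VIIRS bands, but every threshold value and spectral test below is a placeholder for illustration; the paper's actual thresholds are not given in the abstract.

```python
# Sketch of a stepwise multi-channel threshold classifier in the spirit of the
# algorithm described above. All thresholds are hypothetical placeholders.

def classify_pixel(p):
    """p: dict of radiances / brightness temperatures (BT, in K) for one pixel."""
    # Step 1 - underlying surface: dark in the DNB at night and warm at 11 um.
    if p["dnb_radiance"] < 1e-9 and p["bt_11um"] > 285.0:
        return "clear surface"
    # Step 2 - snow: bright in the DNB (lunar reflection) but cold and
    # spectrally flat between the 11 um and 3.7 um windows.
    if (p["dnb_radiance"] > 5e-9 and p["bt_11um"] < 270.0
            and abs(p["bt_11um"] - p["bt_3_7um"]) < 1.0):
        return "snow"
    # Step 3 - medium/high cloud: cold 11 um brightness temperature.
    if p["bt_11um"] < 260.0:
        return "medium/high cloud"
    # Step 4 - fog/low stratus: warm cloud top with a positive nighttime
    # 11 um minus 3.7 um brightness-temperature difference.
    if p["bt_11um"] - p["bt_3_7um"] > 2.0:
        return "fog/low stratus"
    return "clear surface"
```

The ordering matters: snow and high clouds must be screened out first, because both can mimic the brightness or temperature signatures that the final fog/low-stratus test relies on.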