The spread of invasive plant species is a planetary-scale problem with significant environmental, economic, and social consequences. Operational monitoring of invasive plant species over large areas can only be achieved with the help of remote sensing. Invasions of woody plants can have wider ecological consequences than those of herbaceous plants. However, identifying the species composition of trees is difficult, since they show only slight variations in their phenological cycles, grow densely, and are often diffusely distributed in communities. Great hopes for the identification of woody plants are placed on hyperspectral cameras. Despite significant progress in identifying woody plants using remote sensing, one serious problem remains: the non-reproducibility of results over time. It is currently unclear whether this is an insurmountable technological barrier. The answer can be sought through laboratory experiments using modern hyperspectral cameras. This chapter presents methods for selecting regions of interest, informative spectral channels, and vegetation indices, and it reports the results of random forest classification of invasive woody plants from time series of spectral characteristics. The results show that species successfully identified with a given algorithm on one calendar date can also be identified with the same algorithm on other calendar dates.
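Vegetation indices of the kind the chapter selects among are typically simple band ratios. As one familiar example (not necessarily among the indices chosen in the study), the normalized difference vegetation index (NDVI) can be sketched as:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Ranges from -1 to 1; dense green vegetation typically scores high
    because healthy leaves reflect NIR strongly and absorb red light."""
    denom = nir + red
    return (nir - red) / denom if denom else 0.0
```

Time series of such indices, one value per channel pair and acquisition date, form the feature vectors fed to the random forest classifier.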
Machine learning, and deep learning (DL) in particular, has made a huge impact on many fields of science. In the last decade, advanced deep learning methods have been developed and applied extensively to Earth data science problems. Applications in classification and parameter retrieval are making a difference: the methods are very accurate, can handle large amounts of data, and deal with spatial and temporal data structures efficiently. Nevertheless, several important challenges still need to be addressed. Current standard deep architectures struggle to learn useful Earth feature representations in an unsupervised way, struggle with long-range dependencies (so that distant driving processes in space and time are not captured), and cannot cope with non-Euclidean spaces efficiently. DL models also remain opaque and resistant to interpretation and, like other data-driven techniques, they do not necessarily learn physically meaningful and, more importantly, causal relations. Advances are needed to cope with arbitrary signal structures and data relations, physical plausibility, and interpretability. This chapter reviews current approaches and discusses ways forward for developing new DL methods for the Earth sciences in all these directions.
In recent years, with the rapid development of deep learning, deep models have become mainstream in many research branches and application fields of machine learning. Built-up area detection, one of the most important targets in remote sensing data, is an important practical application of machine learning. To demonstrate the capabilities of deep models, a large dataset was collected and labeled from GaoFen-2 remote sensing satellite data. In the following sections, several deep learning methods are discussed and evaluated on the dataset, including DSCNN, LMB-CNN, and an FCN model based on LMB-CNN. The experimental results show that the deep models bring substantial performance improvements over traditional algorithms.
The proliferation of high-resolution remote sensing sensors, along with recent advancements in artificial intelligence, has attracted much attention from the remote sensing community toward developing technologies for rapid monitoring of environmental and climate events. Such capability is crucial for the timely detection and management of frequently occurring natural disasters such as landslides and floods, to minimize the loss of human lives as well as economic damage. In existing rapid flood mapping systems, little or no attention has been given to mapping inundation depths, which are extremely important for rescue operations as well as mitigation approaches. Therefore, this study proposes a methodology to rapidly map inundation extents along with inundation depths from high-resolution satellite images using convolutional neural networks (CNNs).
The networks were trained and tested using ∼1300 km2 of SPOT-6 and SPOT-7 satellite images (1.5 m resolution) acquired during several flood events that occurred in Japan between 2015 and 2019. Several neural network architectures were tested in the study; among them, only two networks were selected for discussion in this paper. The results revealed that the networks achieved very competitive accuracy in flood extent mapping as well as depth estimation. Moreover, the proposed inundation depth estimation algorithm was found to estimate inundation depths relatively well, at a speed of 0.4 s/km2 (±0.05) for the tested ∼76 km2 area.
Ensuring access to safe, affordable, and non-polluting energy sources is an important goal for sustainable development. One such source is residual biomass, which can be used for the production of biogas through anaerobic digestion. To aid the adoption of green energy production, this chapter develops and implements an open-data, remote sensing methodology for the automatic detection of crops, enabling the corresponding estimation of their residual biomass potential through a fuzzy formulation. The chapter explores the use of neural networks together with RGB images taken by the Sentinel-2 satellite over different regions of Colombia. As a result, convolutional neural networks achieved a validation accuracy of 97.7% in crop detection, generating reliable knowledge for estimating the crops' fuzzy potential for residual biomass production.
This chapter introduces a novel approach to tree detection by fusing LiDAR (Light Detection and Ranging) and RGB imagery, leveraging Ordered Weighted Averaging (OWA) aggregation operators to improve image fusion. It focuses on enhancing tree detection and classification by combining LiDAR's structural data with the spectral detail of RGB images. The fusion methodology aims to optimize information retrieval, employing image segmentation and advanced classification techniques. The effectiveness of the method is demonstrated on the PNOA dataset, highlighting its potential for supporting forest management.
To address the high computational cost of correlation-based matching methods, a fast matching algorithm is presented in this paper. The new algorithm adopts a coarse-to-fine strategy: a circle template is first used over the matching area to complete a coarse matching pass and narrow down the candidate windows, and the whole template is then used to decide the final correct window. The algorithm has been applied successfully to matching remote sensing images. Simulation results show that the new algorithm not only matches images correctly but also greatly decreases the computational cost: compared with the correlation-based matching method, its computational cost is only 10%.
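The coarse-to-fine strategy can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: a circular subset of template pixels screens every candidate window with normalized cross-correlation, and only the top-scoring candidates are re-scored with the full template. All function names and the `keep` parameter are assumptions.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def circle_mask(h, w):
    """Pixel offsets inside the inscribed circle of an h x w template."""
    cy, cx, r = (h - 1) / 2, (w - 1) / 2, min(h, w) / 2
    return [(i, j) for i in range(h) for j in range(w)
            if (i - cy) ** 2 + (j - cx) ** 2 <= r * r]

def match(image, template, keep=3):
    """Coarse pass with the sparse circle template, fine pass with the
    whole template on the `keep` best candidates; returns (row, col)."""
    h, w = len(template), len(template[0])
    H, W = len(image), len(image[0])
    mask = circle_mask(h, w)
    tc = [template[i][j] for i, j in mask]
    coarse = []
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = [image[y + i][x + j] for i, j in mask]
            coarse.append((ncc(win, tc), y, x))
    coarse.sort(reverse=True)            # best coarse scores first
    tf = [v for row in template for v in row]
    def full_score(cand):
        _, y, x = cand
        win = [image[y + i][x + j] for i in range(h) for j in range(w)]
        return ncc(win, tf)
    _, y, x = max(coarse[:keep], key=full_score)
    return y, x
```

The saving comes from scoring only the masked pixels in the coarse pass and running the full-template correlation on a handful of windows instead of all of them.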
A fusion algorithm for noisy remote sensing images based on steerable filters is presented in this paper. A quadrature pair consisting of a steerable filter and its Hilbert transform is used to analyze the dominant orientation and local energy in the images. With this algorithm, the original remote sensing images can be denoised and their features then fused successfully. Simulation results indicate that the new remote sensing image fusion algorithm performs well in denoising and in preserving and enhancing edge and texture information, producing a good overall visual result.
The purpose of this work is to study the land cover and land types of Nanji Island. An IKONOS scene was used to classify the land types, including village, farmland, shrubbery, meadow, reservoir, sands, and so on. Several models based on fractal theory were then built to analyze the land types. The condition of the land cover and land use was analyzed from three aspects: (1) effects of patch area; (2) fractal characters of the land types; (3) tests of the differences in fractal character between each pair of land types. The results show that the fractal dimension D of meadow and shrubbery is higher while that of farmland and village is smaller, and that the fractal characters are determined by the degree of interference from human activities.
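A common way to obtain a per-patch fractal dimension D in landscape ecology is the perimeter-area relation; assuming that standard formulation (the study's exact models are not reproduced here), it can be sketched as:

```python
import math

def patch_fractal_dimension(perimeter, area):
    """Perimeter-area fractal dimension: D = 2 * ln(P / 4) / ln(A).
    D near 1 indicates a simple, regular patch shape (typical of
    human-managed land such as farmland or villages); D approaching 2
    indicates a highly convoluted boundary (typical of natural cover
    such as meadow or shrubbery)."""
    return 2.0 * math.log(perimeter / 4.0) / math.log(area)
```

For a square patch of side s, P = 4s and A = s², so D = 1 exactly, matching the intuition that strongly human-influenced land types have the smaller D values reported above.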
Ants, bees, and other social insects deposit pheromone (a type of chemical) to communicate between members of their community. Pheromone that causes clumping or clustering behavior in a species and brings individuals into closer proximity is called aggregation pheromone. This article presents a novel method for change detection in remotely sensed images based on the aggregation behavior of ants. Change detection is viewed as a segmentation problem in which changed and unchanged regions are segmented out via clustering. An ant is placed at each data point, representing a pixel, and the ants are allowed to move in the search space to find points with higher pheromone density. The movement of an ant is governed by the amount of pheromone deposited at different points of the search space: the more pheromone deposited, the stronger the aggregation of ants. This leads to the formation of homogeneous groups of data. Evaluation on two multitemporal remote sensing images establishes the effectiveness of the proposed algorithm over an existing thresholding algorithm.
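The ant-based clustering itself is more elaborate than can be shown here, but the underlying framing of change detection as a two-cluster segmentation of difference magnitudes can be illustrated with a simple 1-D k-means (this stand-in is not the authors' algorithm):

```python
def two_class_kmeans(values, iters=20):
    """Cluster scalar pixel differences between two acquisition dates
    into 'unchanged' (label 0, low magnitude) and 'changed' (label 1,
    high magnitude) groups with plain 1-D k-means."""
    c_low, c_high = min(values), max(values)   # initialize the two centers
    for _ in range(iters):
        low = [v for v in values if abs(v - c_low) <= abs(v - c_high)]
        high = [v for v in values if abs(v - c_low) > abs(v - c_high)]
        if low:
            c_low = sum(low) / len(low)        # update 'unchanged' center
        if high:
            c_high = sum(high) / len(high)     # update 'changed' center
    return [0 if abs(v - c_low) <= abs(v - c_high) else 1 for v in values]
```

In the paper, the grouping emerges instead from ants aggregating around high-pheromone regions of the feature space rather than from explicit center updates.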
This article proposes a wavelet-neuro-fuzzy (WNF) system for classifying the land covers of remote sensing images. The classifier incorporates a new architecture for neuro-fuzzy (NF) systems that expands the input space of conventional NF (CNF) systems. The performance of the new NF classifier is compared with the CNF system and a conventional multi-layer perceptron (MLP) using the original multispectral features of remote sensing images. Experimental studies demonstrated the superiority of the NF classifier, and incorporating wavelet features further improved its performance. In particular, with the biorthogonal3.3 wavelet, the proposed NF classifier outperformed all others. Results are evaluated both qualitatively and quantitatively.
Discrimination between hazardous materials in the environment and ambient constituents is a fundamental problem in environmental sensing. The ubiquity of naturally occurring bacteria, plant pollen, fungi, and other airborne materials makes the task of sensing for biological warfare (BW) agents particularly challenging. The spectroscopic properties of chemical warfare (CW) agents in the long wavelength infrared (LWIR) region are important physical properties that have been successfully exploited for environmental sensing. However, in the case of BW agents, the LWIR region affords less distinction between hazardous and ambient materials. Recent studies of the THz spectroscopic properties of biological agent simulants, particularly bacterial spores, have yielded interesting and potentially useful spectral signatures of these materials. It is anticipated that, with the advent of new THz sources and detectors, a novel environmental sensor could be designed that exploits the distinctive spectral properties of these biological materials. We present data on the molecular spectroscopy of several CW agents and simulants, as well as THz spectroscopy of the BW agent simulants studied to date, and discuss the prospects for detection probabilities through the application of sensor system modeling.
The goal of multi-source remote sensing image fusion is to obtain a high-resolution multispectral image that combines the spectral characteristics of the low-resolution data with the spatial resolution of the panchromatic image. In this paper, methods using the nonsubsampled contourlet transform (NSCT) to fuse low-resolution multispectral images with a more highly resolved panchromatic image are described. All input images are first decomposed with the NSCT. The decomposition coefficients at each scale are then combined using a substitution or comparison rule, and the fused image is obtained by applying the inverse NSCT to the fused coefficients. The spatial and spectral effects are evaluated by qualitative and quantitative measures, and the results are compared with those of the existing discrete wavelet transform (DWT). The results show that the new method better preserves the spatial resolution of the panchromatic image and the spectral characteristics of the multispectral images, and they offer guidance on controlling how much spatial detail or spectral information should be retained.
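The substitution and comparison rules can be illustrated independently of the NSCT itself. Assuming subband coefficients stored as flat lists (an illustrative sketch, not the paper's implementation), the combination step looks like:

```python
def fuse_coefficients(ms_low, pan_details, ms_details):
    """Generic multiscale fusion rules:
    - substitution: keep the multispectral low-pass (approximation)
      subband unchanged, preserving spectral content;
    - comparison: for each detail (high-pass) coefficient, keep whichever
      source has the larger magnitude, injecting panchromatic detail.
    `pan_details` and `ms_details` are lists of subbands, one flat
    coefficient list per scale/direction."""
    fused_low = list(ms_low)                        # substitution rule
    fused_details = [
        [p if abs(p) >= abs(m) else m for p, m in zip(pb, mb)]
        for pb, mb in zip(pan_details, ms_details)  # comparison rule
    ]
    return fused_low, fused_details
```

The inverse transform of these fused coefficients then yields the pan-sharpened image; choosing how many scales use each rule is one way to control the spatial/spectral trade-off mentioned above.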
An innovative passive standoff system for the detection of chemical/biological agents is described. The spectral, temporal and spatial resolution of the data collected are all adjustable in real time, making it possible to keep the tradeoff between the sensor operating parameters at optimum at all times. The instrument contains no macro-scale moving parts and is therefore an excellent candidate for the development of a robust, compact, lightweight and low-power-consumption sensor. The design can also serve as a basis for a wide variety of spectral instruments operating in the visible, NIR, MWIR, and LWIR to be used for surveillance, process control, and biomedical applications.
To address the problem of sources and sinks of atmospheric CO2, measurements are needed on a global scale. Satellite instruments show promise but typically measure the total column. Since sources and sinks at the surface represent a small perturbation to the total column, a precision of better than 1% is required; no species has ever been measured from space at this level. Over the last three years, we have developed a small instrument based upon a Fabry-Perot interferometer that is highly sensitive to atmospheric CO2. We have tested this instrument in a ground-based configuration and from aircraft platforms simulating operation from a satellite. The instrument is characterized by a high signal-to-noise ratio, fast response, and great specificity. We have performed simulations and instrument designs for systems to detect H2O, CO, 13CO2, CH4, CH2O, NH3, SO2, N2O, NO2, and O3. The high resolution, high throughput, and small size of this instrument make it adaptable to many other atmospheric species. We present results and discuss ways this instrument can be used for ground-, aircraft-, or space-based surveillance and for the detection of pollutants, toxics, and industrial effluents in a variety of scenarios, including battlefields, industrial monitoring, and pollution transport.
Heavy loads of aerosols in the air have considerable health effects on individuals who suffer from chronic breathing difficulties. The problem is more acute in the Middle East, where dust storms in winter and spring travel from the neighboring deserts into densely populated areas. Discriminating between dust types and associating them with their sources can assist in assessing the expected health effects. A method is introduced to characterize the properties of dense dust clouds with passive IR spectral measurements. First, we introduce a model based on the solution of the appropriate radiative transfer equations; model predictions are presented and discussed. Actual field measurements of silicone-oil aerosol clouds with an IR spectro-radiometer are then analyzed and compared with the theoretical model predictions. Silicone-oil aerosol clouds were used instead of dust in our research because they are composed of a single compound in the form of spherical droplets, and their release is easily controlled and repeatable. Both the theoretical model and the experimental results clearly show that discrimination between different dust types using IR spectral measurements is feasible. The dependence of this technique on measurement conditions, its limitations, and the future work needed for its practical application are discussed.
We have developed a hyperspectral deconvolution algorithm that sharpens the spectral dimension in addition to the more usual across-track and along-track dimensions. Using an individual three-dimensional model for each pixel's point spread function, the algorithm iteratively applies maximum likelihood criteria to reveal previously hidden features in the spatial and spectral dimensions. Of necessity, our solution is adaptive to unreported across-track and along-track vibrations with amplitudes smaller than the ground sampling distance. We sense and correct these vibrations using a combination of maximum likelihood deconvolution and gradient-descent registration that maximizes statistical correlations over many bands. Test panels in real hyperspectral imagery show significant improvement when locations are corrected. Tests on simulated imagery show that the precision of relative corrected positions improves by about a factor of two.
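The algorithm described above is three-dimensional and uses a per-pixel point spread function; as a minimal 1-D illustration of iterative maximum-likelihood deconvolution, the classic Richardson-Lucy update (estimate scaled by the back-projected ratio of observed to re-blurred data) can be sketched as:

```python
def richardson_lucy_1d(observed, psf, iters=50):
    """1-D Richardson-Lucy iteration, the standard maximum-likelihood
    deconvolution under Poisson noise:
        est <- est * conv(observed / conv(est, psf), flipped_psf)."""
    pad = len(psf) // 2

    def conv(sig, ker):
        # Same-size convolution with zero padding at the boundaries.
        out = []
        for i in range(len(sig)):
            s = 0.0
            for j in range(len(ker)):
                idx = i + j - pad
                if 0 <= idx < len(sig):
                    s += sig[idx] * ker[j]
            out.append(s)
        return out

    est = [1.0] * len(observed)          # flat initial estimate
    flipped = psf[::-1]
    for _ in range(iters):
        blurred = conv(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        corr = conv(ratio, flipped)      # back-project the ratio
        est = [e * c for e, c in zip(est, corr)]
    return est
```

Run on a blurred spike, the iteration progressively re-concentrates the energy at the spike's true location, which is the effect exploited in all three dimensions of the hyperspectral cube.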
The Day/Night Band (DNB) low-light visible sensor, mounted on the Suomi National Polar-orbiting Partnership (S-NPP) satellite, can measure visible radiances from the earth and atmosphere (solar/lunar reflection, natural/anthropogenic nighttime light emissions) during both day and night. In particular, it has achieved unprecedented nighttime low-light-level imaging thanks to its accurate radiometric calibration and excellent spatio-temporal resolution. Based on these characteristics of the DNB, a multi-channel threshold algorithm combining the DNB with other VIIRS channels is proposed to monitor nighttime fog/low stratus. Through a gradual separation of the underlying surface (land, vegetation, water bodies), snow, and medium/high clouds, the fog/low stratus region can ultimately be extracted by the algorithm. The feasibility of the algorithm was then verified on a typical case of heavy fog/low stratus in China in 2012. The experimental results demonstrate that the outcomes of the algorithm closely match the ground-measured results.
Machine learning (ML) approaches, as part of the artificial intelligence domain, are becoming increasingly important in multispectral and hyperspectral remote sensing analysis. This is driven by a significant increase in the quality and quantity of remote sensing sensors, which produce data of higher spatial and spectral resolution. With higher resolutions, more information can be extracted from the data, which requires more complex and sophisticated techniques than traditional approaches to data analysis. Machine learning approaches are able to analyse remote sensing (RS) data more effectively and give higher classification accuracy. This review discusses and demonstrates applications of machine learning techniques in the processing of multispectral and hyperspectral remote sensing data, and gives recommendations highlighting the way forward for the use of machine learning approaches in optical remote sensing data analysis.
UAV (Unmanned Aerial Vehicle) remote sensing technology has been increasingly used to support spatial data acquisition and the verification of land management projects. Currently, in the field of land consolidation, UAVs are mainly applied to acquire high-resolution images, which does not support the whole process of developing projects, including site selection, survey, design, implementation, supervision, and acceptance. Since the project areas are usually small and dispersed, this paper discusses a new technical development of UAV remote sensing to support UAV applications in land consolidation. The new development includes spatial data acquisition and rapid processing methods as well as a procedure for large-scale production. Several key technologies are also studied. To obtain high-resolution images, low-altitude aerial route planning is studied, taking the regional shape and flight control into account. To improve image processing efficiency, a CUDA (Compute Unified Device Architecture) parallel algorithm is used to make best use of the GPU. In addition, a 3D point cloud is produced by a dense matching algorithm and used to build a 3D landscape of the land consolidation area, within which land consolidation plans can be made in a real 3D environment: cut-fill earthwork volumes can be calculated and engineering quantities and costs estimated. Using the dense matching point cloud and the orthophoto maps, several types of natural elements, such as farmland, water, roads, and villages, are extracted by image classification and recognition, and the current image is compared with past images to find land use changes, monitoring the progress and quality of the project. Finally, experiments are performed, and the results demonstrate that the proposed method can significantly improve the efficiency of land consolidation projects.
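The cut-fill earthwork calculation mentioned above reduces, in its simplest form, to differencing the current surface (e.g. a DSM derived from the dense matching point cloud) against the design surface cell by cell (a sketch under that assumption, not the paper's implementation):

```python
def cut_fill_volumes(current, design, cell_area):
    """Cut and fill volumes from two gridded elevation surfaces.
    `current` and `design` are equal-shaped 2D grids of elevations;
    `cell_area` is the ground area of one grid cell. Where the design
    surface is higher, material must be filled in; where it is lower,
    material must be cut away. Returns (cut, fill) volumes."""
    cut = fill = 0.0
    for row_c, row_d in zip(current, design):
        for zc, zd in zip(row_c, row_d):
            d = zd - zc
            if d > 0:
                fill += d * cell_area
            else:
                cut += -d * cell_area
    return cut, fill
```

Comparing the two totals also indicates how far a plan is from earthwork balance, a common objective when drafting land consolidation designs in the 3D environment.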