Fast and robust vehicle recognition from remote sensing images (RSIs) has valuable applications in economic analysis, emergency management, and traffic surveillance. Vehicle density and location data are also vital for intelligent transportation systems. However, accurate and robust vehicle recognition in RSIs remains difficult. Conventional approaches depend on handcrafted features extracted from sliding windows at multiple scales; recently, convolutional neural networks have been applied to object recognition in aerial images with promising results. This study presents an automatic vehicle detection and classification technique using an imperialist competitive algorithm with a deep convolutional neural network (VDC-ICADCNN). The primary purpose of the VDC-ICADCNN technique is to process RSIs and apply deep learning (DL) models for the detection and identification of vehicles. The technique involves three main stages. First, the EfficientNetB7 model is employed as the feature extractor. Next, the hyperparameters of the EfficientNetB7 model are fine-tuned using the imperialist competitive algorithm (ICA), which improves classification performance. Finally, a variational autoencoder (VAE) model is used for vehicle recognition. Extensive experiments were carried out to establish the superior performance of the VDC-ICADCNN technique, which achieved accuracy values of 96.77% and 98.59% on the Vehicle Detection in Aerial Imagery (VEDAI) and Potsdam datasets, respectively, outperforming other DL approaches.
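The abstract does not spell out how ICA tunes the hyperparameters. For intuition, the following is a minimal, generic ICA sketch (imperialistic competition between empires is omitted for brevity); the two-parameter objective is a made-up stand-in for a validation loss, not the paper's actual training objective.

```python
import random

def ica_minimize(objective, bounds, n_countries=40, n_imperialists=4,
                 iterations=100, beta=1.6, revolution_rate=0.1, seed=0):
    """Minimal Imperialist Competitive Algorithm (ICA) sketch.

    objective: maps a parameter vector to a cost (lower is better).
    bounds:    list of (low, high) limits, one pair per dimension.
    """
    rng = random.Random(seed)
    dim = len(bounds)

    def sample():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    # The fittest initial "countries" become imperialists; the rest colonies.
    countries = sorted((sample() for _ in range(n_countries)), key=objective)
    imperialists = countries[:n_imperialists]
    empires = {i: [] for i in range(n_imperialists)}
    for k, col in enumerate(countries[n_imperialists:]):
        empires[k % n_imperialists].append(col)   # round-robin assignment

    for _ in range(iterations):
        for i in range(n_imperialists):
            for j in range(len(empires[i])):
                col, imp = empires[i][j], imperialists[i]
                # Assimilation: move the colony toward its imperialist.
                new = [c + beta * rng.random() * (p - c)
                       for c, p in zip(col, imp)]
                # Revolution: occasionally resample one dimension.
                if rng.random() < revolution_rate:
                    d = rng.randrange(dim)
                    new[d] = rng.uniform(*bounds[d])
                # Exchange: a colony that beats its imperialist replaces it.
                if objective(new) < objective(imp):
                    imperialists[i], empires[i][j] = new, imp
                else:
                    empires[i][j] = new
    return min(imperialists, key=objective)

# Stand-in objective: a hypothetical "validation loss" over two
# hyperparameters (learning rate, dropout); in the paper the costly real
# objective would involve training the EfficientNetB7 classifier.
best = ica_minimize(lambda x: (x[0] - 0.01) ** 2 + (x[1] - 0.3) ** 2,
                    bounds=[(1e-4, 0.1), (0.0, 0.9)])
```

In practice each objective evaluation is expensive (a training run), so the population and iteration counts would be far smaller than in this toy setting.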
Advances in Unmanned Aerial Vehicles (UAVs), also known as drones, hold tremendous promise for the wide-ranging applications of the Internet of Things (IoT). UAV image classification using deep learning (DL) modernizes data collection, analysis, and decision-making across a variety of sectors. IoT devices collect information in real time, while remote sensing captures data from afar without direct contact. UAVs equipped with sensors provide high-quality images for classification tasks, and DL techniques, especially convolutional neural networks (CNNs), analyze these data streams and extract complex features for accurate classification of objects or environmental features. This synergy enables applications such as urban planning and precision agriculture, and fosters smarter disaster response, decision support systems, and efficient resource management. This paper introduces a novel Pyramid Channel-based Feature Attention Network with Ensemble Learning-based UAV Image Classification (PCFAN-ELUAVIC) technique for an assisted remote sensing environment. The PCFAN-ELUAVIC technique begins with contrast enhancement of the UAV images using the CLAHE technique. Feature vectors are then derived using the PCFAN model, while hyperparameter tuning is performed with the vortex search algorithm (VSA). For image classification, the PCFAN-ELUAVIC technique uses an ensemble of three classifiers: long short-term memory (LSTM), graph convolutional networks (GCNs), and a Hermite neural network (HNN). An extensive range of experiments was carried out to demonstrate the improved detection results of the PCFAN-ELUAVIC system, and the experimental values confirmed its enhanced performance compared with other techniques.
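The VSA step can be pictured with a minimal sketch: vortex search samples candidates from a Gaussian around the current best point and gradually tightens the sampling radius (here a simple geometric decay stands in for the inverse incomplete gamma schedule of the original algorithm). The two-parameter objective is a hypothetical stand-in for a validation error, not the paper's actual tuning objective.

```python
import random

def vortex_search(objective, bounds, n_candidates=30, iterations=80,
                  shrink=0.95, seed=0):
    """Minimal Vortex Search sketch: Gaussian sampling around the current
    best point ("vortex centre") with a shrinking radius."""
    rng = random.Random(seed)
    lows, highs = zip(*bounds)
    center = [(lo + hi) / 2 for lo, hi in bounds]    # initial vortex centre
    radius = [(hi - lo) / 2 for lo, hi in bounds]
    best, best_cost = center[:], objective(center)
    for _ in range(iterations):
        for _ in range(n_candidates):
            cand = [min(max(rng.gauss(c, r), lo), hi)  # clip to the bounds
                    for c, r, lo, hi in zip(center, radius, lows, highs)]
            cost = objective(cand)
            if cost < best_cost:
                best, best_cost = cand, cost
        center = best                 # recentre the vortex on the best point
        radius = [r * shrink for r in radius]         # tighten the vortex
    return best, best_cost

# Stand-in objective: a hypothetical validation error over two hyperparameters.
best, cost = vortex_search(lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2,
                           bounds=[(-5, 5), (-5, 5)])
```

The shrinking radius trades early exploration for late exploitation, which is why VSA is attractive for low-budget hyperparameter searches.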
Manual field-based collection of population census data is slow and expensive, especially in refugee management situations where frequent censuses are necessary. This study explores approaches to estimating the population of Rohingya migrants using remote sensing and machine learning. Two approaches are examined: (i) a data-driven approach and (ii) a satellite image-driven approach. A total of 11 machine learning models, including an Artificial Neural Network (ANN), are applied to both approaches. The results show that when the surface population distribution is unknown, a smaller satellite image grid cell length is required. For the data-driven approach, the ANN model placed fourth, the Linear Regression model performed the worst, and the Gradient Boosting model performed the best. For the satellite image-driven approach, the ANN model performed the best, while the AdaBoost model performed the worst. The Gradient Boosting model can therefore be considered suitable for both approaches.
Crop pests and diseases are among the main factors affecting food production and security. Accurate detection, and precise management to limit the spread of crop diseases in time and space, are important scientific issues in crop disease control. On the one hand, advances in remote sensing technology provide higher-quality data (high spectral/spatial resolution) for crop disease monitoring; on the other, deep learning and machine learning algorithms offer novel insights for crop disease detection. This paper presents a comprehensive review of remote sensing platforms (e.g. ground-based, low-altitude, and spaceborne scales) and popular sensors (e.g. RGB, multispectral, and hyperspectral sensors). Conventional machine learning and deep learning algorithms applied to crop disease monitoring are also reviewed. Finally, self-supervised learning is introduced to motivate future research on early crop disease detection, a challenging problem in this area. This paper summarizes recent crop disease monitoring algorithms and offers a novel perspective on early crop disease monitoring.
Colombo, the commercial capital of Sri Lanka, has a high population and a high density of buildings and vehicles. It is therefore vital to observe the spatial distribution of vegetation types and changes in the green cover of Colombo city in order to identify priority areas for improving green cover. This study estimated the changes in green cover in the Colombo Municipal Area (CMA) and its postal zones over 10 years using remote sensing techniques. Green cover was categorized into trees, shrubs, playgrounds (PG)/grasslands, wetlands, rooftop vegetation, and Ipomoea cover. Total green cover increased from 22.36% to 26.17% over the period 2012–2022, although all vegetation types except PG/grasslands declined over the decade. Green cover decreased in five of the 15 postal zones in the CMA during the past 10 years: Colombo 04, 05, 06, 09, and 13. The highest green cover was recorded in Colombo 07, and the lowest in Colombo 11. The findings indicate that the CMA is moving towards greening and sustainability even as built-up areas and the urban population expand.
The growing urban population has led to an increase in worn-out urban textures. Although various policies have been proposed to organize these textures, past instruments such as the detailed and comprehensive plans of Iranian cities have failed to achieve desirable results. This study therefore sought to provide up-to-date information on urban worn-out textures using a new method and to identify areas prone to becoming dysfunctional. For this purpose, Landsat satellite images taken on December 8, 2019, together with Landsat 8 images from December 2020, were used. Two methods were applied in the ENVI environment: (1) the emissivity command; and (2) the calculation of the Normalized Difference Vegetation Index (NDVI) (Esfandiari et al., 2022), which represents the reflection of solar energy from the earth's surface and indicates vegetation conditions. Land surface temperature (LST), which in remote sensing refers to the heat measured by a radiometer in an instantaneous field of view (Pirnazar et al., 2018), was used to calculate the surface temperature of the city, and the Driving forces-Pressure-State-Impact-Response (DPSIR) model was used to express the consequences of worn-out textures. In addition, the best-worst method (BWM), one of the newest and most effective multi-criteria decision-making techniques for weighting factors and decision criteria and prioritizing decisions (Sadeghi Darvaze et al., 2019), was used to rank the preferred solutions for organizing the worn-out textures.
Finally, a geographic information system (GIS), a set of hardware, software, geographic data, and human resources used to collect, analyze, and apply geographic information (Mirzapour, 2019), was used to perform the numerical calculations and display the maps. The land surface temperature results obtained with the two emissivity commands (Figs. 2 and 3; Tables 9 and 10) showed that the surface of Zanjan city can be divided into five classes of worn-out conditions. The first class, with the lowest recorded temperature, was the most worn-out part of the city and corresponded to its initial cores, covering 9.6% of the city's area; the second class, comprising 10% of the city's worn-out area, ranked second. The BWM results point to citizen participation, regeneration with an economy-oriented approach, and accurate identification of urban worn-out textures and exposed areas as priorities for organizing the urban worn-out textures of Zanjan.
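The NDVI and single-channel LST computations used above can be sketched for one pixel. The TIRS band-10 calibration constants are the standard values published in Landsat 8 product metadata; the pixel reflectance, radiance, and emissivity values in the example are hypothetical.

```python
import math

# Landsat 8 TIRS band-10 calibration constants (from product metadata).
K1, K2 = 774.8853, 1321.0789          # W/(m^2 sr um) and Kelvin
WAVELENGTH = 10.895e-6                # band-10 effective wavelength, metres
RHO = 1.438e-2                        # h*c/sigma (Planck), metre-Kelvin

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def lst_celsius(radiance, emissivity):
    """Single-channel land surface temperature estimate.

    radiance:   top-of-atmosphere spectral radiance of the thermal band.
    emissivity: surface emissivity (often derived from NDVI thresholds).
    """
    bt = K2 / math.log(K1 / radiance + 1.0)           # brightness temp, K
    lst = bt / (1.0 + (WAVELENGTH * bt / RHO) * math.log(emissivity))
    return lst - 273.15                               # Kelvin -> Celsius

# Hypothetical vegetated pixel: high NIR, low red reflectance.
v = ndvi(nir=0.45, red=0.08)
t = lst_celsius(radiance=9.5, emissivity=0.986)
```

Worn-out urban cores typically combine low NDVI with high LST, which is what makes the temperature classification in the study a usable proxy.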
Novel View Synthesis (NVS) is an important task for 3D interpretation of remote sensing scenes and also benefits vicinagearth security by enhancing situational awareness. Recently, NVS methods based on Neural Radiance Fields (NeRFs) have attracted increasing attention for their self-supervised training and highly photo-realistic synthesis results. However, synthesizing novel view images of remote sensing scenes remains challenging, given the complexity of land cover and the sparsity of the input multi-view images. In this paper, we propose a novel NVS method named FReSNeRF, which combines Image-Based Rendering (IBR) and NeRF to achieve high-quality results in remote sensing scenes with sparse input. We address the degradation problem by adopting a sampling-space annealing method, and we introduce a depth smoothness constraint based on segmentation masks to regularize the scene geometry. Experiments on multiple scenes show the superiority of the proposed FReSNeRF over other methods.
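The abstract does not give the exact form of the mask-based depth smoothness term. A common formulation penalizes depth differences between adjacent pixels only when both lie in the same segment, so depth may change sharply across segment boundaries but stays smooth within each land-cover region. A minimal sketch with toy depth and mask values (all numbers hypothetical):

```python
def masked_depth_smoothness(depth, mask):
    """Segment-aware smoothness: average absolute depth difference between
    horizontally and vertically adjacent pixels in the same segment."""
    h, w = len(depth), len(depth[0])
    loss, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):        # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and mask[y][x] == mask[ny][nx]:
                    loss += abs(depth[y][x] - depth[ny][nx])
                    count += 1
    return loss / max(count, 1)

# Toy scene: two segments; depth jumps only across the segment edge, so the
# within-segment penalty stays small.
depth = [[1.0, 1.0, 5.0],
         [1.0, 1.1, 5.0]]
mask  = [[0, 0, 1],
         [0, 0, 1]]
loss = masked_depth_smoothness(depth, mask)
```

In a NeRF pipeline a term like this would be added to the photometric loss, with depths rendered per ray rather than read from an array.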
The proliferation of high-resolution remote sensing sensors, together with recent advances in artificial intelligence, has drawn the remote sensing community's attention to technologies for rapid monitoring of environmental and climate events. This potential is crucial for the timely detection and management of frequently occurring natural disasters such as landslides and floods, minimizing the loss of human life as well as economic damage. Existing rapid flood mapping systems pay little or no attention to mapping inundation depths, which are extremely important for rescue operations as well as mitigation. This study therefore proposes a methodology to rapidly map inundation extents, along with inundation depths, from high-resolution satellite images using convolutional neural networks (CNNs).
The networks were trained and tested on ∼1300 km² of SPOT-6 and SPOT-7 satellite images (1.5 m resolution) acquired during several flood events in Japan between 2015 and 2019. Several neural network architectures were tested, of which two were selected for discussion in this proceeding. The results revealed that the networks achieved very competitive accuracy in flood extent mapping as well as depth estimation. Moreover, the proposed inundation depth estimation algorithm estimated inundation depths relatively well, at a speed of 0.4 s/km² (±0.05) over the ∼76 km² test area.
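The proceeding does not describe the CNN depth algorithm itself. For intuition about what "inundation depth estimation" means, the following is a classical DEM-based baseline (not the paper's method): assume a flat water surface whose elevation is the highest terrain value on the flooded boundary, then subtract the terrain elevation. All grid values are hypothetical.

```python
def inundation_depth(dem, water_mask):
    """DEM-based baseline for inundation depth: flat water surface set to the
    highest terrain elevation along the flooded boundary, minus the DEM."""
    h, w = len(dem), len(dem[0])
    boundary = []
    for y in range(h):
        for x in range(w):
            if not water_mask[y][x]:
                continue
            # A flooded cell touching a dry cell (or the grid edge) is boundary.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not water_mask[ny][nx]:
                    boundary.append(dem[y][x])
                    break
    wse = max(boundary)                       # flat water-surface elevation
    return [[max(wse - dem[y][x], 0.0) if water_mask[y][x] else 0.0
             for x in range(w)] for y in range(h)]

# Toy transect: terrain dips in the middle; flooded cells span the dip.
dem = [[5.0, 3.0, 2.0, 3.0, 5.0]]
mask = [[False, True, True, True, False]]
depths = inundation_depth(dem, mask)
```

A CNN approach like the one in the study learns this mapping directly from imagery, avoiding the baseline's dependence on an up-to-date DEM.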
Machine learning (ML) approaches, as part of the artificial intelligence domain, are becoming increasingly important in multispectral and hyperspectral remote sensing analysis. This is driven by a significant increase in the quality and quantity of remote sensing sensors producing data of higher spatial and spectral resolution. With higher resolutions, more information can be extracted from the data, which requires more complex and sophisticated techniques than traditional data analysis approaches. Machine learning can analyse remote sensing (RS) data more effectively and achieve higher classification accuracy. This review discusses and demonstrates applications of machine learning techniques in the processing of multispectral and hyperspectral remote sensing data, and offers recommendations to highlight the way forward for machine learning in optical remote sensing data analysis.