Written by leaders in the field of remote sensing information processing, this book covers the frontiers of remote sensing, with emphasis on effective algorithms for signal/image processing and pattern recognition with remote sensing data. Sensor and data fusion issues, SAR images, hyperspectral images, and related special topics are also examined. Techniques making use of neural networks, wavelet transforms, and knowledge-based systems are emphasized. A special set of three chapters is devoted to seismic analysis and discrimination.
In summary, the book provides an authoritative treatment of major topics in remote sensing information processing and defines new frontiers for these areas.
https://doi.org/10.1142/9789812796752_fmatter
https://doi.org/10.1142/9789812796752_0001
Following a brief review of developments in optical and radar sensor characteristics, the implications of improved sensor properties for thematic mapping from optical data are discussed. Particular attention is given to the likely poor estimates of second order class statistics obtained with practical training set sizes; methods used to overcome that problem are highlighted. Block diagonalisation of the class covariance matrix is suggested as a viable process to adopt in practice.
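As a minimal NumPy sketch of the block-diagonalisation idea (function name and block partition are hypothetical, not from the chapter): estimate the full class covariance, then retain only the diagonal blocks formed by groups of adjacent bands, so that fewer parameters must be estimated from a small training set.

```python
import numpy as np

def block_diagonal_covariance(samples, block_sizes):
    # Full sample covariance, then zero every entry outside the
    # diagonal blocks defined by the given groups of adjacent bands.
    # Fewer free parameters gives more stable estimates from the
    # small training sets typical of thematic mapping.
    cov = np.cov(samples, rowvar=False)
    mask = np.zeros_like(cov, dtype=bool)
    start = 0
    for size in block_sizes:
        mask[start:start + size, start:start + size] = True
        start += size
    return np.where(mask, cov, 0.0)

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 6))          # 20 training pixels, 6 bands
cov_bd = block_diagonal_covariance(train, [3, 3])
```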
Noting the relative maturity of the current generation of sensors, the chapter concludes by commenting on the desirable characteristics of an information system for operational remote sensing purposes, noting especially the need to focus on procedures that manipulate knowledge, rather than data.
https://doi.org/10.1142/9789812796752_0002
A unique feature of remote sensing problems is that a significant amount of data is available, from which the desired information must be extracted. Transform methods offer effective procedures for deriving the most significant information for further processing or human interpretation and for extracting important features for pattern classification.
In this chapter a survey of the use of major transforms in remote sensing is presented. These transforms contribute significantly to data reduction and compression and to pattern recognition, as features derived from orthogonal or related transforms tend to be very effective for classification. After the introduction, we examine the PCA and discriminant analysis transforms, empirical orthogonal functions (EOF), component analysis and an independent component analysis (ICA) algorithm, followed by concluding remarks.
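A minimal PCA feature-extraction sketch along these lines, assuming pixels arranged as an (n_samples, n_bands) array (all names and values are hypothetical illustrations, not the chapter's code):

```python
import numpy as np

def pca_transform(pixels, n_components):
    # Project band vectors onto the leading principal components
    # of the sample covariance: an orthogonal transform that
    # concentrates variance in the first few features.
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # sort descending
    basis = eigvecs[:, order[:n_components]]
    return centered @ basis

rng = np.random.default_rng(1)
pixels = rng.normal(size=(500, 8))           # 500 pixels, 8 bands
features = pca_transform(pixels, 3)          # reduced to 3 features
```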
https://doi.org/10.1142/9789812796752_0003
Automatic content extraction, classification and content-based retrieval are highly desired goals in intelligent remote sensing databases. Pixel level processing has been the common choice for both academic and commercial systems. We extend the modeling of remotely sensed imagery to three levels: Pixel level, region level and scene level. Pixel level features are generated using unsupervised clustering of spectral values, texture features and ancillary data like digital elevation models. Region level features include shape information and statistics of pixel level feature values. Scene level features include statistics and spatial relationships of regions. This chapter describes our work on developing a probabilistic visual grammar to reduce the gap between low-level features and high-level user semantics, and to support complex query scenarios that consist of many regions with different feature characteristics. The visual grammar includes automatic identification of region prototypes and modeling of their spatial relationships. The system learns the prototype regions in an image collection using unsupervised clustering. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds visual grammar models which can also be updated using user relevance feedback. A Bayesian framework is used to automatically classify scenes based on these models. We demonstrate our system with query scenarios that cannot be expressed by traditional region or scene level approaches but where the visual grammar provides accurate classifications and effective retrieval.
https://doi.org/10.1142/9789812796752_0004
Shape analysis has not been considered in remote sensing as extensively as in other pattern recognition applications. However, shapes such as those of geometric patterns in agriculture and irregular boundaries of lakes can be extracted from remotely sensed imagery even at relatively coarse spatial resolutions. This chapter presents a procedure for efficiently retrieving and representing the shape of objects in remotely sensed imagery using supervised classification, object recognition, and parametric contour tracing. Using the piecewise linear polygonal approximation technique, shape similarity can be compared by means of a computationally efficient metric. Our study was conducted on a time series of radiometrically and geometrically rectified Landsat Multispectral Scanner (MSS) and Thematic Mapper (TM) images covering scenes containing lakes in the Nebraska Sand Hills region. The results show the effectiveness of our approach in detecting changes in lake shapes, which is potentially useful for specific applications such as studying lake-shape response to short- or long-term climatic variation and drought monitoring.
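Piecewise linear polygonal approximation of a contour can be illustrated with a standard Ramer-Douglas-Peucker sketch; the chapter's exact algorithm and similarity metric may differ, and all names and test data here are hypothetical:

```python
import numpy as np

def rdp(points, tol):
    # Ramer-Douglas-Peucker: keep the point farthest from the chord
    # joining the endpoints if its distance exceeds tol, and recurse
    # on the two halves; otherwise the chord alone represents the span.
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # perpendicular distance of each point to the chord
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > tol:
        left = rdp(points[:idx + 1], tol)
        right = rdp(points[idx:], tol)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# Jittered L-shaped contour: the near-collinear points are dropped.
contour = np.array([[0, 0], [1, 0.02], [2, -0.01], [3, 0.02],
                    [4, 0], [4, 2], [0, 2]], dtype=float)
simplified = rdp(contour, 0.1)
```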
https://doi.org/10.1142/9789812796752_0005
This chapter reviews recent advances in polarization orientation angle estimation and its applications using polarimetric synthetic aperture radar (SAR) data. Polarization orientation shifts are induced by topographic slopes in the azimuth direction. Orientation angles can be readily extracted from polarimetric SAR data, but difficulties are frequently encountered in the estimation. These difficulties will be discussed, and the effect of radar wavelength and calibration on the estimation will be investigated. SIR-C and JPL AIRSAR polarimetric SAR data are used for illustration.
https://doi.org/10.1142/9789812796752_0006
This paper introduces an approach to the classification and interpretation of SAR data using complementary polarimetric and interferometric information. Strictly polarimetric and polarimetric interferometric data are first analyzed and classified separately. An unsupervised polarimetric segmentation, based on multivariate Wishart statistics, is applied to one of the separate polarimetric datasets. The use of pertinent polarimetric indicators permits an interpretation of the polarimetric properties of each resulting cluster and a classification of the observed scene into three canonical scattering types. The interpretation and segmentation of an optimized interferometric coherency set leads to the discrimination of different natural media that cannot be achieved with polarimetric data only. Finally, each type of scattering mechanism is processed through an unsupervised statistical interferometric classification procedure merging results from the separate studies. The resulting classes show an enhanced description and understanding of the scattering from the different natural media composing the observed scene.
https://doi.org/10.1142/9789812796752_0007
A two-dimensional wavelet transform is a very efficient bandpass filter, which can be used to separate various scales of processes and show their relative phase/location. A feature tracking procedure based on the wavelet transform has been developed and used for image processing at NASA Goddard Space Flight Center for the past several years. The two-dimensional Gaussian wavelet has been applied to satellite images for coastal monitoring (e.g. oil spills) and for ice edge and ice floe tracking from synthetic aperture radar (SAR), ocean color, and infrared (IR) data. Although SAR is valuable for feature tracking because of its fine spatial resolution, its less-than-daily coverage may be a serious problem for some ocean applications. A similar wavelet analysis technique for scatterometer and radiometer data has been developed to obtain daily sea ice drift information in the Arctic region. This technique provides improved spatial coverage and better temporal resolution than techniques utilizing data from SAR. From low earth orbits, ocean surface feature tracking analyses have always been based on data from a single orbital sensor collected over the revisit interval of a single satellite. For the first time, ocean surface layer currents have been derived by wavelet feature tracking of ocean color data from different sensors on different satellites. Ocean color data can be used as a tracer for measuring ocean surface layer currents, because the ocean color signal comprises information from a greater water depth than surface signatures. The results of feature tracking from these multiple sensors demonstrate that wavelet analysis of satellite data is a very useful tool for image processing.
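As a rough illustration of this style of scale separation, a difference-of-Gaussians filter approximates a two-dimensional Gaussian-derived wavelet bandpass response (this is a generic sketch, not the authors' exact procedure; all names and parameters are hypothetical):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(image, sigma_fine, sigma_coarse):
    # Difference of Gaussians: subtracting a coarse smoothing from a
    # fine one suppresses both very small-scale noise and large-scale
    # trends, isolating features at intermediate scales, similar in
    # spirit to a 2-D Gaussian (Mexican-hat) wavelet response.
    return gaussian_filter(image, sigma_fine) - gaussian_filter(image, sigma_coarse)

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))           # stand-in for a satellite scene
band = dog_bandpass(img, 1.0, 4.0)        # features between the two scales
```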
https://doi.org/10.1142/9789812796752_0008
Wavelet filters have been used for despeckling SAR images with a variety of methods. This chapter discusses many of those previous methods and provides a framework for better understanding them and their relationships to each other. We also present two techniques that use spatial correlation in different ways to perform the filtering. These techniques and others like them show promise for despeckling SAR images quickly and accurately under a variety of imaging scenarios.
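A minimal stand-in for such filters, using a one-level 2-D Haar transform with soft thresholding of the detail subbands (not any of the chapter's specific methods; the image here is synthetic and even-sized by assumption):

```python
import numpy as np

def haar2_despeckle(img, thresh):
    # One-level 2-D Haar decomposition, soft-threshold the three
    # detail subbands, invert. Assumes even image dimensions.
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)
    # inverse transform: columns first, then rows
    a = np.empty((ll.shape[0], img.shape[1]))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty(img.shape, dtype=float)
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

rng = np.random.default_rng(7)
img = np.ones((16, 16)) + 0.1 * rng.normal(size=(16, 16))  # noisy flat patch
denoised = haar2_despeckle(img, thresh=0.05)
```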
https://doi.org/10.1142/9789812796752_0009
In this work, a wavelet representation of multispectral images is presented. The representation is based on a multiresolution extension of the First Fundamental Form that accesses gradient information of vector-valued images. With the extension, multiscale edge information of multispectral images is extracted. Moreover, a wavelet representation is obtained that, after inverse transformation, accumulates all edge information in a single greylevel image. In this work, a redundant wavelet representation is presented using dyadic wavelet frames. It is then extended towards orthogonal wavelet bases using the Discrete Wavelet Transformation (DWT). The representation is shown to be a natural framework for image fusion. An algorithm is presented for fusion and merging of multispectral images. The concept is successfully applied to the problem of multispectral and hyperspectral image merging.
https://doi.org/10.1142/9789812796752_0010
Automated algorithms are being developed to assist U.S. Navy operational weather assessment and forecasting. Using supervised machine learning techniques, patterns and relationships are discovered in various satellite and meteorological data from which relevant classification and parameter estimation algorithms can be developed. Three applications of these techniques are discussed. A Geostationary Operational Environmental Satellite (GOES) image cloud type classifier is developed using expert-labeled data, specific image characteristic features and a 1-nearest neighbor classification routine. A tropical cyclone intensity estimation algorithm is developed using brightness temperatures and derived features of Special Sensor Microwave Imager (SSM/I) data, best-track intensity (ground truth) and a K-nearest neighbor routine. Knowledge Discovery from Databases (KDD) methodology is employed to develop algorithms to estimate cloud ceiling height at remote locations. Developed over a two-year period, the database consists of hourly location-specific records of satellite data, numerical weather prediction data, and ground truth (METAR) cloud ceiling height observations. Data mining techniques are applied to produce cloud ceiling height estimation algorithms. All of the algorithms mentioned above exist at various stages of development and each has shown promising potential for operational use.
https://doi.org/10.1142/9789812796752_0011
Microwave remote sensing instruments such as radiometers and scatterometers have proven themselves effective in a variety of Earth Science studies. The resolution of these sensors, while adequate for many applications, is a limiting factor to their application in other studies. As a result, there is a strong interest in developing ground processing methods which can enhance the spatial resolution of the data. A number of resolution enhancement algorithms have been developed based on inverse filtering and irregular sampling reconstruction. This chapter discusses the use of resolution enhancement and reconstruction algorithms in microwave remote sensing. While the focus is on microwave instruments, the techniques and algorithms considered are applicable to a variety of sensors, including those not originally designed for imaging.
https://doi.org/10.1142/9789812796752_0012
Advanced classification techniques for a regular updating of land-cover maps are proposed that are based on the use of multitemporal remote-sensing images. Such techniques are developed within the framework of partially supervised approaches, which are able to address the updating problem under the realistic but critical constraint that, for the image to be classified (i.e., the most recent of the considered multitemporal data set), no ground truth information is available. Two different approaches are considered. The first approach is based on an independent analysis of the information contained in each single image of the considered multitemporal series; the second approach exploits the temporal correlation between pairs of images acquired at different times in the classification process. In the context of such approaches, both parametric and non-parametric classifiers are considered. In addition, in order to design a reliable and accurate classification system, multiple classifier architectures composed of partially supervised algorithms are investigated. Experimental results obtained on a real multitemporal data set confirm the effectiveness of the proposed approaches.
https://doi.org/10.1142/9789812796752_0013
A classifier based on the k-NN rule is known to offer very good performance; learning such a classifier consists in determining the value of k. Some modifications of the standard k-NN rule may improve classification quality. The relatively new k Nearest Centroid Neighbor (k-NCN) decision rule uses the interesting concept of a surrounding neighborhood: one that takes into account not only the proximity of the neighbors but also their spatial location, so that neighbors lie not only close to the query sample but also, as far as possible, around it. In this chapter we present our decision rule, called k Near Surrounding Neighbors (k-NSN), which improves the neighborhood used in k-NCN with respect to both of these aspects. Moreover, we present a voting technique that finds several k parameters for the k-NN, k-NCN and k-NSN rules, learnt from random partitions of the training set, and utilizes them in an ensemble of classifiers. Unlike most ensemble methods, our algorithms require only a moderate computational increase relative to the base classifiers, and an almost negligible increase in the voting k-NN case. We test these methods on a remote sensing dataset (already used in several experiments) and obtain results that show the attractiveness of the presented concepts in applications where prediction accuracy is of primary importance. The main disadvantage of the k-NN decision rule and its modified versions is the necessity of keeping the whole training set, as the reference set, in computer memory during the classification phase. Numerous procedures already proposed for reference set reduction concern the 1-NN rule; although most were originally devised for the 1-NN rule, there is no obstacle to using the resulting reduced sets with k-NN classifiers.
It is also possible to reclassify the original reference set by applying the k-NN rule, standard or modified, and then to use the simple 1-NN rule with the reclassified set. The effectiveness of these approaches is studied in relation to four different algorithms for reference set size reduction.
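To make the surrounding-neighborhood idea concrete, here is a small sketch of k-NCN-style neighbor selection with majority voting, as commonly described in the literature; the chapter's k-NSN rule, voting ensembles and reduction algorithms are not reproduced, and all names and data are hypothetical:

```python
import numpy as np

def k_ncn_indices(X, query, k):
    # Greedy nearest-centroid-neighbor selection: at each step add
    # the training sample whose inclusion brings the centroid of all
    # selected neighbors closest to the query, so neighbors end up
    # both near the query and distributed around it.
    chosen, remaining = [], list(range(len(X)))
    for _ in range(k):
        best, best_dist = None, np.inf
        for i in remaining:
            centroid = X[chosen + [i]].mean(axis=0)
            dist = np.linalg.norm(centroid - query)
            if dist < best_dist:
                best, best_dist = i, dist
        chosen.append(best)
        remaining.remove(best)
    return chosen

def classify(X, y, query, k):
    # Majority vote among the k surrounding neighbors.
    votes = y[k_ncn_indices(X, query, k)]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 0.3, (15, 2)),   # class 0 cluster
               rng.normal(5.0, 0.3, (15, 2))])  # class 1 cluster
y = np.array([0] * 15 + [1] * 15)
pred = classify(X, y, np.array([0.2, 0.1]), k=3)
```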
https://doi.org/10.1142/9789812796752_0014
Processing, analysis and transmission of remote sensing data require large amounts of computation and storage space. Both Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are useful for reducing remote sensing data size, and both are globally optimal according to certain criteria. This chapter introduces the Gauss-Markov random field, which is assumed to be the model of the observed terrain, and maximum a posteriori (MAP) estimation for remote sensing data compression and unsupervised classification. The PCA-with-MRF method and clustering algorithms are applied to AVIRIS (hyperspectral imagery) data. Comparison with the results of PCA/ICA and the k-means algorithm shows that remote sensing data compressed with PCA and MRF can be more easily classified by the unsupervised classification algorithm. However, the popular FCM (fuzzy c-means) algorithm does not perform significantly better than the k-means method on remote sensing data, though it requires a larger amount of computation.
https://doi.org/10.1142/9789812796752_0015
Fusing information from sensors with very different phenomenology is an attractive and challenging task for automatic target acquisition (ATA) systems. Sensor fusion improves results when correct target detections correlate between sensors while false alarms do not, owing to differences in target properties such as shape and signature. In this paper, we present a series of algorithms for detecting and segmenting targets from their background in passive millimeter wave (PMMW) and laser radar (LADAR) data. PMMW sensors provide a consistent signature for metallic targets, but their angular resolution is too limited to support further target classification. LADAR sensors provide ATA systems with high angular resolution and 3-dimensional geometric shape information supporting accurate target identification. However, shape-based segmentation can give a very high probability of false alarm under structured clutter scenarios. Sensor fusion techniques are applied with the goal of maintaining a high probability of detection while decreasing the false alarm rate.
https://doi.org/10.1142/9789812796752_0016
In this chapter, we deal with land cover mapping using neural networks and introduce some methods to improve classification accuracy: competitive neural networks, the Jeffries-Matusita distance, and textures. The competitive neural networks introduced here are Learning Vector Quantization and the Self-Organizing Map, which are powerful tools for categorization. The Jeffries-Matusita distance and textures are adopted for feature selection or extraction from pixels of the target image, an important factor in classification accuracy. Finally, we show some simulation results to confirm the effectiveness of these methods.
https://doi.org/10.1142/9789812796752_0017
The growing availability of high-resolution remotely sensed data, provided by novel aircraft and space sensors, offers new perspectives for image processing techniques, but it also introduces the need for operational tools to fully exploit the potential of these data. Such tools can be useful in many application contexts, in particular technological network surveillance, which imposes specific requirements such as accuracy in object recognition and positioning together with a minimal demand for ground truth. The application presented deals with the recognition of features of interest for the surveillance of power transmission lines using IKONOS imagery. We propose a methodology in which multi-scale and neural techniques are synergistically combined to identify features at different scales and to fuse them for class discrimination. The results obtained on a pilot area in Northern Italy show that the combination of multi-window feature extraction and neural soft classification produces a robust and flexible model that can act as a classifier of objects that vary in shape, size and structure.
https://doi.org/10.1142/9789812796752_0018
The analysis of remote-sensing images has proved to be a powerful tool for monitoring the state of the Earth's surface in several applications. Consequently, the availability of accurate and reliable algorithms for the detection of changes in remote sensing images is an important issue of growing interest. After a brief survey of previous work in change detection (including unsupervised, supervised and partially supervised approaches), two new unsupervised techniques, recently proposed by the authors, are described. The first is based on a modification to a thresholding algorithm originally proposed in the context of computer vision applications. The second involves the combination of the Fisher transform with the Expectation-Maximization algorithm. Experimental results on both simulated and real data sets and a comparison with another unsupervised approach are presented and discussed.
https://doi.org/10.1142/9789812796752_0019
The reflection seismic method is an instrument for remote detection that uses traveling waves to find the structure of an inaccessible body. The model used consists of flat horizontal layers subjected to seismic compressional waves at normal incidence. Both the source and receiver are buried below the surface, with the receiver below the source. The dual attributes of the seismic wavefield are particle velocity and pressure. A receiver to measure these dual attributes is made up of two sensors: a geophone, which measures particle velocity, and a hydrophone, which measures pressure. Einstein deconvolution makes use of these dual attributes; the method is so named because the mathematics involved is similar to the mathematics of the special theory of relativity. Einstein deconvolution consists of two steps. The first step is the use of the d'Alembert equations to convert the received particle-velocity signal and the received pressure signal into the downgoing and upgoing waves. This step requires knowledge of the acoustic impedance of the material at the receiver. The second step is to deconvolve the upgoing wave by the downgoing wave. This operation removes the unknown source signature (which may or may not be minimum-phase) as well as the reverberations and ghosts due to the layers above the receiver. The output of the Einstein deconvolution process is the unit-impulse reflection response of the layers below the receiver, which can be subjected to dynamic deconvolution to yield the individual reflection coefficients.
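The two steps can be sketched under simplifying assumptions: unit-normalized impedance, one sign convention for the d'Alembert combination (conventions vary), a stabilized spectral division standing in for the deconvolution step, and hypothetical test signals. This is an illustrative sketch, not the chapter's formulation:

```python
import numpy as np

def dalembert_split(velocity, pressure, impedance):
    # Combine particle-velocity and pressure records into downgoing
    # and upgoing waves at the receiver (one common sign convention).
    down = 0.5 * (pressure + impedance * velocity)
    up = 0.5 * (pressure - impedance * velocity)
    return down, up

def deconvolve(up, down, eps=1e-3):
    # Frequency-domain division of the upgoing wave by the downgoing
    # wave; eps stabilizes near-zero spectral values.
    U, D = np.fft.rfft(up), np.fft.rfft(down)
    R = U * np.conj(D) / (np.abs(D) ** 2 + eps)
    return np.fft.irfft(R, n=len(up))

# Hypothetical traces: impulsive downgoing wave, one reflector with
# reflection coefficient 0.5 at lag 5 in the upgoing wave.
down = np.zeros(64); down[0] = 1.0
up = np.zeros(64); up[5] = 0.5
refl = deconvolve(up, down)
```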
https://doi.org/10.1142/9789812796752_0020
We present a new noise-reduction method, named Edge-Preserving Smoothing (EPS), to reduce noise in 3-D seismic data. The EPS method simply attempts to suppress random noise and some acquisition artifacts in seismic data while preserving sharp boundaries or edges; these edges often correspond to important geological features, such as faults, fractures and channels. By applying EPS as a pre-processing step before running algorithms to detect edges in seismic data, we have obtained seismic edge-detection results with much-improved S/N ratio and resolution.
https://doi.org/10.1142/9789812796752_0021
The objective of seismic depth imaging is to produce a spatially accurate map of the reflectivity below the Earth's surface. Current methods for performing depth imaging require an accurate velocity model in order to place reflectors at their correct locations. Existing techniques for deriving the wave velocity can fail to provide this information with the necessary degree of accuracy, especially in areas that are geologically complex.
The inverse scattering series, a multi-dimensional direct inverse procedure, has the potential to perform the task of imaging reflectors at depth without needing to specify the exact velocity. The primary objective of the research described here is to further define the concept and to progress the development of an algorithm to perform the task of imaging in the absence of accurate velocity information. As has been recently reported, the strategy employed involves isolating a subseries of the inverse series with the specific purpose of imaging reflectors in space.
In this paper, analytic and numerical results of an imaging subseries algorithm are further examined. This algorithm is being evaluated with regard to its convergence properties and data requirements.
https://doi.org/10.1142/9789812796752_0022
Two methods for processing time-series of satellite sensor data are presented. The first method is based on an adaptive Savitzky-Golay filter, and the second on non-linear least-squares fits to asymmetric Gaussian model functions. Both methods incorporate qualitative information on cloud contamination from ancillary datasets. The resulting smooth curves are used for extracting phenological parameters related to the growing seasons. The methods are applied to NASA/NOAA Pathfinder AVHRR Land Normalized Difference Vegetation Index (NDVI) data over Africa giving spatially coherent images of phenological parameters such as beginnings and ends of growing seasons, seasonally integrated NDVI, seasonal amplitudes etc. The results indicate that the two methods complement each other and that they may be suitable in different areas depending on the behavior of the NDVI signal.
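A minimal Savitzky-Golay smoothing example on a synthetic NDVI-like series: the chapter's filter is adaptive and uses ancillary cloud information, neither of which is reproduced here, and all values are hypothetical.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic one-year NDVI series: a seasonal bump plus noise,
# sampled as 36 ten-day composites (hypothetical values).
t = np.arange(36)
ndvi = 0.2 + 0.5 * np.exp(-((t - 18) / 6.0) ** 2)
rng = np.random.default_rng(3)
noisy = ndvi + rng.normal(scale=0.05, size=t.size)

# Local polynomial fit over a sliding window: preserves the shape of
# the seasonal peak better than a plain moving average.
smooth = savgol_filter(noisy, window_length=7, polyorder=2)
```

Phenological parameters (season start/end, amplitude, integrated NDVI) would then be read off the smoothed curve.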
https://doi.org/10.1142/9789812796752_0023
This paper describes data compression algorithms capable of preserving the scientific quality of remote-sensing data while achieving a considerable bandwidth reduction. Unlike lossless techniques, with which only a moderate compression ratio (CR) is attainable due to the intrinsic noisiness of the data, and conventional lossy techniques, in which the mean squared error of the decoded data is globally controlled by the user, near-lossless methods can locally constrain the maximum error, either absolute or relative, based on the user's requirements. Advanced near-lossless methods rely on differential pulse code modulation (DPCM) schemes based on either prediction or interpolation. The latter is recommended for lower-quality compression (i.e., higher CR), the former for higher quality, which is the primary concern in remote sensing applications. Experimental results of near-lossless compression of multispectral, hyperspectral, and microwave data from coherent imaging systems, such as synthetic aperture radar (SAR), show the advantages of the proposed approach over standard lossy techniques.
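A toy prediction-based near-lossless DPCM sketch, using a simple previous-sample predictor rather than the advanced predictors discussed in the paper (all names and data are hypothetical), shows how a maximum absolute error bound delta is enforced by quantizing the prediction residual with step 2*delta+1:

```python
import numpy as np

def near_lossless_dpcm(signal, delta):
    # Closed-loop DPCM: predict each sample by the previous
    # reconstruction, uniformly quantize the integer residual with
    # step 2*delta+1, and track the decoder-side reconstruction so
    # the error never exceeds delta at any sample.
    step = 2 * delta + 1
    codes = np.empty(len(signal), dtype=np.int64)
    recon = np.empty(len(signal), dtype=np.int64)
    prev = 0
    for i, s in enumerate(signal):
        residual = int(s) - prev
        q = int(np.round(residual / step))   # quantized residual index
        codes[i] = q                         # this is what gets entropy-coded
        prev = prev + q * step               # decoder reconstruction
        recon[i] = prev
    return codes, recon

rng = np.random.default_rng(6)
signal = rng.integers(0, 256, size=200)      # hypothetical 8-bit samples
codes, recon = near_lossless_dpcm(signal, delta=2)
```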
https://doi.org/10.1142/9789812796752_0024
The use of ground penetrating radar (GPR) arrays for detecting and localizing buried objects has received considerable attention in recent years in areas such as landmine and unexploded ordnance remediation, utility line mapping, and archaeological discovery. A typical GPR array is implemented by moving a transmitter and receiver along a linear track. At every stop of the system, the transmitter emits a short pulse of electromagnetic energy which interacts with the surrounding medium. Based on observations of scattered fields collected by the array, the objective is to determine whether an object is present in the field of view of the array and, furthermore, to localize its position.
From the perspective of image enhancement, we can apply image processing methods such as histogram modification to improve the quality of GPR imagery. Histogram modification is a family of point operations that modify the pixels to enhance the contrast of the images. Using histogram modification to process GPR images can enhance the landmine-reflected signals, which are usually weaker than the specular ground reflection, allowing better detection and localization. On the other hand, histogram modification inevitably generates noise and undesirable artifacts in the image. We therefore use median filtering and adaptive filtering techniques to remove the noise, usually in the form of speckle, and the artifacts, usually in the form of horizontal streaks. The resulting image then has enhanced landmine-reflected signals and an approximately homogeneous background, which allows better detection of buried landmines.
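As a generic illustration of one histogram-modification method followed by median filtering (not the authors' exact pipeline; the image data here are synthetic):

```python
import numpy as np
from scipy.ndimage import median_filter

def equalize(img, levels=256):
    # Global histogram equalization: map grey levels through the
    # normalized cumulative histogram, stretching contrast so that
    # weak returns occupy a wider range of output values.
    hist, bins = np.histogram(img.ravel(), bins=levels)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    return np.interp(img.ravel(), bins[:-1], cdf).reshape(img.shape)

rng = np.random.default_rng(4)
gpr = rng.normal(size=(64, 48))            # stand-in for a GPR B-scan
enhanced = equalize(gpr)                   # contrast stretch
cleaned = median_filter(enhanced, size=3)  # suppress speckle-like noise
```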
https://doi.org/10.1142/9789812796752_0025
Infra-red (IR) technology is applied in a wide range of domains, e.g. military, medical and security. All objects, living or not and of any colour, emit infra-red radiation by virtue of their temperature; the exact degree of radiation is determined by the absolute temperature and the thermal characteristics of the material from which the object is made. The radiation is present day or night, with or without external illumination. Infra-red technology is concerned with the detection and imaging of this emitted radiation, helping to visualise objects that cannot be seen by the naked eye. Infra-red imaging is therefore a method for producing an image of the heat emitted from an object's surfaces. A thermogram is a calibrated graphic record of the temperature distribution obtained by thermography.
https://doi.org/10.1142/9789812796752_0026
Hyperspectral sensing technology has advanced in recent years. A number of experimental aircraft platforms routinely collect hyperspectral-imaging (HSI) data for both civilian and military applications, and spaceborne data have also become a reality. Realizing the utility of HSI data requires coordinated activities by both the development and user communities. In this chapter, examples of hyperspectral application scenarios are discussed. The principal focus is terrain characterization and object detection. Additionally, examples of HSI fusion with other sensors, such as synthetic aperture radar (SAR) and panchromatic imagery, will also be shown to demonstrate the effect of sensor fusion on target detection.
https://doi.org/10.1142/9789812796752_bmatter
Chi Hau Chen received his Ph.D. in electrical engineering from Purdue University in 1965, his MSEE degree from the University of Tennessee, Knoxville in 1962 and his BSEE degree from National Taiwan University in 1959. He is currently Chancellor Professor and Professor Emeritus of electrical and computer engineering at the University of Massachusetts Dartmouth, where he has taught since 1968. His research areas are statistical pattern recognition and signal/image processing with applications to remote sensing, geophysical, underwater acoustics and nondestructive testing problems, as well as computer vision for video surveillance, time series analysis, and neural networks.
Dr. Chen has published (edited and authored) 30 books in his areas of research, including a number of books published by World Scientific Publishing. He was an Associate Editor of the International Journal of Pattern Recognition and Artificial Intelligence from 1986 to 2008, and since 2008 he has been an Editorial Board Member of the Pattern Recognition journal. He is also currently the Series Editor in Computer Vision for World Scientific Publishing.
Dr. Chen has been a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) since 1988, a Life Fellow of the IEEE since 2003, and a Fellow of the International Association for Pattern Recognition (IAPR) since 1996. He has been a full member of the Academia NDT International since 200 and on the Fulbright Specialist Program since 2008.