The Global Positioning System (GPS) is a network of satellites whose original purpose was to provide accurate navigation, guidance, and time transfer to military users. The past decade has also seen rapid concurrent growth in civilian GPS applications, including farming, mining, surveying, marine, and outdoor recreation. One of the most significant of these civilian applications is commercial aviation. A stand-alone civilian user enjoys an accuracy of 100 meters and 300 nanoseconds before Selective Availability (SA) was turned off, and 25 meters and 200 nanoseconds after. Some applications, however, require higher accuracy. In this paper, five Neural Networks (NNs) are proposed for acceptable noise reduction of GPS receiver timing data. The paper uses an actual data collection to evaluate the performance of the methods, and an experimental test setup was designed and implemented for this purpose. The experimental results obtained from a Coarse Acquisition (C/A)-code single-frequency GPS receiver strongly support the potential of the methods to provide highly accurate timing: the GPS timing RMS error is reduced to less than 120 and 40 nanoseconds with and without SA, respectively.
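The abstract does not specify the network architectures, so the following is only a minimal sketch of the general idea of smoothing a receiver clock-bias series with a small neural network; the synthetic data, window length and MLP configuration are all assumptions.

```python
# Minimal sketch (not the paper's method): smooth a GPS receiver clock-bias
# series with a small MLP trained on sliding windows against a reference
# timing solution. The synthetic data and all parameters are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(5000)
clock_bias = 1e-7 * np.sin(2 * np.pi * t / 1000)           # slow receiver-clock drift (s)
noisy = clock_bias + 50e-9 * rng.standard_normal(t.size)   # ~50 ns measurement noise

window = 20
X = np.lib.stride_tricks.sliding_window_view(noisy, window)[:-1]
y = clock_bias[window:]                                     # reference values to learn

nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
nn.fit(X, y)
rms_ns = np.sqrt(np.mean((nn.predict(X) - y) ** 2)) * 1e9
print(f"post-filter timing RMS error: {rms_ns:.1f} ns")
```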
Automatic seizure detection is of great significance in the monitoring and diagnosis of epilepsy. In this study, a novel method is proposed for automatic seizure detection in intracranial electroencephalogram (iEEG) recordings based on kernel collaborative representation (KCR). First, the EEG recordings are divided into 4-s epochs, and wavelet decomposition with five scales is performed. The detail signals at scales 3, 4 and 5 are then selected to be sparsely coded over the training sets using KCR. In KCR, l2-minimization replaces l1-minimization, the sparse coefficients are computed with regularized least squares (RLS), and a kernel function is utilized to improve the separability between seizure and nonseizure signals. The reconstruction residuals of each EEG epoch with respect to the seizure and nonseizure training samples are compared, and the epoch is assigned to the class that minimizes the residual. Finally, a multi-decision rule is applied to obtain the final detection decision. In total, 595 h of iEEG recordings from 21 patients with 87 seizures are employed to evaluate the system. An average sensitivity of 94.41%, a specificity of 96.97%, and a false detection rate of 0.26/h are achieved. The seizure detection system based on KCR yields both a high sensitivity and a low false detection rate for long-term EEG.
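As an illustration of the classification step described above, the sketch below implements a kernel collaborative representation classifier with a regularized least-squares solution over wavelet detail coefficients. The RBF kernel, regularization weight, wavelet and decomposition level are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def wavelet_features(epoch, wavelet="db4", level=5):
    # keep the detail signals at scales 3, 4 and 5, as in the described method
    coeffs = pywt.wavedec(epoch, wavelet, level=level)   # [cA5, cD5, cD4, cD3, cD2, cD1]
    return np.concatenate(coeffs[1:4])                   # cD5, cD4, cD3

def kcr_classify(x, train_X, train_y, lam=1e-2):
    K = rbf_kernel(train_X, train_X)
    kx = rbf_kernel(train_X, x[None, :])[:, 0]
    alpha = np.linalg.solve(K + lam * np.eye(len(train_X)), kx)      # RLS solution
    residuals = {}
    for c in np.unique(train_y):
        m = train_y == c
        # feature-space residual of x reconstructed from the class-c training atoms
        residuals[c] = 1.0 - 2 * alpha[m] @ kx[m] + alpha[m] @ K[np.ix_(m, m)] @ alpha[m]
    return min(residuals, key=residuals.get)

# label = kcr_classify(wavelet_features(test_epoch),
#                      np.array([wavelet_features(e) for e in train_epochs]),
#                      np.array(train_labels))
```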
The automatic identification of epileptic electroencephalogram (EEG) signals can assist doctors in the diagnosis of epilepsy and provide greater safety and quality of life for people with epilepsy. Feature extraction of EEG signals determines the performance of the whole recognition system. In this paper, a novel method using the local binary pattern (LBP) based on the wavelet transform (WT) is proposed to characterize the behavior of EEG activities. First, the WT is employed for time-frequency decomposition of the EEG signals. The "uniform" LBP operator is then applied to the wavelet-based time-frequency representation, and the resulting histogram is used as the EEG feature vector quantifying the textural information of the wavelet coefficients. The LBP features coupled with a support vector machine (SVM) classifier yield satisfactory recognition accuracies of 98.88% for interictal versus ictal EEG classification and 98.92% for normal, interictal and ictal EEG classification on a publicly available EEG dataset. Moreover, numerical results on another large EEG dataset demonstrate that the proposed method can also effectively detect seizure events from multi-channel raw EEG data. Compared with the standard LBP, the "uniform" LBP produces a much shorter histogram, which greatly reduces the computational burden of classification and enables ictal EEG signals to be detected in real time.
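A hedged sketch of the feature pipeline described above: a wavelet time-frequency image of an EEG segment, a "uniform" LBP histogram over it, and an SVM classifier. The CWT scales, LBP radius and SVM settings are illustrative assumptions.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1            # 8 neighbours, radius 1 -> the "uniform" LBP has P + 2 = 10 bins

def lbp_wavelet_feature(segment, scales=np.arange(1, 32)):
    tfr, _ = pywt.cwt(segment, scales, "morl")                      # time-frequency image
    lbp = local_binary_pattern(np.abs(tfr), P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# X = np.array([lbp_wavelet_feature(s) for s in segments])
# clf = SVC(kernel="rbf").fit(X, labels)    # e.g. normal / interictal / ictal
```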
In this paper, we investigate the performance of a Computer Aided Diagnosis (CAD) system for the detection of clustered microcalcifications in mammograms. Our detection algorithm combines two different methods. The first, based on difference-image techniques and Gaussianity statistical tests, detects the most obvious signals. The second is able to discover more subtle microcalcifications by exploiting a multiresolution analysis by means of the wavelet transform. The two methods can be tuned separately, so that each detects signals with similar features. By combining the signals coming out of the two parts through a logical OR operation, we can discover microcalcifications with different characteristics. Our algorithm yields a sensitivity of 91.4% with 0.4 false-positive clusters per image on the 40 images of the Nijmegen database.
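The following is only a schematic sketch of the two-branch idea, with a simple difference-image branch for obvious signals and a wavelet-detail branch for subtler ones, fused by a logical OR. The thresholds, filter sizes and wavelet are assumptions and do not reproduce the paper's Gaussianity tests.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def detect_microcalcifications(mammogram, k_diff=3.0, k_wav=3.0):
    img = mammogram.astype(float)

    # Branch 1: difference image (original minus a low-pass background estimate)
    diff = img - gaussian_filter(img, sigma=5)
    mask1 = diff > diff.mean() + k_diff * diff.std()

    # Branch 2: one-level wavelet analysis; keep unusually large detail coefficients
    _, (cH, cV, cD) = pywt.dwt2(img, "db2")
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
    small = detail > detail.mean() + k_wav * detail.std()
    up = np.kron(small.astype(np.uint8), np.ones((2, 2), dtype=np.uint8)).astype(bool)
    mask2 = up[:img.shape[0], :img.shape[1]]

    return mask1 | mask2        # logical OR fusion of the two branches
```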
Quantitative evaluation of the changes in skin topographic structures is of great importance in the dermocosmetic field to assess subjects' response to medical or cosmetic treatments. Although many devices and methods are available to measure these changes, they are not suitable for routine use and most of them are invasive. Moreover, it has always been difficult to measure the skin health status, as well as the human ageing process, by simply analyzing the skin surface appearance. This work describes how a portable capacitive device can be used to measure skin ageing in vivo and routinely. The capacitive images give a high-resolution representation of the skin micro-relief, both in terms of skin surface tissue and wrinkles. In a previous work we dealt with the former; here we address the latter. The algorithm we have developed extracts two original features from wrinkles: the first is based on photometric properties, while the second is obtained through multiresolution analysis with the wavelet transform. Experiments performed on 87 subjects show that the proposed features are related to skin ageing.
This paper describes an investigation of the bending strength and elastic wave signal characteristics of Si3N4 monolithic and Si3N4/SiC composite ceramics with crack-healing ability. The elastic wave signals generated under compressive loading by a Vickers indenter on the brittle materials were recorded in real time, and the AE signals were analyzed by the time-frequency analysis method. The three-point bending test was performed on Si3N4 monolithic and Si3N4/SiC composite ceramic specimens with and without crack healing. The bending strength of the specimens crack-healed at 1300°C was completely recovered, up to that of the smooth specimens, and the frequency properties of the crack-healed specimens tended to be similar to the dominant frequency distribution of the smooth specimens. This study suggests that the signal information obtained for these anisotropic ceramics offers a feasible technique to guarantee the structural integrity of a ceramic component.
On the basis of the lattice Boltzmann method for the Navier–Stokes equation, we have performed a numerical experiment on forced turbulence in real space and time. Our new findings are summarized in two points. First, in the analysis of the mean-field behavior of the velocity field using exit-time statistics, we verified Kolmogorov's scaling and Taylor's hypothesis at the same time. Second, in the analysis of the intermittent velocity fluctuations using a non-equilibrium probability distribution function and wavelet denoising, we clarified that the coherent vortices sustain the power-law velocity correlation in the non-equilibrium state.
Blood component non-invasive measurement based on near-infrared (NIR) spectroscopy has become a favorite topic in the field of biomedicine. However, measurement noise from the instrument and the varying background caused by the absorption of blood components other than the target analyte are the main factors that limit the prediction accuracy of multivariable calibration. Since backgrounds and noise mostly appear in the high-scale approximation and low-scale detail coefficients, they can be identified by the wavelet transform (WT), which has a multi-resolution property and can decompose spectral signals into different frequency components while retaining the resolution of the original signal. Combined with the uninformative variable elimination (UVE) criterion, backgrounds and noise can be eliminated simultaneously and visually. The basic principle and application of this pretreatment method, the wavelet transform with the UVE criterion, are presented in this paper. Three experimental near-infrared spectral data sets, all containing glucose (the target analyte in this study), are used as examples to explain the pretreatment method: a four-component aqueous solution data set, a plasma data set, and an oral glucose tolerance test (OGTT) data set. The effect of the selected wavelength bands on the pretreatment process is discussed, and the adaptability of different pretreatment methods to the uncertain, complex NIR spectral models involved in blood component non-invasive measurement is also analyzed. This research indicates that the wavelet transform with the UVE criterion can be used to eliminate varying backgrounds and noise from experimental NIR spectral data directly. In the spectral region from 1100 to 1700 nm, this pretreatment method helps to obtain a simpler and more precise multivariable calibration for non-invasive blood glucose measurement. Furthermore, comparison with other pretreatment methods implies that the method applied in this study adapts better to complex NIR spectral models. This study thus offers another path for improving blood component non-invasive measurement techniques based on NIR spectroscopy.
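A hedged sketch of the pretreatment idea: spectra are mapped to wavelet coefficients, a PLS model is refit leave-one-out to measure coefficient stability, and coefficients no more reliable than appended noise variables are discarded. The wavelet, PLS rank and noise-block size are assumptions.

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

def wt_uve_select(X, y, wavelet="db4", level=4, n_components=5, n_noise=100, seed=0):
    # 1) wavelet-domain representation of each spectrum
    W = np.array([np.concatenate(pywt.wavedec(x, wavelet, level=level)) for x in X])
    rng = np.random.default_rng(seed)
    Z = np.hstack([W, 1e-10 * rng.standard_normal((W.shape[0], n_noise))])  # noise block

    # 2) leave-one-out PLS regression coefficients
    B = []
    for i in range(Z.shape[0]):
        idx = np.delete(np.arange(Z.shape[0]), i)
        B.append(PLSRegression(n_components=n_components).fit(Z[idx], y[idx]).coef_.ravel())
    B = np.array(B)

    # 3) UVE reliability: keep wavelet coefficients more stable than any noise variable
    c = B.mean(axis=0) / B.std(axis=0)
    cutoff = np.abs(c[W.shape[1]:]).max()
    return np.abs(c[:W.shape[1]]) > cutoff      # boolean mask over wavelet coefficients
```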
In stereo research, the construction of a dense disparity map is a complicated task when the scene contains many occlusions. In the neighborhood of occlusions, the images can be considered to exhibit non-stationary behavior. In this paper we propose a new method for computing a dense disparity map using the decomposition of the 2D wavelet transform into four quarters, which allows a corresponding pixel to be found even in the case of occlusion. Our algorithm constructs, at each pixel of the two images, four estimators corresponding to the four quarters. Matching the four wavelet coefficient estimators in the right image with the four in the left image allows a dense disparity map to be constructed at each pixel of the image.
Handwriting-based personal identification, also called handwriting-based writer identification, is an active research topic in pattern recognition. Despite continuous effort, offline handwriting-based writer identification remains a challenging problem because writing features can only be extracted from the handwriting image. As a result, much of the dynamic writing information, which is very valuable for writer identification, is unavailable in the offline setting. In this paper, we present a novel wavelet-based Generalized Gaussian Density (GGD) method for offline writer identification. Compared with the 2-D Gabor model, which is currently widely acknowledged as a good method for offline handwriting identification, the GGD method not only achieves better identification accuracy but also greatly reduces computation time in our experiments.
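The sketch below shows one way the wavelet-based GGD signature could look: a generalized Gaussian is fitted to each wavelet detail subband of the handwriting image and the (shape, scale) pairs form the writer's feature vector. The wavelet, level and nearest-neighbour matching are assumptions.

```python
import numpy as np
import pywt
from scipy.stats import gennorm

def ggd_signature(image, wavelet="db2", level=3):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    feats = []
    for detail_level in coeffs[1:]:                    # (cH, cV, cD) at each level
        for band in detail_level:
            beta, _, scale = gennorm.fit(band.ravel(), floc=0)   # GGD shape and scale
            feats.extend([beta, scale])
    return np.array(feats)

# Identification: assign the query to the writer with the closest stored signature, e.g.
# writer = min(gallery, key=lambda w: np.linalg.norm(ggd_signature(query) - gallery[w]))
```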
Edges are prominent features in images, and their detection and analysis are key issues in image processing, computer vision and pattern recognition. The wavelet transform provides a powerful tool for analyzing the local regularity of signals and has been successfully applied to the analysis and detection of edges. A great number of wavelet-based edge detection methods have been proposed over the past years. The objective of this paper is to give a brief review of these methods and to encourage research on this topic. In practice, an image usually contains edges of multiple structures, and the identification of different edge types, such as steps, curves and junctions, plays an important role in pattern recognition. In this paper, particular attention is paid to the identification of different types of edges. We present the main ideas and properties of these methods.
Designing an effective and robust fingerprint recognition method is still an open issue. Some texture-based methods, such as the directional energy method and the wavelet method, are available nowadays. However, the directional energy method is insufficient for capturing the detailed information of a fingerprint, and it is also inappropriate to apply the wavelet method directly to feature extraction because of the complex and rich edge information of fingerprints. In this work we propose a texture-based method for fingerprint recognition, called DFB-Wavelet, that combines directional filter banks (DFB) and wavelets. The region of interest (ROI), composed of nonoverlapping square blocks, is decomposed into eight directions using the DFB. Wavelet signatures calculated from each directional subband of the DFB serve as the features of a fingerprint image. Feature matching is performed using the global normalized Euclidean distance between the input fingerprint features and the template features. Experimental results show that the DFB-Wavelet method achieves higher accuracy than traditional texture-based methods.
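A sketch of the DFB-Wavelet idea. A true directional filter bank is not implemented here; eight oriented Gabor filters stand in for the 8-direction decomposition, wavelet energy signatures are computed per directional subband, and matching uses a normalized Euclidean distance. Frequencies, wavelet and normalization are assumptions.

```python
import numpy as np
import pywt
from skimage.filters import gabor

def dfb_wavelet_signature(roi, wavelet="db2", level=2, frequency=0.1):
    feats = []
    for k in range(8):                                       # eight directions
        directional, _ = gabor(roi, frequency=frequency, theta=k * np.pi / 8)
        coeffs = pywt.wavedec2(directional, wavelet, level=level)
        for detail_level in coeffs[1:]:
            feats.extend(np.sqrt(np.mean(band ** 2)) for band in detail_level)
    return np.array(feats)

def match_score(f_query, f_template):
    # normalized Euclidean distance between signature vectors (smaller = better match)
    return np.linalg.norm((f_query - f_template) / (np.abs(f_template) + 1e-12))
```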
Three-dimensional reconstruction of teeth plays an important role in dental implant operations. However, the tissue around the teeth and the noise generated during image acquisition seriously affect the reconstruction results and must be reduced or eliminated. Combining the advantages of the wavelet transform and bilateral filtering, this paper proposes an image denoising method based on both. The proposed method not only removes the noise but also preserves the image edge details: noise in the high-frequency subbands is suppressed using locally adaptive thresholding, and noise in the low-frequency subband is removed by bilateral filtering. Peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM) and 3D reconstruction using iso-surface extraction are used to evaluate the denoising effect. The experimental results show that the proposed method outperforms wavelet denoising and bilateral filtering alone, and that the reconstruction results meet the requirements of clinical diagnosis.
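A minimal sketch of the combined scheme: the wavelet approximation subband is smoothed with a small hand-rolled bilateral filter and the detail subbands are soft-thresholded before reconstruction. The paper uses a locally adaptive threshold; the universal threshold below is a simplifying assumption, as are the filter parameters.

```python
import numpy as np
import pywt

def bilateral(img, radius=2, sigma_s=2.0, sigma_r_frac=0.1):
    # simple brute-force bilateral filter over a (2*radius+1)^2 window
    sigma_r = sigma_r_frac * (img.max() - img.min() + 1e-12)
    pad = np.pad(img, radius, mode="reflect")
    out, norm = np.zeros_like(img), np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + img.shape[0],
                          radius + dx:radius + dx + img.shape[1]]
            w = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            norm += w
    return out / norm

def denoise_slice(image, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    cA = bilateral(coeffs[0])                                  # low-frequency subband
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745         # noise level, finest diagonal
    thr = sigma * np.sqrt(2 * np.log(image.size))              # universal threshold
    details = [tuple(pywt.threshold(b, thr, mode="soft") for b in lvl) for lvl in coeffs[1:]]
    return pywt.waverec2([cA] + details, wavelet)
```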
Saliency detection refers to the segmentation of all visually conspicuous objects from various backgrounds, with the aim of producing an object mask that overlaps the salient regions annotated by human vision. In this paper, we propose an efficient bottom-up saliency detection model based on wavelet generalized lifting that requires no kernels with implicit assumptions or prior knowledge. Multiscale wavelet analysis is performed on broadly tuned color feature channels to include a wide range of spatial-frequency information. A nonlinear wavelet filter bank is designed to emphasize the wavelet coefficients, and a saliency map is then obtained through a linear combination of the enhanced wavelet coefficients. This full-resolution saliency map uniformly highlights multiple salient objects of different sizes and shapes. An object mask is constructed by applying an adaptive thresholding scheme to the saliency map. Experimental results show that the proposed model outperforms existing state-of-the-art competitors on two benchmark datasets.
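A hedged sketch of a wavelet-based bottom-up saliency map: each colour channel is decomposed with a multiscale 2-D wavelet transform, the approximation is suppressed, the detail coefficients are magnitude-enhanced, and the per-channel reconstructions are combined into one full-resolution map that is thresholded adaptively. The simple square-root enhancement and threshold rule are illustrative stand-ins, not the paper's generalized-lifting filter bank.

```python
import numpy as np
import pywt

def saliency_map(rgb, wavelet="db2", level=4):
    # rgb: H x W x 3 float image
    sal = np.zeros(rgb.shape[:2])
    for ch in range(3):
        coeffs = pywt.wavedec2(rgb[..., ch].astype(float), wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])                     # drop the approximation
        coeffs[1:] = [tuple(np.sign(b) * np.abs(b) ** 0.5 for b in lvl) for lvl in coeffs[1:]]
        recon = pywt.waverec2(coeffs, wavelet)[:rgb.shape[0], :rgb.shape[1]]
        sal += np.abs(recon)
    sal = (sal - sal.min()) / (np.ptp(sal) + 1e-12)              # full-resolution saliency map
    mask = sal > 2.0 * sal.mean()                                # adaptive threshold -> object mask
    return sal, mask
```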
An inertial navigation system (INS) is often integrated with satellite navigation systems to achieve the required precision in high-speed applications. In Global Positioning System (GPS)/INS integration systems, GPS outages are unavoidable and pose a severe challenge. Moreover, because low-cost microelectromechanical sensors (MEMS) with noisy outputs are used, the INS diverges during GPS outages, which is why navigation precision decreases severely in commercial applications. In this paper, we improve the GPS/INS integration system during GPS outages by using an extended Kalman filter (EKF) and artificial intelligence (AI) together. In this integration algorithm, the AI module receives the angular rates and specific forces from the inertial measurement unit (IMU) and the velocity from the INS at times t and t−1; it therefore has the positioning and timing data of the INS. While GPS signals are available, the output of the AI module is compared with the GPS increment so that the module is trained. During GPS outages, the AI module practically plays the role of the GPS, preventing the divergence of the GPS/INS integration system in GPS-denied environments. Furthermore, we utilize five types of AI modules: a multi-layer perceptron (MLP) neural network (NN), a radial basis function (RBF) NN, a wavelet NN, support vector regression (SVR) and an adaptive neuro-fuzzy inference system (ANFIS). To evaluate the proposed approach, we use a real dataset gathered by a mini-airplane. The results demonstrate that the proposed approach outperforms the INS alone and the EKF-based GPS/INS integration system during GPS outages; the ANFIS module, in particular, achieved more than a 47.77% improvement in precision over the traditional method.
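A conceptual sketch of the AI-aided loop: while GPS is available, a network learns to map IMU rates/specific forces and INS velocity at times t and t−1 to the GPS position increment; during an outage the prediction stands in for the missing GPS update. The feature layout and the MLP are illustrative choices (the paper compares MLP, RBF, wavelet NN, SVR and ANFIS modules).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_features(imu, ins_vel):
    # imu: (N, 6) gyro + accelerometer samples, ins_vel: (N, 3) INS velocity
    z = np.hstack([imu, ins_vel])
    return np.hstack([z[1:], z[:-1]])                  # samples at t and t-1 -> (N-1, 18)

def train_ai(imu, ins_vel, gps_pos):
    X = build_features(imu, ins_vel)
    y = np.diff(gps_pos, axis=0)                       # GPS position increments
    return MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000).fit(X, y)

def predict_during_outage(model, imu, ins_vel, last_fix):
    # integrate the predicted increments forward from the last valid GPS fix
    return last_fix + np.cumsum(model.predict(build_features(imu, ins_vel)), axis=0)
```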
This paper is a continuation of Part I, where the authors treated the Fourier analysis of chaotic time series generated by a chaotic interval map. Here, we perform multiresolution analysis using wavelet coefficients and characterize some necessary and sufficient conditions for the occurrence of chaos in terms of the exponential growth, with respect to the number of iterations n, of certain sums of the wavelet coefficients.
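A small numerical illustration of this criterion (using the logistic map as an example chaotic interval map, which is an assumption): sample the n-th iterate on a grid, take its wavelet detail coefficients and watch their absolute sum grow with n; for a chaotic map the growth is exponential until the grid can no longer resolve the oscillations.

```python
import numpy as np
import pywt

f = lambda x: 4.0 * x * (1.0 - x)            # chaotic logistic map on [0, 1]
x = np.linspace(0.0, 1.0, 2 ** 12)

y = x.copy()
for n in range(1, 11):
    y = f(y)                                  # y = f^n(x) sampled on the grid
    coeffs = pywt.wavedec(y, "db4", level=6)
    s = sum(np.abs(c).sum() for c in coeffs[1:])       # sum over detail coefficients
    print(f"n = {n:2d}   sum of |wavelet coefficients| = {s:.3e}")
```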
Magnetic flow meters (magmeters) are instruments for measuring flow velocity in many industrial applications. The signal that comes from a magmeter is noisy, and conventional approaches are often not effective enough in dealing with actual field noise. Furthermore, diagnostic functions are attracting increasing attention, since they can be implemented inexpensively and reliably in magmeter hardware. Neural networks have proven capabilities for both learning and data handling in noisy circumstances. In this paper, a novel approach based on wavelet neural networks is presented to address these two objectives. The stability, accuracy and response time of the new approach have been tested and found to be superior to those of conventional approaches.
In this paper, we propose a new approach to classify emotional stress in the two main areas of the valence-arousal space using bio-signals. Since the electroencephalogram (EEG) is widely used in biomedical research, it is used as the main signal. We designed an efficient acquisition protocol to record the EEG and psychophysiological signals. Two specific areas of the valence-arousal emotional stress space are defined, corresponding to negatively excited and calm-neutral states. Qualitative and quantitative evaluation of the psychophysiological signals is used to select suitable segments of the EEG signal, improving the efficiency and performance of the emotional stress recognition system. After pre-processing the EEG signals, wavelet coefficients and chaotic invariants such as the fractal dimension, correlation dimension and wavelet entropy are used to extract signal features. Effective features are then selected using the independent-sample t-test and Linear Discriminant Analysis (LDA). The results show average classification accuracies of 80.1% and 84.9% for the two categories of emotional stress states using the LDA and Support Vector Machine (SVM) classifiers, respectively. This is an improvement in accuracy compared with our previous studies in the same field, indicating that this new fusion of EEG and psychophysiological signals is more robust than using the separate signals.
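An illustrative feature-extraction sketch for a single EEG segment: wavelet sub-band energies, wavelet entropy and a (Katz) fractal dimension feeding an SVM. The exact feature set, channel handling and the t-test/LDA selection step of the paper are not reproduced here; all parameters are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def katz_fd(x):
    # Katz fractal dimension of a 1-D signal
    dists = np.abs(np.diff(x))
    L, n = dists.sum(), len(x) - 1
    d = np.abs(x - x[0]).max()
    return np.log10(n) / (np.log10(n) + np.log10(d / (L + 1e-12) + 1e-12))

def eeg_features(segment, wavelet="db4", level=5):
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    wavelet_entropy = -np.sum(p * np.log(p + 1e-12))
    return np.concatenate([energies, [wavelet_entropy, katz_fd(segment)]])

# X = np.array([eeg_features(s) for s in segments]);  clf = SVC().fit(X, labels)
```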
Based on the continuous Wavelet Transform Modulus Maxima method (WTMM), a multifractal analysis was introduced to discriminate the irregular fracture signals of materials. This method provides an efficient numerical technique to characterize statistically the local regularity of fractures.
The results obtained by this nonlinear analysis suggest that multifractal parameters such as the capacity dimension D0, the average singularity strength α0, the aperture of the left side (α0 - αmin) and the total width (αmax - αmin) of the D(α) spectra allow a better characterization of the different fracture stages.
Discriminating the three principal stages of fracture, namely fracture initiation, fracture propagation and final rupture, provides a powerful diagnostic tool for identifying the crack initiation site and thus delineating the causes of the cracking of the material.
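Once a singularity spectrum D(α) has been estimated (for example by a WTMM implementation, which is not shown here), the descriptors listed above reduce to a few array operations; the input arrays alpha and D below are hypothetical.

```python
import numpy as np

def multifractal_descriptors(alpha, D):
    # alpha: singularity strengths, D: corresponding D(alpha) values
    i0 = np.argmax(D)
    return {
        "D0": D[i0],                                  # capacity dimension (spectrum peak)
        "alpha0": alpha[i0],                          # average singularity strength
        "left_width": alpha[i0] - alpha.min(),        # aperture of the left side
        "total_width": alpha.max() - alpha.min(),     # total width of D(alpha)
    }
```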
Complex systems are interwoven collections of interacting entities that emerge and evolve through self-organization across a myriad of contexts, exhibiting subtleties at the global scale and pointing the way toward understanding complexity, a notion that has itself evolved cumulatively with order as its unifying framework. Because of the striking non-separability of their components, a complex system cannot be understood in terms of the properties of its isolated constituents alone; it is better approached as multilevel system behavior whose emergent patterns transcend the characteristics of the units composing the system. This observation marks a change of scientific paradigm: a reductionist perspective does not imply a constructionist one, and in that vein complex systems science, associated with multiscale problems, can be regarded as the ascendancy of emergence over reductionism and over purely mechanistic levels of insight. Since evolvability relates species and humans to the adaptive, emergent and evolving capabilities of their ancestors, and since the complexity of models, designs, visualization and optimality are interrelated, complexity requires a horizon that can account for the subtleties which make its own means of solution applicable. Such views lend importance to a future science of complexity, which may best be regarded as a minimal history congruent with observable variations, that is, the most parallelizable or symmetric process that can turn random inputs into regular outputs. Chaos and nonlinear systems enter this picture as cousins of complexity, whose many components interact intensely and nonlinearly with one another and with related systems and fields. In mathematics, a relation is a way of connecting two or more things, such as numbers, sets or other mathematical objects, and it is relations that describe how things are interrelated and so help make sense of complex mathematical systems. Accordingly, mathematical modeling and scientific computing are proven principal tools for solving the problems that arise in exploring complex systems, with data science serving as a tailor-made discipline for making sense of voluminous (big) data. Regarding the computation of the complexity of a mathematical model, analyzing the run time depends on the kind of data chosen and the methods employed; this makes it possible to examine the data used in a study, subject to the capacity of the computer at hand. Different computer capacities affect the results, but the step-by-step application of the method in code must also be taken into consideration. In this sense, a definition of complexity evaluated over different data gains a broader range of applicability, with more realism and convenience, since the process rests on concrete mathematical foundations. All of this indicates that the methods need to be investigated on the basis of their mathematical foundations, so that the level of complexity that will emerge for any data one wishes to employ can be foreseen.
With regard to fractals, fractal theory and analysis aim to assess the fractal characteristics of data, with several methods available for assigning fractal dimensions to datasets. From that perspective, fractal analysis expands our knowledge of the functions and structures of complex systems while serving as a potential means of evaluating new areas of research and of capturing the roughness of objects, their nonlinearity, randomness, and so on. The idea of fractional-order integration and differentiation, together with the inverse relationship between them, lends fractional calculus applications in fields spanning science, medicine and engineering, among others. Within mathematics-informed frameworks used to gain reliable insight into complex processes that encompass an array of temporal and spatial scales, the fractional-calculus approach notably provides novel, applicable fractional-order models for optimization methods. Computational science and modeling, in turn, are oriented toward simulating and investigating complex systems with computers, drawing on domains ranging from mathematics and physics to computer science. A computational model consisting of numerous variables that characterize the system under consideration allows many simulated experiments to be performed by computerized means. Furthermore, Artificial Intelligence (AI) techniques, whether or not combined with fractal and fractional analysis or with mathematical models, have enabled a wide variety of applications, from predicting mechanisms in living organisms to other interactions across broad spectra, and provide solutions to real-world complex problems on both local and global scales. While maximizing model accuracy, AI can also minimize quantities such as computational burden. Relatedly, the notion of level of complexity, often employed in computer science for decision-making and problem-solving, aims to evaluate the difficulty of algorithms and thereby helps determine the resources and time required to complete a task. Computational (algorithmic) complexity, the measure of the computing resources (memory and storage) a specific algorithm consumes when it is run, characterizes the complexity of an algorithm, gives an approximate sense of the volume of computing resources required, and is probed with input data of different values and sizes. Together with search algorithms and solution landscapes, computational complexity ultimately points toward reductions and universality for exploring problems with different degrees of predictability. Taken together, this line of sophisticated, computer-assisted proof approaches can meet the requirements of accuracy, interpretability, predictability and grounding in the mathematical sciences, with AI and machine learning at the foundation of, and at the intersection with, many other domains, in line with concurrent technical analyses, computing processes, computational foundations and mathematical modeling.
Consequently, and as distinct from others, our special issue series provides a novel direction for stimulating, refreshing and innovative interdisciplinary, multidisciplinary and transdisciplinary understanding and research in model-based and data-driven modes, in order to obtain feasible and accurate solutions, designed simulations, optimization processes, and much more. Hence, we address theoretical reflections on how all these processes are modeled, merging advanced methods, mathematical analyses, computational technologies and quantum means, and elaborating and exhibiting the implications of applicable approaches in real-world systems and other related domains.