The vehicle scanning method (VSM), originally known as the indirect method, is an efficient method for bridge health monitoring that mainly utilizes the responses collected by moving test vehicles. This method offers the advantages of mobility, efficiency, and cost-effectiveness, as it requires only one or a few vibration sensors mounted on the test vehicle, eliminating the need to deploy numerous sensors on the bridge. Since its initial proposal by Yang and co-workers in 2004, the VSM has attracted intensive attention from researchers worldwide. Over the past two decades, significant progress has been made in various aspects of the VSM, including the identification of bridge frequencies, mode shapes, damping ratios, damage, and surface roughness, as well as its application to railways. Several review papers and the book Vehicle Scanning Method for Bridges have previously been published on the subject. However, research on the subject continues to boom at a pace that cannot be adequately covered by the existing review papers or book, as judged by the fast-increasing number of relevant publications. To provide researchers with an overall understanding of up-to-date research on the VSM, a state-of-the-art review of the related research conducted worldwide is compiled in this paper. Comments and recommendations will be provided at appropriate points, and concluding remarks, including future research directions, will be presented at the end of the paper.
This paper describes a modification of the LArge Memory STorage And Retrieval (LAMSTAR) neural network. The purpose of the modification is to allow rare events a larger role in decision-making when they are strongly biased towards a particular decision. As a by-product, the modification also permits the introduction of a confidence measure. This measure allows comparison across different network inputs so that the user may choose the "best" solution. The authors have applied the modified LAMSTAR network to a financial forecasting problem.
Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that accounts for this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and can therefore be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
In this work, we develop open-source hardware and software for eye state classification and integrate it with a protocol for the Internet of Things (IoT). We design and build the hardware using a reduced number of components and at very low cost. Moreover, we propose a method for the detection of open-eye (oE) and closed-eye (cE) states based on computing a power ratio between different frequency bands of the acquired signal. We compare several real- and complex-valued transformations combined with two decision strategies: a threshold-based method and linear discriminant analysis. Simulation results show both the classifier accuracies and their corresponding system delays.
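The band-power-ratio idea with a threshold decision can be sketched as follows. This is a minimal illustration, not the paper's implementation: the alpha/beta band choice and the threshold value are assumptions, and the paper compares several transforms beyond the plain FFT used here.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power of signal x in the band [f_lo, f_hi) Hz via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

def classify_eye_state(x, fs, threshold=1.0):
    """Return 'cE' (closed eyes) if the alpha/beta power ratio exceeds the
    threshold, else 'oE' (open eyes).  The 8-13 Hz alpha band typically rises
    with eye closure; both the band edges and the threshold are illustrative
    choices, not the paper's exact configuration."""
    ratio = band_power(x, fs, 8.0, 13.0) / (band_power(x, fs, 13.0, 30.0) + 1e-12)
    return 'cE' if ratio > threshold else 'oE'
```

A pure 10 Hz tone is classified as closed eyes and a pure 20 Hz tone as open eyes, matching the alpha-dominance intuition.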
Advances in electroencephalography (EEG) equipment now allow monitoring of people with epilepsy in their daily-life environment. The large volumes of data that can be collected from long-term out-of-clinic monitoring require novel algorithms to process the recordings on board the device to identify and log or transmit only relevant data epochs. Existing seizure-detection algorithms are generally designed for post-processing purposes, so that memory and computing power are rarely considered as constraints. We propose a novel multi-channel EEG signal processing method for automated absence seizure detection which is specifically designed to run on a microcontroller with minimal memory and processing power. It is based on a linear multi-channel filter that is precomputed offline in a data-driven fashion based on the spatial-temporal signature of the seizure and peak interference statistics. At run-time, the algorithm requires only standard linear filtering operations, which are cheap and efficient to compute, in particular on microcontrollers with a multiply-accumulate (MAC) unit. For validation, a dataset of eight patients with juvenile absence epilepsy was collected. Patients were equipped with a 20-channel mobile EEG unit and discharged for a day-long recording. The algorithm achieves a median of 0.5 false detections per day at 95% sensitivity. We compare our algorithm with state-of-the-art absence seizure detection algorithms and conclude it performs on par with these at a much lower computational cost.
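The run-time stage reduces to a multiply-accumulate loop. The sketch below shows only that stage, under the assumption of placeholder filter weights; the paper's offline, data-driven design of the spatio-temporal filter is not reproduced here.

```python
import numpy as np

def detect(eeg, w, threshold):
    """Run-time part of a precomputed linear multi-channel detector.

    eeg: (n_channels, n_samples) array of EEG samples.
    w:   (n_channels, n_taps) filter weights, assumed to have been
         precomputed offline (placeholder here, not the paper's design).
    Only multiply-accumulate operations are needed, which maps directly
    onto a microcontroller MAC unit.  Returns a boolean array marking
    samples whose summed filter output magnitude exceeds the threshold."""
    n_ch, _ = w.shape
    # y[n] = sum_c sum_k w[c, k] * eeg[c, n - k]  (per-channel FIR, then sum)
    y = sum(np.convolve(eeg[c], w[c], mode='same') for c in range(n_ch))
    return np.abs(y) > threshold
```

On a microcontroller the same computation would be a nested fixed-point loop; NumPy's `convolve` stands in for it here.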
Human activity recognition is an application of machine learning that aims to identify activities from raw activity data acquired by different sensors. In medicine, human gait is commonly analyzed by doctors to detect abnormalities and determine possible treatments for the patient. Monitoring the patient’s activity is paramount in evaluating the treatment’s evolution. This type of classification is still not precise enough, which may lead to unfavorable reactions and responses. A novel methodology that reduces the complexity of extracting features from multimodal sensors is proposed to improve human activity classification based on accelerometer data. A sliding-window technique is used to demarcate the first dominant spectral amplitude, decreasing dimensionality and improving feature extraction. In this work, we compared several state-of-the-art machine learning classifiers evaluated on the HuGaDB dataset and validated on our dataset. Several configurations to reduce features and training time were analyzed using multimodal sensors: all-axis spectrum, single-axis spectrum, and sensor reduction.
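The sliding-window dominant-amplitude idea can be sketched as follows: each window of an accelerometer axis is collapsed to the frequency and amplitude of its largest spectral peak, a 2-value feature instead of a full spectrum. The window and hop sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def dominant_amplitudes(signal, fs, win, hop):
    """Slide a window over one accelerometer axis and keep, per window, only
    the frequency and amplitude of the dominant spectral peak.  A minimal
    sketch of the dimensionality-reduction idea; window/hop sizes are
    assumptions.  Returns an array of shape (n_windows, 2)."""
    feats = []
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spec = np.abs(np.fft.rfft(seg - seg.mean()))  # mean removal drops DC
        k = int(np.argmax(spec))
        feats.append((freqs[k], spec[k]))
    return np.array(feats)
```

The resulting (frequency, amplitude) pairs can then be fed to any of the compared classifiers.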
Unsupervised statistical learning (USL) techniques, such as self-organizing maps (SOMs), principal component analysis (PCA), and independent component analysis (ICA), explore different statistical properties to efficiently process information from multiple variables. USL algorithms have been successfully applied in experimental high-energy physics (HEP) and related areas for different purposes, such as feature extraction, signal detection, noise reduction, signal-background separation, and removal of cross-interference from multiple signal sources in multisensor measurement systems. This paper presents both a review of the theoretical aspects of these signal processing methods and examples of some successful applications in experiments in HEP and related areas.
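As a concrete flavor of the noise-reduction use case mentioned above, PCA-based denoising projects multichannel data onto its leading principal components and discards the rest. This is a generic textbook illustration, not a specific experiment from the review.

```python
import numpy as np

def pca_denoise(X, n_keep):
    """Project (n_samples, n_vars) data X onto its n_keep leading principal
    components and reconstruct, discarding low-variance directions assumed
    to carry noise.  Generic PCA denoising sketch."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data: principal directions are the rows of Vt
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:n_keep]
    return Xc @ Vk.T @ Vk + mu
```

Keeping all components reconstructs the data exactly; keeping fewer removes the minor-variance (noise-dominated) subspace.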
Much work has been done to optimize wavelet transforms for the SIMD extensions of modern CPUs. However, these approaches are mostly restricted to the vertical part of 2-D transforms with line-wise memory layouts, because this leads to a rather straightforward SIMD implementation. Using a common wavelet filter as an example, this work presents new approaches to applying SIMD operations to 1-D transforms that produce reasonable speedups. As a result, the performance of algorithms that use wavelet transforms, such as JPEG2000, can be increased significantly. Various variants of parallelization are presented and compared, and their advantages and disadvantages for general filters are discussed.
A method for interpreting elastic-lidar return signals in heavily polluted atmospheres is presented. It is based on an equation derived directly from the classic lidar equation, which highlights gradients of the atmospheric backscattering properties along the laser optical path. The method is evaluated by comparing its results with those obtained with the differential absorption technique. The results were obtained from locating and ranging measurements in pollutant plumes and contaminated environments around central México.
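For context, the classic single-scattering elastic lidar equation from which such methods are derived has the standard form (symbols follow common lidar convention and are not necessarily the paper's notation):

```latex
P(R) \;=\; P_0\,\frac{c\,\tau}{2}\,A\,\eta\,
\frac{\beta(R)}{R^{2}}\,
\exp\!\left(-2\int_{0}^{R}\alpha(r)\,\mathrm{d}r\right)
```

where \(P(R)\) is the received power from range \(R\), \(P_0\) the emitted power, \(c\tau/2\) the effective pulse length, \(A\) the receiver area, \(\eta\) the system efficiency, \(\beta\) the backscatter coefficient, and \(\alpha\) the extinction coefficient. Differentiating \(\ln\!\left(R^{2}P(R)\right)\) with respect to \(R\) isolates the range derivatives of \(\beta\) and \(\alpha\), which is presumably the sense in which the derived equation "highlights gradients" of the backscattering properties.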
The present paper describes a novel approach to performing feature extraction and classification in possibly layered circular structures, as seen in two-dimensional cutting planes of three-dimensional tube-shaped objects. The algorithm can therefore be used to analyze histological specimens of blood vessels as well as intravascular ultrasound (IVUS) datasets. The approach uses a radial signal-based extraction of textural features in combination with machine learning methods to integrate a priori domain knowledge. In principle, the algorithm solves a two-dimensional classification problem that is reduced to a set of parallel, tractable time-series analyses. A multiscale approach determines a feature vector for each analysis using either a wavelet transform (WT) or an S-transform (ST). The classification is performed with machine learning methods, here support vector machines. A modified marching-squares algorithm extracts the polygonal segments for the two-dimensional classification. The accuracy is above 80% even in datasets with a considerable quantity of artifacts, while the mean accuracy is above 90%. The benefit of the approach therefore mainly lies in its robustness, efficient calculation, and integration of domain knowledge.
A common problem in electroencephalogram (EEG) analysis is how to separate EEG patterns from noisy recordings. Independent component analysis (ICA), an effective method to recover independent sources from sensor outputs without assuming any a priori knowledge, has been widely used in the analysis of such biological signals. However, when dealing with EEG signals, the mixing model usually does not satisfy the standard ICA assumptions due to the time-variable structures of the source signals. In this case, EEG patterns should be precisely separated and recognized in a short time window. Another issue is that ICA usually over-separates the signals, owing to the overlearning problem, when the length of the data is insufficient. To tackle these problems, we exploit both the high-order statistics and the temporal structures of the source signals under the condition of short time windows. We utilize a temporal independent component analysis (tICA) method to formulate the blind separation problem in a new framework that analyzes the mutual independence of the residual signals. Furthermore, to find better features for classification, both temporal and spatial features of the EEG recordings are extracted by integrating tICA with other algorithms, such as the Common Spatial Pattern (CSP), for feature extraction. Computer simulations evaluate the efficiency and performance of tICA based on EEG data recorded not from normal subjects but from special populations suffering from neurophysiological diseases such as stroke. To the best of our knowledge, this is the first time that the EEG characteristics of stroke patients have been explored and reported using an ICA algorithm. The superior separation performance and high classification rate demonstrate that the tICA method is promising for EEG analysis.
To effectively study the vibration characteristics of tracks under different track structures, wavelet transforms of the vibration data are used for pattern classification of vibration features. First, acceleration data of the track are collected at a running speed of 150 km/h at 26 positions on a slab tangent track, a ballast tangent track, and a ballast curve track by a wireless sensor network (WSN). They are then analyzed using power spectral densities (PSDs) and wavelet-based energy spectrum analysis. The paper elaborates on the reasons for the differences in vibration energy and excitation frequency arising from the mechanisms of the different frequency bands and the corresponding track structures. On this basis, the instantaneous frequencies, vibration energies, and durations in the low, medium, and high frequency bands are selected as the features for the three track structures. A function curve representing the features is proposed to detect abnormal track structure by correlation analysis. Finally, the proposed pattern classification method has been validated by experimental tests.
The output power of a wind turbine is closely related to its health state, and the health status assessment of wind turbines influences the operational maintenance and economic benefit of a wind farm. To address the current problem that the health status of the whole machine in a wind farm is hard to determine accurately, this paper proposes a health status assessment method, based on power prediction and the Mahalanobis distance (MD), to assess and predict the health status of the whole wind turbine. First, on the basis of Bates theory, a scientific analysis of historical data from the wind farm's SCADA system explains the relation between wind power and the running states of wind turbines. Second, an active power prediction model is used to obtain the power forecast under the healthy status of the wind turbines, and the difference between the forecast and actual values constructs the standard residual set, which serves as the benchmark for health status assessment. In the assessment process, the test-set residuals are obtained from the network model; the MD between the test residual set and the normal residual set is calculated and then normalized as the health status assessment value of the wind turbines. This method constructs an evaluation index that reflects the electricity-generating performance of wind turbines rapidly and precisely, effectively avoiding the defect that existing methods are easily influenced by subjective judgment. Finally, SCADA system data from a wind farm in Fujian province are used to verify the method. The results indicate that the new method can effectively assess the health status trend of wind turbines and provide a new means of fault warning.
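The Mahalanobis-distance step can be sketched as below. This is a minimal illustration assuming NumPy: the residual sets are placeholders, and the min-max scaling to [0, 1] is an assumed normalization scheme, since the paper's exact scaling is not stated in the abstract.

```python
import numpy as np

def health_index(test_residuals, normal_residuals):
    """Mahalanobis distance of test-set power-prediction residuals from the
    normal (healthy) residual benchmark, min-max normalized to [0, 1]
    (assumed scaling).  Both inputs are (n_samples, n_features) arrays;
    lower values indicate behavior closer to the healthy benchmark."""
    mu = normal_residuals.mean(axis=0)
    cov = np.atleast_2d(np.cov(normal_residuals, rowvar=False))
    cov_inv = np.linalg.inv(cov)
    diff = test_residuals - mu
    # Squared MD per row: diff_i^T  cov_inv  diff_i
    md = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
    lo, hi = md.min(), md.max()
    return (md - lo) / (hi - lo + 1e-12)
```

A residual vector near the healthy mean scores near 0; one far from it scores near 1.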
With the rapid development of communication engineering and related techniques, spread spectrum communication and sparse analysis have become hot research topics in the research community. A novel anti-jamming-driven, sparse analysis-based spread spectrum communication methodology is proposed in this paper, which mainly enhances the spread spectrum modulation and the spread spectrum demodulation at the receiving end. The process of spread spectrum communication is analyzed according to the working principles of different methodologies, including direct-sequence spread spectrum. In this paper, sparse representation, dictionary learning, anti-jamming analysis, and basic communication theory are integrated to enhance the traditional spread spectrum communication analysis framework. The experimental results demonstrate the robustness of the proposed method.
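The classical direct-sequence spread spectrum (DSSS) baseline that such methods build on can be sketched as follows. This shows only textbook spreading/despreading with an assumed ±1 pseudo-noise sequence, not the paper's sparse-analysis enhancement.

```python
import numpy as np

def dsss_modulate(bits, pn):
    """Direct-sequence spreading: each data bit, mapped {0,1} -> {-1,+1},
    multiplies the whole pseudo-noise (PN) chip sequence."""
    symbols = 2 * np.asarray(bits) - 1
    return (symbols[:, None] * pn[None, :]).ravel()

def dsss_demodulate(rx, pn):
    """Despreading: correlate each chip block with the PN sequence and take
    the sign of the correlation.  Narrowband jamming is spread out by the
    PN multiplication, which is the anti-jamming mechanism DSSS provides."""
    blocks = rx.reshape(-1, len(pn))
    corr = blocks @ pn
    return (corr > 0).astype(int)
```

With a length-N ±1 PN sequence, a clean transmitted bit despreads to a correlation of ±N, so moderate additive interference does not flip the decision.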
The early multi-point leakage source signals of urban gas pipelines are weak and easily affected by environmental noise and signal interference between adjacent sources, which causes large leakage-positioning errors. In this paper, an integrated signal processing method combining variational mode decomposition (VMD), blind source separation (BSS), and relative entropy is presented for multi-point pipeline leakage signals and source positioning. First, VMD and relative entropy were employed to obtain effective intrinsic mode function (IMF) components and their features during the decomposition of the pipeline leakage signal; relative entropy was used to improve the signal-to-noise ratio and extract the features of the leakage signal. Then, BSS was used to decompose the multi-point mixed leakage signals into independent signal components. Finally, the time difference and wave velocity were obtained by calculating the time-domain distribution of the independent signal components and the main modal guided wave, respectively, so that precise positioning of the pipeline leakage was realized. The results show that the proposed combined method can not only select and extract the leakage signal adaptively but also separate single independent signals from the multi-point mixed signal, which helps to locate multi-point pipeline leakage sources more accurately.
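The final positioning step rests on the classic two-sensor time-difference formula, which can be sketched as below. Only this last step is shown; the VMD/BSS separation that yields the arrival-time difference is not reproduced here.

```python
def locate_leak(sensor_distance, time_delay, wave_speed):
    """Two-sensor time-difference localization: for a leak between sensors
    spaced L apart, with arrival-time difference dt = t2 - t1 (positive
    when sensor 1 hears the leak first) and guided-wave speed v, the
    distance from sensor 1 is d1 = (L - v*dt) / 2, since
    dt = (d2 - d1)/v and d1 + d2 = L."""
    return (sensor_distance - wave_speed * time_delay) / 2.0
```

For example, with sensors 100 m apart, a wave speed of 1000 m/s, and a measured delay of 0.04 s, the leak lies 30 m from the first sensor.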
In this paper, in order to identify the variations before an earthquake and extract abnormal signals related to it, the minute values of the three geomagnetic components Z, H, and F from 15 geomagnetic stations within 600 km of the epicenter before the MS 6.6 Minxian–Zhangxian earthquake in Gansu were analyzed. The results are as follows. (1) When fractal analysis was applied directly, only 3 of the 15 geomagnetic stations showed synchronous anomalous signals. (2) With the method of this paper, 9 of the 15 stations (including the three stations above) yielded two synchronous anomalous signals, and six of the nine stations presented additional synchronous anomalous signals. (3) Of the three anomalous signals extracted, one had a medium-term effect and two had short-term effects. (4) The anomalous duration of the Z component at the nine stations was longer than that of H and F, and the duration decreased as the epicentral distance increased. While the proposed method could not clearly indicate the exact relationship between the anomalous signals and the earthquake, the extracted signals were shown to be effective and well correlated with the earthquake.
Recently, Low-Density Parity-Check (LDPC) codes based on Affine Permutation Matrices (APM) have drawn considerable attention. Compared with Quasi-Cyclic LDPC (QC-LDPC) codes, these codes offer several advantages: APM-LDPC codes achieve lower cycle distributions together with better minimum Hamming distance and greater girth. This paper illustrates the importance of the cycle distribution by comparing APM-LDPC codes with QC-LDPC codes. A particular form of APM-LDPC codes is then proposed and studied; the new codes reduce the cycle distribution to a larger extent. In the subsequent research, an effective method for constructing the new codes with a fixed girth is proposed. Simulations show that the construction method is reasonable and effective, and that the transmission performance is better than that of traditional methods. Finally, the implementation is carried out and verified on an FPGA.
This paper describes a deep learning-based method for long-term video interpolation that generates intermediate frames between two music performance videos of a person playing a specific instrument. Recent advances in deep learning techniques have successfully generated realistic images with high fidelity and high resolution in short-term video interpolation. However, there is still room for improvement in long-term video interpolation due to the lack of resolution and temporal consistency in the generated video. Particularly in music performance videos, the music and the human performance motion need to be synchronized. We solved these problems by using human poses and music features essential for music performance in long-term video interpolation. By closely matching human poses with the music and videos, it is possible to generate intermediate frames that synchronize with the music. Specifically, we obtain the human poses of the last frame of the first video and the first frame of the second video in the performance videos to be interpolated as key poses. Then, our encoder–decoder network estimates the human poses in the intermediate frames from the obtained key poses, with the music features as the condition. To construct an end-to-end network, we utilize a differentiable network that transforms the estimated human poses in vector form into human poses in image form, such as human stick figures. Finally, a video-to-video synthesis network uses the stick figures to generate intermediate frames between the two music performance videos. Quantitative experiments show that the generated performance videos were of higher quality than those of the baseline method.
To solve the key problems of secondary surveillance radar signal processing on special devices such as DSPs and FPGAs, a parallel processing scheme for the secondary surveillance radar response signal is proposed based on a CPU–GPU architecture. It can effectively reduce the difficulty of code development and improve the portability of the program. The parallel optimization design of each processing model of the response signal exploits the characteristics of shared memory in the CPU–GPU architecture to improve the processing speed of the algorithm. The proposed scheme is tested and analyzed on different graphics cards using the secondary surveillance radar Mode-5 response signal. The experimental results show that the signal processing algorithm takes 8390.52960 µs on an NVIDIA Tesla K40c graphics card, saving 51.98% of the time compared with an NVIDIA Quadro K4200 graphics card, and making real-time processing of the secondary surveillance radar signal possible.
Concealed microcracks in shield tunnel lining are small, of unknown shape, and difficult to detect. Based on the finite-difference time-domain (FDTD) approach, this study proposed a new construction method for a refined grid that accommodates and combines the variable shapes of microcracks, and that is capable of representing cross-type, mesh-type, and wave-type microcrack models. The new method also configured steel bars in the models to simulate actual engineering conditions, and characteristic response images of the models under different working conditions were obtained using ground penetrating radar (GPR) technology; these were then compared and analyzed to identify the imaging characteristics and differences of microcracks with variable geometric shapes. The waveform, amplitude, and time span of the characteristic single-channel signal were furthermore studied. The results showed that the new method could successfully simulate the GPR characteristic response images of 0.5 mm microcracks of diverse geometric shapes. When the microcracks were wavy, their real shape could only be determined after signal pre-processing. The density and quantity of steel bars directly affected the appearance of the microcrack characteristic signals: the greater the density and quantity of steel bars, the greater the interference with the waveform, amplitude, and time-frequency range of the electromagnetic wave signals. A special correlation existed between the maximum root-mean-square value of the amplitude and the single-channel signal of the cracks. Moreover, the finding that the extension in time and distance in the GPR time-distance profile intersected with the cracks was deemed potentially to provide fresh insights into identifying the characteristic points of the cracks in the GPR images.
The new method proposed in this study successfully obtained the GPR numerical simulation images and characteristic signals of microcracks with variable geometric shapes. Through the processing and analysis of the characteristic response signals of microcracks, the conclusions obtained were considered to provide an interpretation basis for the detection of microcracks in practical engineering.