For long-term continuous monitoring of bridge-related indicators, it is necessary to deploy relatively complete acquisition equipment on the bridge that can feed back its various information parameters. However, the large number of parameters needed to describe the bridge makes the structure of the monitoring system complex and unwieldy. Furthermore, the huge amount of data collected and the complex calculation process also make the monitoring system more difficult to operate. In this regard, more scientific and reasonable indicators, a lightweight data structure, and stable data transmission and analysis programs should be chosen to improve the accuracy of continuous monitoring. To establish a stable and efficient bridge monitoring system, we use the distance coefficient-effective independence algorithm for optimization. We then calculate the relevant information of the strain environment with the help of a neural network model, strengthen deep-learning training through the YOLOv5s model, and improve the attention-concentration task scheduling strategy. In this way, we address the relatively low computing power of embedded systems. Different weights are assigned to each fused feature map, and the nodes at the highest and lowest levels are deleted, so that a concise and efficient lightweight network model is constructed. Multiple iterations are performed to achieve deeper feature fusion. As a result, the complexity of the model is effectively reduced and the monitoring performance is improved. Finally, experimental analysis shows that, compared with the traditional fusion model, the improved fusion network structure for bridge health monitoring reduces the number of parameters by 7.37%, increases the detection speed by 18.2%, and reduces the amount of computation by 42.92%, while the average detection accuracy reaches 95.33%. It is verified that the proposed method can effectively improve the accuracy and risk control ability of the detection data by learning from a small number of labeled samples. It also has great practical significance and market value for the design and optimization of bridge health monitoring systems and is suitable for the monitoring data of large-scale construction projects.
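The weighted fusion of feature maps mentioned above can be illustrated with a minimal sketch. The code below is not the paper's implementation; the map shapes, the normalization of the weights, and the two-input fusion are illustrative assumptions in the spirit of lightweight weighted feature fusion.

```python
import numpy as np

def weighted_fusion(feature_maps, weights, eps=1e-4):
    """Fuse same-shape feature maps with non-negative, normalized weights.

    Illustrative only: the weights are clipped and normalized so the fused
    map is a convex combination of its inputs, a common trick in
    lightweight fusion networks.
    """
    w = np.maximum(weights, 0.0)          # enforce non-negative weights
    w = w / (w.sum() + eps)               # normalize so the weights sum to ~1
    return sum(wi * f for wi, f in zip(w, feature_maps))

# Hypothetical example: fuse two 64x64 feature maps with 8 channels each.
p_mid = np.random.rand(8, 64, 64)         # mid-level feature map
p_up = np.random.rand(8, 64, 64)          # upsampled higher-level map
fused = weighted_fusion([p_mid, p_up], np.array([0.7, 0.3]))
print(fused.shape)                        # (8, 64, 64)
```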
When a fault occurs in the distribution network, the power restoration system receives the relevant data and quickly analyzes and evaluates it. After the distribution network is reconfigured to locate and isolate the fault, the network distribution constraints are applied to find the optimal solution of multiple objective functions and the best choice for power restoration. At present, most complete solutions transform the problem into a single objective in the traditional way; the resulting schemes are complex to implement and involve very large amounts of data, which affects the efficiency of the reconstruction model. In this paper, we use the distributed network to select the fault node of the distribution network. We combine a modified droop control coefficient with a consensus algorithm to realize secondary optimal control of the multi-microgrid system. We also use real-time dynamic data to follow the target value, keeping the power supply load within a controllable range and avoiding regional blackouts. According to the information exchange mode between distribution network nodes and microgrids, a sparse-communication data analysis model is established to control the power output and load capacity of each node in the distribution network area. The composite Depth-First Search algorithm constructed in this paper can handle both the multi-node energy-matching service requirements and the fault analysis model, thereby solving for the output of the preferred nodes. The experimental results of the simulation model verify that the resilience of the distribution network's microgrid-group operation mode is improved after optimization with security as the objective. The results also show that the theory and method in this paper are superior to the current mainstream controllable schemes, with the advantages of short response time, strong anti-interference ability, and good noise reduction, supporting the controllable management of emergency loads in the distribution network.
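As a rough illustration of how a depth-first traversal can be combined with per-node constraints in a distribution network graph, the sketch below searches for a restoration path from a source node to a faulted area while respecting a simple spare-capacity limit. The graph, capacities, and node names are hypothetical and do not reproduce the paper's composite Depth-First Search algorithm.

```python
def dfs_restoration_path(graph, capacity, demand, source, target, visited=None):
    """Depth-first search for a restoration path whose nodes can carry `demand`.

    `graph` maps each node to its neighbours; `capacity` maps each node to the
    spare power it can route. Purely illustrative of combining DFS with
    per-node feasibility checks.
    """
    if visited is None:
        visited = set()
    if capacity.get(source, 0) < demand:
        return None                      # node cannot carry the load
    if source == target:
        return [source]
    visited.add(source)
    for nxt in graph.get(source, []):
        if nxt not in visited:
            path = dfs_restoration_path(graph, capacity, demand, nxt, target, visited)
            if path is not None:
                return [source] + path
    return None

# Hypothetical 4-node feeder: restore node "D" from substation "A" for a 2 MW load.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
capacity = {"A": 10, "B": 1, "C": 5, "D": 3}
print(dfs_restoration_path(graph, capacity, demand=2, source="A", target="D"))
# ['A', 'C', 'D'] -- the path through "B" is rejected because its spare capacity is too low
```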
With the gradual maturity of deep learning and the Internet of Things, there is an urgent need to establish a smart grid, and non-intrusive load monitoring (NILM) technology is a key step in making the power grid intelligent. The traditional load monitoring method is an intrusive scheme in which monitoring sensors are usually installed at the output end of each grid load. This method requires a great deal of manpower and material resources for equipment installation and maintenance, which is difficult to sustain. Instead, monitoring equipment is installed at the entrance of the distribution network to decompose the classification and operating status of the individual electrical appliances in the grid. Aiming at the problem of long-term multi-state electrical appliance monitoring, this paper optimizes the two best-performing deep learning models, Seq2Seq and WindowGRU, through activation function optimization, regularization function optimization, and neural network structure optimization. PReLU, Leaky-ReLU, and ELU activations are used and tested. For the optimization based on regularization functions, the F1-score reaches 0.9700, which is 0.1534 higher than that of the Seq2SeqLR optimization algorithm in this paper. Meanwhile, it also reaches 0.8100 for other devices, which is 0.2373 higher than the nilmTCN designed in this paper. The recall of the identification is improved to 0.6673, significantly higher than that of other models. It is shown that the research in this paper can improve the effective analysis of load energy consumption data and can minimize unnecessary energy consumption, thereby saving electricity.
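A minimal PyTorch sketch of the kind of change that activation-function optimization involves: swapping the activation applied after a window-based GRU model (e.g., PReLU vs. ELU). The layer sizes, window length, and model name are assumptions for illustration, not the Seq2Seq or WindowGRU architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

class WindowGRUDisaggregator(nn.Module):
    """Toy window-based GRU regressor; the activation is configurable so that
    ReLU, PReLU, LeakyReLU, or ELU variants can be compared."""

    def __init__(self, hidden=64, activation=nn.PReLU):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.act = activation()           # e.g. nn.ReLU, nn.PReLU, nn.ELU
        self.head = nn.Linear(hidden, 1)  # predicts one appliance's power

    def forward(self, x):                 # x: (batch, window_len, 1)
        out, _ = self.gru(x)
        return self.head(self.act(out[:, -1, :]))  # use the last time step

# Hypothetical usage: a batch of 8 aggregate-power windows of length 99.
model = WindowGRUDisaggregator(activation=nn.ELU)
y = model(torch.randn(8, 99, 1))
print(y.shape)                            # torch.Size([8, 1])
```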
With the rapid expansion of power grids and increasing user demand, effectively identifying and monitoring abnormal electricity consumption have become crucial for ensuring grid stability and operational efficiency. Traditional anomaly detection methods often struggle with scalability and accuracy, particularly as the volume and density of electricity data grow. To address these challenges, this paper introduces a novel electricity anomaly monitoring framework that integrates real-time data acquisition and advanced classification modeling techniques. Our approach leverages a parallel classification algorithm designed to efficiently handle large datasets and detect anomalies with high accuracy. Key features of abnormal user data are extracted using information entropy, and electricity consumption data are continuously collected through a wireless network. The proposed method then preprocesses and classifies the data, applying a random forest model to detect anomalies and monitor usage patterns. Experimental results indicate that our approach significantly enhances both the accuracy and efficiency of electricity anomaly detection, demonstrating its robustness and potential for large-scale deployment in power grid systems.
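A minimal sketch of the pipeline described above, using the Shannon entropy of a user's consumption histogram as one feature and a random forest classifier; the feature set, bin count, and synthetic labels are hypothetical, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def entropy_feature(consumption, bins=10):
    """Shannon entropy of a user's consumption histogram (one illustrative feature)."""
    hist, _ = np.histogram(consumption, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical training data: one row of features per user
# (entropy, mean, std of a daily consumption series), plus a normal/abnormal label.
rng = np.random.default_rng(0)
users = [rng.normal(5, 1, 96) for _ in range(50)] + \
        [rng.normal(5, 4, 96) for _ in range(50)]        # erratic (abnormal) users
X = np.array([[entropy_feature(u), u.mean(), u.std()] for u in users])
y = np.array([0] * 50 + [1] * 50)                         # 1 = abnormal

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```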
An external scanning ion microbeam system has been developed for in-air micro-PIXE analysis at JAERI Takasaki. In recent years, the analysis system has been widely used for a variety of research. The system consists of the external scanning ion microbeam system, a multi-parameter data acquisition system, a file transfer protocol (FTP) server, and analysis software. The system software provides a graphical user interface for interaction between users and the experimental setup. The server is connected to the Internet and allows remote users to access the experimental data.
Scanning nuclear microprobes for Rutherford backscattering (RBS) and particle-induced X-ray emission (PIXE) analysis with light ions have been formed using variable objective slits and a magnetic quadrupole doublet. Beam optics, focusing techniques, factors limiting the minimum beam-spot size, and data acquisition systems are discussed. Two- and three-dimensional RBS mapping and channeling contrast mapping of processed semiconductor layers, such as multilayered wiring and focused ion-implanted layers, are demonstrated. Problems in microbeam analysis, such as radiation damage caused by the probe beams, are also discussed.
This paper addresses problems in underwater seismic image analysis and interpretation. We discuss basic issues involving the acquisition and processing of signals due to reflections and multiple reflections from the seabed and the underlying media. Aspects of compensating for the dynamic heave component present in the signals using conventional and parallel Kalman filtering are discussed. Considerations for modeling the heave process are treated briefly. A number of information extraction algorithms that are central to the interpretation process are covered. Emphasis is given to delay and amplitude parameter estimation using a combined cross-relation-minimum variance filter, linearized recursive estimation, and an event enhancement filter. Ingredients of a knowledge-based interpretation system for image analysis and interpretation are discussed.
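A minimal one-dimensional Kalman filter sketch for tracking a heave component such as the one described above, using a constant-velocity model; the state model, noise levels, and synthetic measurements are assumptions for illustration, not the conventional or parallel formulations in the paper.

```python
import numpy as np

def kalman_heave(measurements, dt=0.1, q=1e-3, r=0.05):
    """Track heave displacement/velocity from noisy displacement measurements
    with a constant-velocity Kalman filter (illustrative parameters only)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # only displacement is measured
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)          # update with the measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return np.array(estimates)

# Hypothetical heave: a 0.2 Hz swell observed with additive noise.
t = np.arange(0, 30, 0.1)
noisy = np.sin(2 * np.pi * 0.2 * t) + 0.2 * np.random.randn(t.size)
print(kalman_heave(noisy)[:5])
```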
Electroencephalography (EEG) is the recording of the electrical activity of the brain. The 10–20 system is the standard electrode placement method used to acquire EEG data; it uses 21 electrodes to record the brain's electrical activity. Patient preparation and correct electrode placement are important for obtaining reliable outputs. The current 10–20 system requires considerable time for patient preparation and also causes discomfort because of the large number of electrodes used or the need to wear an uncomfortable cap. This paper focuses on reducing the number of electrodes, thereby reducing both patient discomfort and preparation time. Advances in hardware and software processing have led to the use of brain waves for communication between humans and computers. This work deals with an EEG-based Brain–Machine Interface (BMI) and the design of a portable single-channel EEG signal acquisition system. The EEG signal was acquired using a data acquisition module [National Instruments (NI) myDAQ] and viewed in the NI Laboratory Virtual Instrument Engineering Workbench (LabVIEW) environment. It was observed that the peak-to-peak amplitudes of the alpha, beta, and theta waves change in accordance with the activity the subjects performed. The developed instrument was therefore tested on 10 different subjects to acquire the alpha, beta, and theta waves while they performed different activities. From the results, it can be concluded that the developed system can be used to study a person's brain waves (alpha, beta, and theta) based on the activity performed by the subject, with a limited number of electrodes.
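A minimal sketch of extracting theta, alpha, and beta activity from a single-channel recording using Welch's power spectral density; the sampling rate, band edges, and synthetic signal are assumptions for illustration, not the LabVIEW processing chain used with the NI myDAQ.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # Hz, common definitions

def band_powers(eeg, fs=256):
    """Return mean power in the theta/alpha/beta bands for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Hypothetical single-channel signal: a 10 Hz (alpha) rhythm plus noise.
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)
print(band_powers(eeg, fs))    # alpha power should dominate
```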
In this paper, we report a new third-order chaotic jerk system with a double-hump (bimodal) nonlinearity. Bimodal nonlinearity is of basic interest in biology, physics, and other fields. The proposed jerk system exhibits a chaotic response for a proper choice of parameters. Importantly, the chaotic response is also obtained by tuning the nonlinearity while preserving its bimodal form. We analytically obtain the symmetry, dissipativity, and stability of the system and find the Hopf bifurcation condition for the emergence of oscillation. Numerical investigations are carried out, and the different dynamics emerging from the system are identified through the calculation of the eigenvalue spectrum, two-parameter and single-parameter bifurcation diagrams, the Lyapunov exponent spectrum, and the Kaplan–Yorke dimension. We identify that the form of the nonlinearity can bring the system into the chaotic regime, and we study the effect of varying the parameters that control this form. Finally, we realize the proposed system in a hardware-level electronic experiment and study its behavior in the presence of noise, fluctuations, parameter mismatch, etc. The experimental results are in good agreement with the analytical and numerical ones.
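For readers less familiar with the terminology, a jerk system evolves the third time derivative of a single variable, $\dddot{x} = J(x,\dot{x},\ddot{x})$. One illustrative form with a double-hump nonlinearity (an assumption for exposition, not the paper's exact equations) is

\[
\dddot{x} + a\,\ddot{x} + b\,\dot{x} = c\,x^{2}e^{-x^{2}},
\]

where $a$, $b$, $c$ are parameters and the right-hand side is bimodal, with two maxima at $x=\pm 1$; rescaling $c$ tunes the strength of the nonlinearity while preserving its double-hump shape.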
Because the bulk-transfer bandwidth of the host is unstable, a universal serial bus (USB) 3.0 hyperspectral data transfer system can only achieve a data transfer rate of about 30 MBps, which is less than one-fifteenth of the USB 3.0 theoretical transfer rate of 5 Gbps. For an aerial hyperspectral imager, the data transfer system is required to accommodate different detector frame rates for different speed-to-height ratios. In this paper, we propose a high-speed, adjustable synchronous transfer system. The USB 3.0 peripheral controller uses synchronous first-in first-out (FIFO) and automatic direct memory access (DMA) to achieve the highest data transfer bandwidth. The USB acquisition software collects one data block in every fixed time interval. The size in bytes of every data block must be an integer multiple of the maximum data packet payload size, which is a necessary condition for using automatic DMA and bulk transfers. The data transfer rate of the system can be adjusted directly by changing the data block size and the acquisition time interval. The experimental results show that the synchronous transfer mechanism enables error-free transfer at 100 MBps, providing the high data transfer bandwidth needed by a hyperspectral data processing system.
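The block-size constraint described above can be made concrete with a short sketch: given a target data rate and an acquisition interval, the block size is rounded down to an integer multiple of the maximum packet payload so that automatic DMA bulk transfers remain valid. The numbers below are illustrative assumptions (1024 bytes is the USB 3.0 bulk maximum packet size), not the exact parameters of the proposed system.

```python
def block_size_bytes(target_rate_mbps, interval_ms, max_payload=1024):
    """Round the per-interval block size down to a multiple of the packet payload.

    target_rate_mbps : desired throughput in MB/s
    interval_ms      : fixed acquisition interval in milliseconds
    max_payload      : maximum bulk-packet payload in bytes (1024 for USB 3.0)
    """
    raw = target_rate_mbps * 1_000_000 * interval_ms / 1000.0   # bytes per interval
    blocks = int(raw // max_payload)
    return blocks * max_payload

# Hypothetical settings: 100 MB/s with a 10 ms collection interval.
size = block_size_bytes(100, 10)
print(size, "bytes per block")          # 999424 bytes, a multiple of 1024
actual_rate = size / (10 / 1000) / 1e6
print(round(actual_rate, 2), "MB/s")    # effective rate after rounding (~99.94 MB/s)
```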
In this paper, a real-time, low-cost, geophone-based Elephant Footstep Vibration Detection and Identification (EFVDI) system is proposed. The system design started with a real-time, low-cost, generalized Footstep Vibration Recording and Analyzing (FVRA) system. A series of field experiments to record elephant footstep vibration (target) signals and other possible interfering ground vibration (noise) sources were conducted using the FVRA system. The system's actual field performance was evaluated in terms of maximum detection range, signal amplitude, detection ratio, signal frequency, signal time span, etc. Variations of the system's performance with several input parameters were also investigated. The recorded signals from the target as well as the noise sources were analyzed to extract different Signal Parameters (SPs). All SPs were saved in a Ground Vibration Signal Pattern Library (GVSPL), which was then used to frame an accurate indigenous Elephant Identification Algorithm (EIA). The EIA was embedded in the FVRA system to turn it into the specific EFVDI system. The EFVDI system successfully segregated elephant footsteps from other noise vibrations with high accuracy in a simulated field experiment. The results from the proposed system will provide important data for the ongoing research into developing the much-needed, highly accurate Elephant Early Warning System (EEWS).
Software updates are one of the most critical considerations in any IoT project and must be planned from a very early stage of the production phase. If the production team maintains the program, anything that goes wrong can be corrected and new features can be implemented promptly, provided that remote upgrades are successful, secure, and trusted. The fast growth of embedded and wireless technologies has led to the convergence of embedded and wireless systems. Products should therefore be designed in advance to allow for a different approach, including remote upgrades when new enhancements are ready or when bugs in the existing version have been fixed. In general, it is good to have a software upgrade mechanism in place from the very early stages of product or application definition. This paper offers an ARM- and FPGA-based data acquisition framework that aims to capture wireless signals from 70 MHz to 6 GHz. The framework includes a user interface to configure the sample rate, bandwidth, sample center frequency, sample time, and data storage parameters for sampling, and it comprises the software and the software architecture for the hardware platform. The hardware is built around an ARM processor and an FPGA, and the software is based on the Xilinx Software Development Kit (SDK). In particular, for fast transmission and storage of data over Ethernet, a remote data acquisition scheme based on a server-client model is built.
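A minimal sketch of a server-client transfer loop of the kind described above, in which an acquisition board streams fixed-size data blocks to a remote host over Ethernet for storage. The port number, block size, and file name are hypothetical; the paper's implementation runs on the ARM/FPGA platform with the Xilinx SDK rather than in Python.

```python
import socket
import threading
import time

BLOCK = 4096                      # hypothetical block size in bytes
PORT = 5001                       # hypothetical TCP port

def acquisition_server():
    """Stand-in for the acquisition board: streams blocks of data to one client."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            for i in range(10):                        # send ten blocks, then stop
                conn.sendall(bytes([i % 256]) * BLOCK)

def storage_client(path="capture.bin"):
    """Remote host: receives the stream and writes it to disk."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli, open(path, "wb") as f:
        cli.connect(("127.0.0.1", PORT))
        while True:
            chunk = cli.recv(BLOCK)
            if not chunk:                              # server closed the connection
                break
            f.write(chunk)

threading.Thread(target=acquisition_server, daemon=True).start()
time.sleep(0.2)                   # give the server a moment to start listening
storage_client()
print("stored", 10 * BLOCK, "bytes")
```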
The development of high-speed railway networks and the increased running speeds of high-speed trains (HSTs) have made the aerodynamic interference between HSTs and their surrounding environments increasingly important. Compared with a traditional wind tunnel test, systematically understanding the aerodynamic characteristics of HSTs imposes relatively more stringent requirements, highlighting the need to develop experimental methods and technologies with enhanced dynamic performance. Central South University (CSU) developed a wireless data acquisition system, named the in-model sensory and wireless data acquisition — remote control and processing system (ISWDA-RCPS), which can operate onboard a novel moving train and infrastructure rig. The system was developed to meet current wind tunnel data collection needs, and it avoids the physical cables used in conventional devices, which are extremely susceptible to induced noise. The system accepts inputs from various sensors and transfers the data wirelessly to an access point outside a wind tunnel's test section. To analyze the feasibility of the ISWDA-RCPS with respect to its sensing capabilities and wireless communications, we conduct experiments under multiple operating conditions. Finally, pressure measurements are acquired from a moving Fuxing HST model at different points and used to analyze the aerodynamic behavior of the model.
Visual stylometry is the task of analysing visual art by mathematical and statistical methods. One branch of visual stylometry is classifying whether a painting or drawing was made by the claimed artist; that this is achievable has been demonstrated for several artists and by different approaches. The present authors developed a contourlet-based classification method in a previous paper, and here we investigate how robust the conclusions of this method are to variations in the data collection.
The growing interest in gait recognition based on surface electromyography (sEMG) signals is attributed to their capability to anticipate motion characteristics during human movement. This paper focuses on gait pattern recognition using sEMG signals. Initially, the muscles from which sEMG signals are collected are determined based on the distinct characteristics of human gait, and data for 12 different gait patterns are collected. Subsequently, the acquired sEMG signals undergo preprocessing and feature extraction. Moreover, various algorithms relevant to gait classification based on surface myoelectric signals are investigated. In this study, we propose a model combining an improved particle swarm optimization algorithm with a long short-term memory network (MPSO-LSTM) for accurately classifying gait patterns from surface myoelectric signals. Experimental results demonstrate the effectiveness of the MPSO-LSTM algorithm in gait recognition based on sEMG signals.
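A minimal sketch of the role particle swarm optimization typically plays in such a model: each particle encodes candidate LSTM hyperparameters (here, hidden-unit count and learning rate), and the swarm moves toward the particle with the best validation score. The fitness function below is a stand-in; in the paper it would involve training the LSTM on sEMG features, and the inertia/acceleration constants are common defaults rather than the improved MPSO settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Stand-in for the validation error of an LSTM trained with these
    hyperparameters (hidden units, learning rate); lower is better."""
    hidden, lr = params
    return (hidden - 64) ** 2 / 1e3 + (np.log10(lr) + 2) ** 2   # toy surface, optimum at (64, 0.01)

def pso(n_particles=20, iters=50, bounds=((8, 256), (1e-4, 1e-1))):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, size=(n_particles, 2))    # positions = hyperparameters
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and acceleration constants
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

best_hidden, best_lr = pso()
print(round(best_hidden), round(best_lr, 4))    # should approach 64 hidden units, lr ~ 0.01
```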
Today, data processing has become a challenging task due to the significant increase in the amount of data collected by various sensors. To build knowledge and forecast from the data, existing data mining techniques compute all numerical attributes in memory simultaneously. However, the over-abundance of factors in the data makes accurate prediction infeasible. This paper implements a new data prediction model using an optimized machine learning algorithm. The proposed data prediction model involves four main phases: (a) data acquisition, (b) feature extraction, (c) data normalization, and (d) prediction. Initially, several datasets from the UCI repository, such as the Bike Sharing Dataset, Carbon Nanotubes, Concrete Compressive Strength, Electrical Grid Stability Simulated Data, and SkillCraft-1 Master Table, are collected. The feature extraction process then extracts first-order statistics, such as the mean, median, standard deviation, maximum, and minimum of the data, and second-order statistics, such as kurtosis, skewness, energy, and entropy. Next, feature normalization is performed to bring the data within a certain range. The normalized features are then fed to a hybrid prediction system that integrates a Recurrent Neural Network (RNN) and a Fuzzy Regression model. As a modification, the number of hidden neurons in the RNN and the membership limits of the Fuzzy Regression model are optimized by a hybrid optimization algorithm that merges the concepts of the Whale Optimization Algorithm (WOA) and Cat Swarm Optimization (CSO), called the Whale Updated Seek Mode-based CSO (WS-CSO) algorithm. Finally, the efficiency of the optimized hybrid predictor for data prediction in different applications is confirmed through performance evaluation and comparative analysis.
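A minimal sketch of the feature extraction and normalization phases described above; the first- and second-order statistics match those listed in the text, while the energy definition (mean squared value), the histogram-based entropy, and the min-max scaling range are assumptions for illustration.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def extract_features(x):
    """First-order (mean, median, std, max, min) and second-order
    (kurtosis, skewness, energy, entropy) statistics of one attribute."""
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    return np.array([
        x.mean(), np.median(x), x.std(), x.max(), x.min(),               # first-order
        kurtosis(x), skew(x), np.mean(x ** 2), -np.sum(p * np.log2(p)),  # second-order
    ])

def min_max_normalize(features):
    """Scale each feature dimension into [0, 1] so all values lie within one limit."""
    f = np.asarray(features, dtype=float)
    span = f.max(axis=0) - f.min(axis=0)
    span[span == 0] = 1.0
    return (f - f.min(axis=0)) / span

# Hypothetical use on three numeric attributes of a dataset.
rng = np.random.default_rng(1)
data = rng.normal(size=(500, 3))
features = np.array([extract_features(data[:, j]) for j in range(data.shape[1])])
print(min_max_normalize(features).round(3))
```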
The Kirchhoff-Law-Johnson-Noise (KLJN) secure key distribution system provides a way of exchanging secure keys using classical physics (electricity and thermodynamics). Several theoretical studies have addressed the performance and applicability of the communication protocol and have indicated that it is protected against all known types of attacks. However, until now, there have been very few real physical implementations and experimental tests of the protocol. With our work, we continue filling this gap. Details of implementing a KLJN-based system are presented using both dedicated hardware and an off-the-shelf solution. Furthermore, the results of experimental tests and an analysis of the performance are presented.
A signal processing hardware platform has been developed for the Low Frequency Aperture Array component of the Square Kilometre Array (SKA). The processing board, called an Analog Digital Unit (ADU), is able to acquire and digitize broadband (up to 500 MHz bandwidth) radio-frequency streams from 16 dual-polarized antennas, channelize the data streams, and then combine them flexibly as part of a larger beamforming system. It is envisaged that there will be more than 8000 of these signal processing platforms in the first phase of the SKA, so particular attention has been devoted to ensuring that the design is low-cost and low-power. This paper describes the main features of the data acquisition unit of this platform and presents preliminary results characterizing its performance.
Most planetary radar applications require recording of complex voltages at sampling rates of up to 20 MHz. I describe the design and implementation of a sampling system that has been installed at the Arecibo Observatory, Goldstone Solar System Radar, and Green Bank Telescope. After many years of operation, these data-taking systems have enabled the acquisition of hundreds of datasets, many of which still await publication.
Owing to improvements in computing power, computer-aided multi-unit acquisition and separation have proliferated rapidly during the last two decades. Utilizing this technology is, however, not easy, since task-specific designs are usually mandatory. To overcome this obstacle, a commercial data acquisition system was used to evaluate whether the task of multi-unit acquisition can be accomplished without the aid of such devices. The results show that spike-triggered acquisition can reduce the data amount to 2% of that sampled by continuous acquisition. However, the interval between two consecutive spikes cannot be shorter than 5 ms. This suggests that specially designed devices are still necessary.
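A minimal sketch of the spike-triggered acquisition idea: only a short window around each threshold crossing is kept, and crossings closer together than 5 ms are ignored, which is why two consecutive spikes cannot be resolved below that interval. The threshold, window length, and sampling rate are illustrative assumptions, not the commercial system's settings.

```python
import numpy as np

def spike_triggered_capture(signal, fs, threshold, window_ms=2.0, dead_time_ms=5.0):
    """Keep only a short window around each threshold crossing.

    Crossings within `dead_time_ms` of the previous trigger are skipped,
    mirroring the 5 ms minimum inter-spike interval noted above.
    """
    win = int(window_ms * 1e-3 * fs)
    dead = int(dead_time_ms * 1e-3 * fs)
    snippets, last_trigger = [], -dead
    above = signal > threshold
    crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1      # rising edges
    for idx in crossings:
        if idx - last_trigger < dead or idx + win > signal.size:
            continue
        snippets.append(signal[idx:idx + win])
        last_trigger = idx
    return np.array(snippets)

# Hypothetical 30 kHz recording: background noise with a few injected "spikes".
fs = 30_000
sig = 0.05 * np.random.randn(fs)             # one second of background noise
for t in (0.2, 0.5, 0.9):
    sig[int(t * fs)] += 1.0                  # three well-separated spikes
snips = spike_triggered_capture(sig, fs, threshold=0.5)
print(snips.shape, f"kept {snips.size / sig.size:.1%} of the samples")
```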