In nonimaging IR seekers, the target radiation received on the IR detector is modulated by a reticle to produce the information signal (IS). The IS contains the tracking error signal (TES), which is proportional to the target position. The TES is used by the control and optics section of the missile, so the main task is to extract the TES from the IS. The accuracy of TES extraction may be degraded by several factors, such as engine noise. In this paper, we apply, for the first time in this field, the Square Root Unscented Kalman Filter (SRUKF) and the Extended Kalman Filter (EKF) to estimate the TES from the IS for a wagon wheel reticle.
Due to the high computational complexity of these algorithms, executing them in real time is not an easy task, especially when the space available for hardware is limited. Using a minicomputer such as the Raspberry Pi 3 Model B+ platform, the task can be accomplished.
The results showed that the SRUKF provided the best phase estimation for the TES. The Raspberry Pi implementation ran in real time, since executing the complete algorithm for one period took less than 5 ms, which is within the strict timing window of our problem.
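As a rough illustration of the filtering step described above (not the authors' implementation), the sketch below runs an EKF over a noisy reticle-style sinusoid to estimate its amplitude and phase, the latter standing in for the TES; the carrier frequency, sampling rate and noise levels are illustrative assumptions.

```python
# Minimal sketch: EKF estimation of the amplitude and phase of a noisy
# reticle-modulated sinusoid (phase stands in for the TES). All signal
# parameters below are assumptions, not values from the paper.
import numpy as np

f_c = 100.0                 # assumed reticle carrier frequency [Hz]
fs = 10_000.0               # assumed sampling rate [Hz]
w = 2.0 * np.pi * f_c

# True signal: amplitude 1.0, phase 0.8 rad (the quantity we want to track)
rng = np.random.default_rng(0)
t = np.arange(0.0, 0.05, 1.0 / fs)
z = 1.0 * np.sin(w * t + 0.8) + 0.05 * rng.standard_normal(t.size)

# State x = [amplitude, phase]; random-walk process model
x = np.array([0.5, 0.0])            # initial guess
P = np.diag([1.0, 1.0])             # initial covariance
Q = np.diag([1e-6, 1e-5])           # process noise
R = 0.05 ** 2                       # measurement noise variance

for k, zk in enumerate(z):
    # Predict (identity dynamics, covariance grows by Q)
    P = P + Q

    # Measurement model h(x) = A * sin(w t_k + phi) and its Jacobian
    s = np.sin(w * t[k] + x[1])
    c = np.cos(w * t[k] + x[1])
    h = x[0] * s
    H = np.array([[s, x[0] * c]])

    # Update
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    x = x + (K * (zk - h)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated amplitude {x[0]:.3f}, phase {x[1]:.3f} rad")
```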
An inertial navigation system (INS) is often integrated with satellite navigation systems to achieve the required precision in high-speed applications. In Global Positioning System (GPS)/INS integration systems, GPS outages are unavoidable and pose a severe challenge. Moreover, because of the use of low-cost microelectromechanical sensors (MEMS) with noisy outputs, the INS diverges during GPS outages, which is why navigation precision severely decreases in commercial applications. In this paper, we improve the GPS/INS integration system during GPS outages by using an extended Kalman filter (EKF) and artificial intelligence (AI) together. In this integration algorithm, the AI receives the angular rates and specific forces from the inertial measurement unit (IMU) and the velocity from the INS at times t and t−1, so the AI has the positioning and timing data of the INS. While GPS signals are available, the output of the AI is compared with the GPS increment so that the AI is trained. During GPS outages, the AI effectively plays the role of the GPS and can thus prevent the divergence of the GPS/INS integration system in GPS-denied environments. Furthermore, we utilize five types of AI module: a multi-layer perceptron (MLP) neural network (NN), a radial basis function (RBF) NN, a wavelet NN, support vector regression (SVR) and an adaptive neuro-fuzzy inference system (ANFIS). To evaluate the proposed approach, we use a real dataset gathered by a mini-airplane. The results demonstrate that the proposed approach outperforms the INS and the GPS/INS integration system with the EKF during GPS outages. In particular, the ANFIS achieved a precision improvement of more than 47.77% over the traditional method.
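A minimal sketch of the AI-aiding idea described above, assuming synthetic data and an MLP as the AI module (none of the feature layouts, network sizes or values below come from the paper): the network is trained on IMU/INS features to predict the GPS position increment while GPS is available, and its predictions stand in for GPS during a simulated outage.

```python
# Sketch only: an MLP learns the GPS position increment from IMU specific
# forces, angular rates and INS velocity at t and t-1 while GPS is available,
# then substitutes that increment during a simulated outage. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
# Features: [f_x, f_y, f_z, w_x, w_y, w_z, v_x, v_y, v_z] at t and t-1 -> 18 dims
X = rng.standard_normal((n, 18))
# Target: GPS position increment over one step (assumed synthetic mapping)
true_map = 0.1 * rng.standard_normal((18, 3))
y = X @ true_map + 0.01 * rng.standard_normal((n, 3))

# Train while GPS is available (first 80% of samples)
split = int(0.8 * n)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
net.fit(X[:split], y[:split])

# During a GPS outage, the network's predicted increments replace GPS updates
pred = net.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"position-increment RMSE during simulated outage: {rmse:.4f}")
```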
A Battery Management System (BMS) monitors each individual cell in a battery pack, and its crucial task is to maintain stability throughout the pack. The BMS is responsible for keeping the battery safe and for preventing harm to the user or the environment. The parameters to be monitored in a battery are voltage, current and temperature. With the collected data, the BMS carefully monitors the charging and discharging behavior of the battery, particularly in Lithium-ion (Li-ion) batteries, whose charging and discharging behaviors are completely different. This paper proposes a real-time, IoT-connected deep learning algorithm for estimating the State-of-Charge (SoC) of Li-ion batteries. The paper sets out its objectives and the congruence between conventional model-based methods and a state-of-the-art deep learning algorithm, specifically a non-recurrent Feed-Forward Neural Network (FNN). It also highlights the advantages of an Internet-of-Things (IoT)-connected deep learning algorithm for SoC estimation of Li-ion batteries in Hybrid Electric Vehicles (HEVs) and Electric Vehicles (EVs). The major advantage of the proposed method is that the Artificial Intelligence (AI)-based technique aims to bring the estimation error below 2% at low cost and in little time, without requiring a model of the battery, on par with the conventional Extended Kalman Filter (EKF), the best-established practical estimation method. Another advantage is that, in an abnormal condition (e.g., unsafe temperature), the If This Then That (IFTTT) IoT mobile application, interfaced with the BMS through the ThingSpeak cloud, sends a notification alert to the battery expert or to the user before an emergency occurs. Finally, the real-time battery parameter data are collected through the ThingSpeak cloud platform for future research and analysis.
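The following is a minimal, hedged sketch of the FNN-based SoC estimation idea, using a toy synthetic cell model; the voltage/SoC relationship, network size and data ranges are assumptions, not the paper's setup.

```python
# Sketch only: a small feed-forward network maps measured voltage, current and
# temperature to a State-of-Charge estimate. The synthetic cell model below is
# an assumption made purely for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 5000
soc = rng.uniform(0.0, 1.0, n)                     # true SoC in [0, 1]
current = rng.uniform(-2.0, 2.0, n)                # A (discharge negative)
temp = rng.uniform(15.0, 45.0, n)                  # deg C
# Toy open-circuit-voltage curve plus IR drop and measurement noise (assumed)
voltage = 3.0 + 1.2 * soc + 0.05 * current + 0.01 * rng.standard_normal(n)

X = np.column_stack([voltage, current, temp])
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
net.fit(X[:4000], soc[:4000])

err = np.abs(net.predict(X[4000:]) - soc[4000:])
print(f"mean absolute SoC error on held-out data: {100 * err.mean():.2f}%")
```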
Micro unmanned aerial vehicles (UAVs) promise to play increasingly important roles in both civilian and military activities. Currently, the navigation of UAVs depends critically on the localization service provided by the Global Positioning System (GPS), which suffers from the multipath effect and line-of-sight blockage and fails to work in indoor, forest or urban environments. In this paper, we establish a localization system for quadcopters based on ultra-wideband (UWB) range measurements. To achieve localization, a UWB module is installed on the quadcopter to actively send ranging requests to fixed UWB modules at known positions (anchors). Once a distance is obtained, it is first calibrated and then passed through outlier detection before being fed to a localization algorithm. The localization algorithm is initialized by trilateration and sustained by an extended Kalman filter (EKF). The position and velocity estimates produced by the algorithm are further fed to the control loop to aid the navigation of the quadcopter. Various flight tests in different environments have been conducted to validate the performance of the UWB ranging and the localization algorithm.
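The sketch below illustrates the described pipeline under assumed anchor positions and noise levels: a linear least-squares trilateration provides the initial position, and a constant-velocity EKF then refines the state with a new range measurement. It is a simplified stand-in, not the authors' implementation.

```python
# Sketch only: trilateration from UWB ranges for initialization, followed by
# one constant-velocity EKF range update. Anchor layout and noise are assumed.
import numpy as np

anchors = np.array([[0.0, 0.0, 2.0],
                    [5.0, 0.0, 2.0],
                    [0.0, 5.0, 2.5],
                    [5.0, 5.0, 2.5]])     # assumed anchor positions [m]
p_true = np.array([2.0, 3.0, 1.0])
rng = np.random.default_rng(3)
ranges = np.linalg.norm(anchors - p_true, axis=1) + 0.05 * rng.standard_normal(4)

# --- Trilateration (linear least squares) for initialization ---
# From |p - a_i|^2 = d_i^2, subtracting the first anchor's equation gives
# 2 (a_i - a_1) . p = d_1^2 - d_i^2 + |a_i|^2 - |a_1|^2
A = 2.0 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
p0, *_ = np.linalg.lstsq(A, b, rcond=None)

# --- One EKF predict/update with a new range from anchor 0 ---
x = np.hstack([p0, np.zeros(3)])            # state: position and velocity
P = np.eye(6)
dt, R = 0.1, 0.05 ** 2
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)   # constant-velocity model
Q = 1e-3 * np.eye(6)

x = F @ x
P = F @ P @ F.T + Q
diff = x[:3] - anchors[0]
pred_range = np.linalg.norm(diff)
H = np.zeros((1, 6)); H[0, :3] = diff / pred_range
z = np.linalg.norm(p_true - anchors[0]) + 0.05 * rng.standard_normal()
S = H @ P @ H.T + R
K = P @ H.T / S
x = x + (K * (z - pred_range)).ravel()
P = (np.eye(6) - K @ H) @ P

print("trilateration init:", np.round(p0, 2), " EKF position:", np.round(x[:3], 2))
```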
An effective reactive collision avoidance algorithm is presented in this paper for unmanned aerial vehicles (UAVs) using two simple, inexpensive pinhole cameras. The vision-sensed data, which consist of the azimuth and elevation angles at the two camera positions, are first processed through a Kalman filter formulation to estimate the position and velocity of the obstacle. Once the obstacle position is estimated, the collision cone philosophy is used to predict a collision over a short period of time. If a collision is predicted, steering guidance commands are issued to the vehicle to steer its velocity vector away using nonlinear differential geometric guidance. A new cubic-spline-based post-avoidance merging algorithm is also presented so that the vehicle rejoins the intended global path quickly and smoothly after avoiding the obstacle. The overall algorithm has been validated using a point-mass model of a prototype UAV with first-order autopilot delay. Both extended Kalman filtering (EKF) and unscented Kalman filtering (UKF) have been tested, and both are found to be quite effective. However, the UKF performed better than the EKF with only a minor compromise in computational efficiency and can therefore be the better choice. Note that, with two cameras, the stereovision signature is combined with the optical flow signature, making the overall signature quite strong for obstacle position estimation. This leads to considerably greater success than using a single pinhole camera, results for which have been published earlier.
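As a hedged sketch of the collision prediction step (not the paper's code), the following check uses the filter's estimated relative position and velocity of the obstacle and flags a collision when the projected miss distance at closest approach falls inside an assumed safety radius, which for constant relative velocity is equivalent to the relative velocity lying inside the collision cone.

```python
# Sketch only: collision-cone style check from estimated relative state.
# The safety radius and example numbers are illustrative assumptions.
import numpy as np

def collision_predicted(rel_pos, rel_vel, safety_radius):
    """Return (collision predicted?, time to closest point of approach)."""
    closing_speed_sq = rel_vel @ rel_vel
    if closing_speed_sq < 1e-9:
        return False, np.inf
    # Time at which the obstacle is closest under constant relative velocity
    t_cpa = -(rel_pos @ rel_vel) / closing_speed_sq
    if t_cpa <= 0.0:
        return False, t_cpa              # obstacle is moving away
    miss = np.linalg.norm(rel_pos + t_cpa * rel_vel)
    return miss < safety_radius, t_cpa

# Example: obstacle 80 m ahead, closing at 15 m/s with a small lateral offset
rel_pos = np.array([80.0, 4.0, 0.0])
rel_vel = np.array([-15.0, -0.2, 0.0])
hit, t_cpa = collision_predicted(rel_pos, rel_vel, safety_radius=10.0)
print(f"collision predicted: {hit}, time to closest approach: {t_cpa:.1f} s")
```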
In this paper, we propose a novel method for mobile robot localization and navigation based on multispectral visual odometry (MVO). The proposed approach consists of combining visible and infrared images to localize the mobile robot under different conditions (day, night, indoor and outdoor). The depth image acquired by the Kinect sensor is very sensitive to IR luminosity, which makes it of limited use for outdoor localization. We therefore propose an efficient solution to this Kinect limitation based on three navigation modes: indoor localization based on RGB/depth images, night localization based on depth/IR images and outdoor localization using multispectral RGB/IR stereovision. For automatic selection of the appropriate navigation mode, we propose a fuzzy logic controller based on image energies. To overcome the limitations of multimodal visual navigation (MMVN), especially during navigation mode switching, a smooth variable structure filter (SVSF) is implemented to fuse the MVO pose with the wheel odometry (WO) pose based on variable structure theory. The proposed approaches are successfully validated experimentally for trajectory tracking using a Pioneer P3-AT mobile robot.
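A minimal sketch of the pose-fusion step, assuming the classic SVSF correction with a full-state pose measurement (H = I); the boundary layer, memory gain, drift and noise values are illustrative, not the authors' tuning.

```python
# Sketch only: SVSF-style correction fusing a wheel-odometry pose prediction
# with an MVO pose measurement, using the classic gain with H = I.
import numpy as np

def svsf_update(x_pred, z, e_prev, gamma, psi):
    """One SVSF correction step for a pose [x, y, heading] with H = I."""
    e_pred = z - x_pred                                   # a priori error
    gain = (np.abs(e_pred) + gamma * np.abs(e_prev)) \
        * np.clip(e_pred / psi, -1.0, 1.0)                # smoothed switching gain
    x_upd = x_pred + gain
    e_upd = z - x_upd                                     # a posteriori error
    return x_upd, e_upd

rng = np.random.default_rng(4)
x_true = np.zeros(3)                  # pose [x, y, heading]
x_est = np.zeros(3)
e_prev = np.zeros(3)
gamma, psi = 0.3, np.array([0.2, 0.2, 0.1])   # assumed memory gain, boundary layer

for _ in range(200):
    u = np.array([0.05, 0.02, 0.01])                      # true pose increment
    x_true = x_true + u
    # Wheel-odometry prediction (slightly biased, drifting) and noisy MVO pose
    x_est = x_est + u + np.array([0.002, -0.001, 0.0005])
    z_mvo = x_true + 0.03 * rng.standard_normal(3)
    x_est, e_prev = svsf_update(x_est, z_mvo, e_prev, gamma, psi)

print("final pose error:", np.round(x_est - x_true, 3))
```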