DESCRIPTION
Multimodal data fusion (MMDF) is the process of merging several data streams to produce information that is easier to comprehend or use. Early laboratory studies gathered accelerometer data to measure tremor activity and to identify movement and posture; since then, inertial sensor-based systems combining accelerometers, gyroscopes, and other sensors have been widely used to study human activity. Combining data from several sensors, known as sensor fusion, lessens the uncertainty a robot faces when executing a task or navigating, and helps build a more realistic world model so the robot can travel and behave more effectively. In Human Activity Recognition (HAR), convolutional neural networks (CNNs) have been used to automatically and consistently identify and categorise human actions from sensor data, with time-series sensor recordings as the typical input. Fusing data from many sensor types, including ultrasonic sensors, LIDAR, cameras, and inertial measurement units (IMUs), enables more effective perception of, and interaction with, the environment by robots.
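The inertial fusion idea described above can be sketched with a complementary filter, which blends drift-free but noisy accelerometer tilt estimates with smooth but drifting gyroscope rates. This is a minimal illustrative example; the function name, parameter values, and sample data are hypothetical, not part of any specific system discussed in this call.

```python
def complementary_filter(accel_angles, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer tilt estimates (noisy but drift-free) with
    gyroscope rates (smooth but biased) into one angle estimate.

    accel_angles: tilt angles (rad) derived from the accelerometer
    gyro_rates:   angular rates (rad/s) from the gyroscope
    dt:           sample period (s)
    alpha:        weight on the integrated-gyro path (0 < alpha < 1)
    """
    angle = accel_angles[0]              # initialise from the accelerometer
    fused = [angle]
    for acc, rate in zip(accel_angles[1:], gyro_rates[1:]):
        gyro_angle = angle + rate * dt   # integrate the gyro rate
        angle = alpha * gyro_angle + (1 - alpha) * acc
        fused.append(angle)
    return fused

# A stationary sensor: true angle 0, noisy accelerometer, biased gyro.
acc = [0.05, -0.04, 0.03, -0.05, 0.04, -0.03]
gyr = [0.01] * 6                         # small constant gyro bias (rad/s)
est = complementary_filter(acc, gyr, dt=0.01)
```

The fused estimate stays bounded even though the gyro alone would drift and the accelerometer alone is noisy, which is the complementary-strengths point made above.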
Drone systems are among the best examples of sensor fusion in robotics. Sensor fusion can improve the robustness and performance of robotic systems in several ways. First, by exploiting the complementary strengths of different sensors and offsetting their deficiencies, it reduces the noise and uncertainty of individual sensor measurements. Data fusion, by contrast, refers to the combined examination of several linked datasets that offer contrasting perspectives on the same phenomenon, or more generally to merging data from several sources into a single database; correlating and combining multiple sources generally supports more accurate conclusions than analysing any single dataset. The goal of human activity recognition (HAR) is to categorise an individual's movements using a range of sensor-captured measurements. Gathering this kind of data is no longer a difficult undertaking: with the proliferation of the Internet of Things, nearly everyone carries a device that tracks their movements.
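The claim that fusion reduces the uncertainty of individual measurements can be made concrete with inverse-variance weighting, the standard way to combine two independent noisy readings of the same quantity. This is an illustrative sketch; the sensor values and variances are made up for the example.

```python
def fuse_measurements(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two independent measurements
    of the same quantity. The fused variance is always smaller than
    either input variance, which is the uncertainty-reduction property."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused_x = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused_x, fused_var

# A precise LIDAR range fused with a coarse ultrasonic range (metres):
x, v = fuse_measurements(2.00, 0.01, 2.30, 0.09)
```

The fused estimate lands close to the more trustworthy sensor (here the LIDAR), and its variance (0.009) is below both inputs.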
Classifying human activities from video frames, known as action recognition, is a crucial task in computer vision; it can be thought of as the video analogue of image classification. Another significant spatial-domain fusion technique is high-pass filtering, in which high-frequency detail is injected into an upsampled version of the multispectral (MS) images. In a typical HAR benchmark, the goal is to assign each recording to one of six predefined activities. Articles are invited that explore state-of-the-art human action recognition systems using multimodal sensor data fusion. Case studies and practitioner perspectives are also welcome.
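The high-pass-filtering fusion step mentioned above can be sketched in a few lines: low-pass the panchromatic (PAN) band, subtract to isolate its high-frequency detail, and add that detail to the upsampled MS band. This is a minimal sketch assuming a simple box low-pass filter; the function name and sample arrays are hypothetical.

```python
import numpy as np

def hpf_fusion(ms_up, pan, kernel_size=3):
    """High-pass-filter fusion: inject the high-frequency detail of the
    PAN band into an upsampled multispectral (MS) band.
    ms_up and pan are 2-D arrays of the same shape."""
    k = kernel_size
    pad = k // 2
    padded = np.pad(pan, pad, mode="edge")
    # Box-filter low-pass of the PAN image.
    low = np.zeros_like(pan, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + pan.shape[0], dx:dx + pan.shape[1]]
    low /= k * k
    high = pan - low                     # high-frequency spatial detail
    return ms_up + high

pan = np.array([[10., 10., 10.],
                [10., 40., 10.],
                [10., 10., 10.]])
ms = np.full((3, 3), 5.0)                # flat upsampled MS band
fused = hpf_fusion(ms, pan)
```

The flat MS band acquires the sharp central feature from the PAN image while keeping its own overall intensity level.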
LIST OF TOPICS
SUGGESTED TIMELINE
GUEST EDITOR DETAILS
Dr. Jawad Khan
Assistant Professor
Gachon University, Seongnam, South Korea
Email: jkhanbk1@gachon.ac.kr, prof.jawadkhan@gmail.com
Google Scholar: https://scholar.google.com/citations?user=BWpBBh0AAAAJ&hl=en
Research Gate: https://www.researchgate.net/profile/Jawad-Khan-21
ORCID: 0000-0001-8263-7213
Dr. Muhammad Hameed Siddiqi
Associate Professor
Jouf University, Sakaka, Aljouf, Saudi Arabia
Email: mhsiddiqi@ju.edu.sa
Google Scholar: https://scholar.google.co.uk/citations?user=PktU0eEAAAAJ&hl=en
Research Gate Link: https://www.researchgate.net/profile/Muhammad-Siddiqi-7
ORCID: 0000-0002-4370-8012
Dr. Tariq Rahim
Lecturer
Kingston University, Kingston, England
Email: t.rahim@kingston.ac.uk
Google Scholar: https://scholar.google.com/citations?user=fr4C9ogAAAAJ&hl=en
ResearchGate: https://www.researchgate.net/profile/Tariq-Rahim-2
ORCID: 0000-0001-7817-9715
Dr. Shah Khalid
Assistant Professor
National University of Sciences & Technology, Islamabad, Pakistan
Email: shah.khalid@seecs.edu.pk
Google Scholar: https://scholar.google.com/citations?user=Sff9RyoAAAAJ&hl=en
ResearchGate: https://www.researchgate.net/profile/Shah-Khalid-13
ORCID: 0000-0001-5735-5863