World Scientific

Call for Papers

Special Issue on State-of-the-Art Human Action Recognition Systems Using Multimodal Sensor Data Fusion

DESCRIPTION

Multimodal data fusion (MMDF) is the process of merging several data streams to produce information that is easier to comprehend or use. Early work gathered accelerometer data in a lab setting to measure tremor activity and to identify movement and posture; since then, inertial sensor-based systems combining accelerometers, gyroscopes, and other sensors have frequently been used to study human activity. Combining data from several sensors, a technique known as sensor fusion, reduces the uncertainty a robot faces while executing a task or navigating, helping it build a more realistic world model and thus travel and behave more effectively. In the context of Human Activity Recognition (HAR), convolutional neural networks (CNNs) have been used to automatically and consistently identify and categorise human actions from sensor data, with time-series sensor recordings typically serving as the input. Combining data from many sensor types, including ultrasonic sensors, LIDAR, cameras, and inertial measurement units (IMUs), enables more effective perception of, and interaction with, the environment.
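
To make the time-series formulation concrete, the sketch below shows a minimal 1D CNN classifier for windowed tri-axial accelerometer data. It is an illustration only; the 128-sample window, three input channels, and six-class output are assumptions rather than requirements of any particular dataset.

```python
import torch
import torch.nn as nn

# Minimal 1D CNN for HAR over windowed tri-axial accelerometer data.
# Window length, channel count, and class count are illustrative assumptions.
class HarCnn(nn.Module):
    def __init__(self, n_channels: int = 3, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one value per filter
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).squeeze(-1))

model = HarCnn()
windows = torch.randn(8, 3, 128)  # batch of 8 sensor windows: (batch, channels, time)
logits = model(windows)           # shape: (8, 6), one score per activity class
```

The convolutions slide along the time axis rather than over image rows and columns, which is what lets the image-classification machinery transfer to time-series sensor data.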

Drone systems are among the best examples of sensor fusion in robotics. Sensor fusion can improve the robustness and performance of robotic systems in several ways; above all, by exploiting the complementary strengths of different sensors and offsetting their weaknesses, it reduces the noise and uncertainty of individual sensor measurements. Data fusion, by contrast, is the combined examination of several linked datasets that offer contrasting perspectives on the same phenomenon: correlating and combining data from several sources generally yields more accurate conclusions than analysing a single dataset. In short, data fusion merges data from several sources into a single database, whereas sensor fusion combines information from sensors into a more accurate and coherent picture of the world. The goal of human activity recognition (HAR) is to categorise an individual's movements using a range of sensor-captured measurements. Gathering this kind of data is no longer difficult: with the proliferation of the Internet of Things, nearly everyone carries a device that tracks their movements.
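
A classic lightweight instance of the inertial sensor fusion described above is the complementary filter, which blends integrated gyroscope rates (smooth but prone to drift) with accelerometer-derived tilt (noisy but drift-free). The sketch below is illustrative; the sampling interval, blending weight, and synthetic signals are all assumptions.

```python
import numpy as np

def complementary_filter(gyro_rate, accel_pitch, dt=0.01, alpha=0.98):
    """Fuse gyroscope angular rate (rad/s) with accelerometer-derived
    pitch (rad). alpha weights the integrated gyro estimate; (1 - alpha)
    pulls the result toward the accelerometer, suppressing gyro drift."""
    pitch = accel_pitch[0]
    fused = np.empty_like(accel_pitch)
    for i, (w, a) in enumerate(zip(gyro_rate, accel_pitch)):
        pitch = alpha * (pitch + w * dt) + (1 - alpha) * a
        fused[i] = pitch
    return fused

# Synthetic signals: a slow tilt, a biased (drifting) gyro, a noisy accelerometer.
t = np.arange(0, 5, 0.01)
true_pitch = 0.2 * np.sin(0.5 * t)
gyro = np.gradient(true_pitch, 0.01) + 0.01             # constant bias causes drift
accel = true_pitch + np.random.normal(0, 0.05, t.size)  # unbiased but noisy
estimate = complementary_filter(gyro, accel)            # smoother than accel, drift-free
```

The single weight alpha makes the trade-off explicit: the gyroscope dominates at short time scales, while the accelerometer slowly corrects accumulated drift.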

Classifying human activities from video frames, a task known as action recognition, is a crucial problem in computer vision; it is the video analogue of image classification. Another significant spatial-domain fusion technique is the high-pass filtering approach, in which high-frequency information is injected into an upsampled version of the multispectral (MS) images (a minimal sketch appears at the end of this description). In a typical benchmark setting, the goal is to assign each recording to one of six predefined activities. Articles are invited that explore State-of-the-Art Human Action Recognition Systems Using Multimodal Sensor Data Fusion. Case studies and practitioner perspectives are also welcome.
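
As a concrete illustration of the high-pass filtering fusion technique mentioned above, the sketch below injects the high-frequency residual of a panchromatic band into upsampled multispectral bands. The array shapes, 4x scale factor, and box-filter kernel size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_fusion(ms, pan, scale=4, kernel=5):
    """ms: (bands, h, w) low-resolution multispectral cube;
    pan: (h * scale, w * scale) co-registered panchromatic band."""
    detail = pan - uniform_filter(pan, size=kernel)  # high-frequency residual of pan
    ms_up = zoom(ms, (1, scale, scale), order=1)     # bilinear upsampling to the pan grid
    return ms_up + detail[None, :, :]                # inject the same detail into every band

ms = np.random.rand(4, 64, 64)   # 4-band MS image (synthetic stand-in)
pan = np.random.rand(256, 256)   # panchromatic band at 4x the MS resolution
fused = hpf_fusion(ms, pan)      # shape: (4, 256, 256)
```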

LIST OF TOPICS

  • Machine learning-based multisensor information fusion for practical human activity detection.
  • Multidomain multimodal fusion for human action recognition using inertial sensors.
  • Identification of human actions from multiple data modalities.
  • A review of depth and inertial sensor fusion for human action identification.
  • Assessing RGB-D and inertial sensor integration for multimodal human action detection.
  • Multimodal feature-level fusion for robust human activity recognition.
  • Fusing multiple sensor inputs for practical human action recognition on medical platforms.
  • Combining vision and inertial sensing to recognize human actions.
  • Convolutional neural networks as tools for human activity recognition.
  • A temporal order modelling method for multimodal sensor data-based human activity recognition.
  • Indoor action recognition through multimodal fusion via the teacher-student network.
  • Encoding inertial sensor data as images for human action recognition.

SUGGESTED TIMELINE

  • Manuscript submissions due (25.05.2025)
  • First round of reviews completed (20.07.2025)
  • Revised manuscripts due (10.08.2025)
  • Second round of reviews completed (20.10.2025)
  • Final manuscripts due (10.11.2025)

GUEST EDITOR DETAILS

Dr. Jawad Khan
Assistant Professor
Gachon University, Seongnam, South Korea
Email: jkhanbk1@gachon.ac.kr, prof.jawadkhan@gmail.com
Google Scholar: https://scholar.google.com/citations?user=BWpBBh0AAAAJ&hl=en
ResearchGate: https://www.researchgate.net/profile/Jawad-Khan-21
ORCID: 0000-0001-8263-7213

Dr. Muhammad Hameed Siddiqi
Associate Professor
Jouf University, Sakaka, Aljouf, Saudi Arabia
Email: mhsiddiqi@ju.edu.sa
Google Scholar: https://scholar.google.co.uk/citations?user=PktU0eEAAAAJ&hl=en
ResearchGate: https://www.researchgate.net/profile/Muhammad-Siddiqi-7
ORCID: 0000-0002-4370-8012

Dr. Tariq Rahim
Lecturer
Kingston University, Kingston, England
Email: t.rahim@kingston.ac.uk
Google Scholar: https://scholar.google.com/citations?user=fr4C9ogAAAAJ&hl=en
ResearchGate: https://www.researchgate.net/profile/Tariq-Rahim-2
ORCID: 0000-0001-7817-9715

Dr. Shah Khalid
Assistant Professor
National University of Sciences & Technology, Islamabad, Pakistan
Email: shah.khalid@seecs.edu.pk
Google Scholar: https://scholar.google.com/citations?user=Sff9RyoAAAAJ&hl=en
ResearchGate: https://www.researchgate.net/profile/Shah-Khalid-13
ORCID: 0000-0001-5735-5863