  • SENSOR FUSION - SONAR AND STEREO VISION, USING OCCUPANCY GRIDS AND SIFT

    The main contribution of this paper is a sensor fusion approach to scene environment mapping as part of an SDF (Sensor Data Fusion) architecture. The approach combines sonar and stereo vision readings. Sonar readings are interpreted using probability density functions over the occupied and empty regions. SIFT (Scale-Invariant Feature Transform) feature descriptors are interpreted using Gaussian probabilistic error models. Occupancy grids are proposed for representing both the sonar readings and the feature descriptor readings. A Bayesian estimation approach is applied to update the sonar and SIFT descriptor uncertainty grids. The sensor fusion yields a significant reduction in the uncertainty of the occupancy grid compared to the individual sensor readings.
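
    A minimal sketch of the kind of Bayesian occupancy-grid update the abstract refers to, written here in a standard log-odds form. The grid size and the inverse sensor model probabilities attributed to the sonar and SIFT observations are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Minimal sketch of a Bayesian occupancy-grid update in log-odds form.
    # The inverse sensor model probabilities below are illustrative
    # placeholders, not values taken from the paper.

    def log_odds(p):
        return np.log(p / (1.0 - p))

    class OccupancyGrid:
        def __init__(self, shape, p_prior=0.5):
            self.L = np.full(shape, log_odds(p_prior))  # log-odds per cell
            self.L0 = log_odds(p_prior)                 # prior log-odds

        def update(self, cells, p_occupied):
            """Bayesian update of the listed cells with an inverse sensor
            model probability p_occupied (e.g. from sonar or a SIFT-based
            Gaussian error model)."""
            rows, cols = zip(*cells)
            self.L[rows, cols] += log_odds(p_occupied) - self.L0

        def probabilities(self):
            return 1.0 - 1.0 / (1.0 + np.exp(self.L))

    # Example: fuse a sonar return and a SIFT-based observation on one grid.
    grid = OccupancyGrid((100, 100))
    grid.update([(50, 50), (50, 51)], p_occupied=0.7)  # sonar: likely occupied
    grid.update([(50, 50)], p_occupied=0.9)            # SIFT landmark: strongly occupied
    grid.update([(40, 50)], p_occupied=0.2)            # free-space evidence
    print(grid.probabilities()[50, 50])
    ```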

  • Toward Multimodal Human–Computer Interface

    Recent advances in various signal-processing technologies, coupled with an explosion in the available computing power, have given rise to a number of novel human–computer interaction (HCI) modalities—speech, vision-based gesture recognition, eye tracking, electroencephalograph, etc. Successful embodiment of these modalities into an interface has the potential of easing the HCI bottleneck that has become noticeable with the advances in computing and communication. It has also become increasingly evident that the difficulties encountered in the analysis and interpretation of individual sensing modalities may be overcome by integrating them into a multimodal human–computer interface.

    In this paper, we examine several promising directions toward achieving multimodal HCI. We consider some of the emerging novel input modalities for HCI and the fundamental issues in integrating them at various levels—from early "signal" level to intermediate "feature" level to late "decision" level. We discuss the different computational approaches that may be applied at the different levels of modality integration. We also briefly review several demonstrated multimodal HCI systems and applications. Despite all the recent developments, it is clear that further research is needed for interpreting and fusing multiple sensing modalities in the context of HCI. This research can benefit from many disparate fields of study that increase our understanding of the different human communication modalities and their potential role in HCI.
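
    As a rough illustration of the integration levels mentioned above, the sketch below contrasts feature-level ("early") fusion with decision-level ("late") fusion for two hypothetical modalities, speech and gesture. The feature dimensions, class posteriors, and modality weights are invented for the example and are not taken from the paper.

    ```python
    import numpy as np

    # Illustrative sketch (not from the paper) contrasting feature-level and
    # decision-level fusion of two hypothetical modalities: speech and gesture.

    rng = np.random.default_rng(0)
    speech_feat = rng.normal(size=(1, 13))   # e.g. an MFCC-like feature vector
    gesture_feat = rng.normal(size=(1, 8))   # e.g. a hand-trajectory descriptor

    # Feature-level ("early") fusion: concatenate features, classify jointly.
    fused_features = np.concatenate([speech_feat, gesture_feat], axis=1)

    # Decision-level ("late") fusion: each modality yields class posteriors,
    # which are combined, here by a simple weighted product rule.
    def normalize(p):
        return p / p.sum()

    speech_posterior = np.array([0.6, 0.3, 0.1])   # hypothetical classifier output
    gesture_posterior = np.array([0.5, 0.4, 0.1])
    w_speech, w_gesture = 0.6, 0.4                 # assumed modality weights

    late_fused = normalize(speech_posterior**w_speech * gesture_posterior**w_gesture)
    print(fused_features.shape, late_fused)
    ```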

  • VALIDATION OF A KNEE ANGLE MEASUREMENT SYSTEM BASED ON IMUS

    Inertial Measurement Unit (IMU)-based systems are a useful alternative tool for monitoring human gait, mainly because they are cheaper and smaller than other gait analysis methods and can be used without space restrictions. In the scientific community, well-known studies have tested the accuracy and efficiency of this method against ground-truth systems. Gait parameters such as stride length, distance, velocity, cadence, gait phase duration and detection, or joint angles are tested and validated in these studies in order to study and improve this technology. In this article, knee joint angles were calculated from IMU data and compared with the DARwIn OP knee joint angles. IMUs were attached to the left leg of the robot and left knee flexion-extension (F-E) was evaluated. The RMSE values were less than 6° when DARwIn OP was walking, and less than 5° when the robot kept the left leg stretched at an angle of -30°.
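
    One common way to obtain a knee flexion-extension angle from two IMUs is to take the relative orientation of the shank sensor with respect to the thigh sensor and extract its rotation angle, then score the estimate with RMSE against a reference signal. The quaternion-based sketch below illustrates that approach with made-up sample values; it is not the paper's exact processing pipeline.

    ```python
    import numpy as np

    # Hedged sketch: knee angle from two IMU orientations (thigh and shank),
    # scored with RMSE against a reference. Data and math are illustrative.

    def quat_conj(q):
        w, x, y, z = q
        return np.array([w, -x, -y, -z])

    def quat_mul(a, b):
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def knee_angle_deg(q_thigh, q_shank):
        """Rotation angle of the shank orientation relative to the thigh."""
        q_rel = quat_mul(quat_conj(q_thigh), q_shank)
        angle = 2.0 * np.arccos(np.clip(abs(q_rel[0]), -1.0, 1.0))
        return np.degrees(angle)

    def rmse(estimate, reference):
        return np.sqrt(np.mean((np.asarray(estimate) - np.asarray(reference))**2))

    # Example with made-up unit quaternions and a made-up reference angle.
    q_thigh = np.array([1.0, 0.0, 0.0, 0.0])   # identity orientation
    q_shank = np.array([np.cos(np.radians(15)), np.sin(np.radians(15)), 0.0, 0.0])
    print(knee_angle_deg(q_thigh, q_shank))                 # ~30 degrees
    print(rmse([29.0, 30.5, 31.0], [30.0, 30.0, 30.0]))     # RMSE vs reference
    ```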

  • ESTIMATION OF THE TRUNK ATTITUDE OF A HUMANOID BY DATA FUSION OF INERTIAL SENSORS AND JOINT ENCODERS

    The major problem associated with the walking of humanoid robots is maintaining dynamic equilibrium. To achieve this, one must detect gait instability during walking, apply proper fall avoidance schemes, and bring the robot back into stable equilibrium. A good approach to detecting gait instability is to study the evolution of the attitude of the humanoid's trunk. Most attitude estimation techniques use the information from inertial sensors positioned at the trunk. However, inertial sensors such as accelerometers and gyroscopes are highly prone to noise, which leads to poor attitude estimates that can cause false fall detections and falsely trigger fall avoidance schemes. In this paper, we present a novel way to access the information from the joint encoders present in the legs and fuse it with the information from inertial sensors to provide a greatly improved attitude estimate during humanoid walking. Moreover, if the joint encoders' attitude measure is compared separately with the IMU's attitude estimate, the two are observed to differ when there is a change of contact between the stance leg and the ground. This may be used to detect a loss of contact and can be verified by the information from force sensors present at the feet of the robot. The propositions are validated by experiments performed on the humanoid robot NAO.
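
    The abstract does not spell out the fusion scheme, so the sketch below uses a simple complementary filter as an illustrative stand-in: a trunk pitch implied by the stance-leg joint encoders (assuming a flat stance foot and a planar joint chain) corrects a gyro-integrated pitch. The gain, noise levels, and kinematic simplification are assumptions made for the example, not the method of the paper.

    ```python
    import numpy as np

    # Hedged sketch: complementary filter fusing a gyro-integrated trunk pitch
    # with a pitch estimate derived from leg joint encoders (planar forward
    # kinematics, flat stance foot assumed). Illustrative only.

    def encoder_pitch(hip_pitch, knee_pitch, ankle_pitch):
        """Trunk pitch implied by the stance-leg joints when the foot is flat:
        in a planar approximation the joint pitches along the chain sum."""
        return ankle_pitch + knee_pitch + hip_pitch

    def fuse_trunk_pitch(gyro_rates, enc_pitches, dt=0.01, alpha=0.98):
        """Complementary filter: trust the gyro at high frequency and the
        encoder-based estimate at low frequency."""
        pitch = enc_pitches[0]
        fused = []
        for rate, enc in zip(gyro_rates, enc_pitches):
            gyro_pred = pitch + rate * dt          # integrate angular rate
            pitch = alpha * gyro_pred + (1 - alpha) * enc
            fused.append(pitch)
        return np.array(fused)

    # Example with made-up data: a noisy gyro and a cleaner encoder signal.
    t = np.arange(0, 1, 0.01)
    true_pitch = 0.1 * np.sin(2 * np.pi * t)
    gyro_rates = np.gradient(true_pitch, t) + np.random.normal(0, 0.05, t.size)
    enc_pitches = true_pitch + np.random.normal(0, 0.01, t.size)
    print(fuse_trunk_pitch(gyro_rates, enc_pitches)[:5])
    ```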