Gesture Recognition Based on Kinect v2 and Leap Motion Data Fusion

    https://doi.org/10.1142/S021800141955005X
    Cited by: 5 (Source: Crossref)

    This study proposed a method for fusing gesture data from multiple motion-sensing devices (i.e. one Kinect v2 and two Leap Motion controllers) in Unity; other depth cameras could replace the Kinect. The general steps of the data fusion were as follows. (1) A method was proposed to recognize fingertips from the depth images of the Kinect v2. (2) The coordinates observed by the three devices were aligned in space. First, preliminary coordinate-conversion parameters were obtained through joint calibration of the three devices. Second, the observations of the other devices were fitted to the observed values of the reference (standard) Leap Motion by the least squares method in two rounds (i.e. one Kinect and one Leap Motion in the first round, then the two Leap Motions in the second round). (3) The data of the three devices were aligned in time in Unity in accordance with the data plan. On this basis, a human hand interacted with a virtual object in Unity. Experimental results demonstrated that the proposed method had a small recognition error for hand joints and realized natural interaction between the human hand and virtual objects.
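    The spatial-alignment step fits the observations of one device to those of a reference Leap Motion by least squares. The abstract does not give the exact formulation, so the sketch below assumes corresponding 3D hand-joint observations collected during joint calibration and uses a standard SVD-based least-squares rigid-transform fit (rotation plus translation) as one plausible realization; the function name and the synthetic check are illustrative only.

    ```python
    # Least-squares alignment of corresponding 3D points from two devices
    # (e.g. Kinect v2 hand joints vs. the reference Leap Motion).
    # This is a sketch of one common least-squares solution (Kabsch/SVD),
    # not necessarily the exact method used in the paper.
    import numpy as np

    def fit_rigid_transform(src, dst):
        """Return R, t minimizing sum_i || R @ src_i + t - dst_i ||^2.

        src, dst: (N, 3) arrays of corresponding joint positions.
        """
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        H = src_c.T @ dst_c                      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    if __name__ == "__main__":
        # Synthetic check: recover a known rotation/translation from noisy points.
        rng = np.random.default_rng(0)
        src = rng.normal(size=(50, 3))
        a = np.deg2rad(20.0)
        R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                           [np.sin(a),  np.cos(a), 0.0],
                           [0.0,        0.0,       1.0]])
        t_true = np.array([0.10, -0.05, 0.30])
        dst = src @ R_true.T + t_true + rng.normal(scale=1e-3, size=src.shape)
        R, t = fit_rigid_transform(src, dst)
        print(np.allclose(R, R_true, atol=1e-2), np.round(t, 3))
    ```

    Under this reading, the two least-squares rounds mentioned in the abstract would each produce one such transform: Kinect-to-Leap-Motion in the first round, then Leap-Motion-to-Leap-Motion in the second.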