Gesture Recognition Based on Kinect v2 and Leap Motion Data Fusion
Abstract
This study proposed a method for integrating gesture data from multiple motion-sensing devices (one Kinect v2 and two Leap Motion controllers) in Unity; other depth cameras could replace the Kinect. The general steps for integrating the gesture data from the motion-sensing devices were as follows. (1) A method was proposed to recognize fingertips from the depth images of the Kinect v2. (2) The coordinates observed by the three devices were aligned in space in three steps: first, preliminary coordinate conversion parameters were obtained through joint calibration of the three devices; then, the observations of the other devices were fitted to those of the standard (reference) Leap Motion by the least squares method in two rounds (the Kinect and one Leap Motion in the first round, and the two Leap Motions in the second round). (3) The data of the three devices were aligned in time in Unity while applying the data plan. On this basis, a human hand interacted with a virtual object in Unity. Experimental results demonstrated that the proposed method had a small recognition error for the hand joints and achieved natural interaction between the human hand and virtual objects.
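The abstract states that the observations of the other devices are fitted to those of the reference Leap Motion by least squares, but does not give the exact parameterization. The following is a minimal sketch of one common least-squares formulation for this kind of spatial alignment, a rigid transform estimated with the SVD-based Kabsch solution; the function name `fit_rigid_transform` and the synthetic calibration data are illustrative assumptions, not the paper's implementation.

```python
# Sketch only: least-squares rigid alignment of one device's joint
# observations to a reference device's frame (Kabsch solution via SVD).
# This is an assumed formulation; the paper's exact method may differ.
import numpy as np

def fit_rigid_transform(src, dst):
    """Estimate R, t minimizing sum_i ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) arrays of corresponding joint positions, e.g. fingertips
    observed simultaneously by the Kinect v2 (src) and the reference
    Leap Motion (dst) during joint calibration.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps the result a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Illustrative usage: refine the preliminary calibration with paired
# observations, once for Kinect -> reference Leap Motion, then again for
# the second Leap Motion (synthetic data stands in for real captures).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    leap_pts = rng.uniform(-0.2, 0.2, size=(50, 3))  # reference frame, meters
    theta = np.deg2rad(30.0)
    true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    true_t = np.array([0.05, 0.10, 0.40])
    kinect_pts = (leap_pts - true_t) @ true_R  # same points in the "Kinect" frame
    R, t = fit_rigid_transform(kinect_pts, leap_pts)
    aligned = kinect_pts @ R.T + t
    err = np.linalg.norm(aligned - leap_pts, axis=1).mean()
    print("Mean alignment error (m):", err)
```

In this sketch the reference Leap Motion's frame serves as the common coordinate system, matching the two-round procedure described above: the same fitting step would be run once for the Kinect and once for the second Leap Motion.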