A reliable neural-machine interface is essential for humans to interact intuitively with advanced robotic hands in unconstrained environments. Existing neural decoding approaches rely on either discrete hand-gesture pattern recognition or continuous force decoding of one finger at a time. We developed a neural decoding technique that enables continuous and concurrent prediction of the forces of different fingers based on spinal motoneuron firing information. High-density skin-surface electromyogram (HD-EMG) signals were recorded from the finger extensor muscles while human participants produced isometric flexion forces in a dexterous manner (i.e., varying forces with either a single finger or multiple fingers concurrently). Motoneuron firing information was extracted from the EMG signals using a blind source separation technique, and each identified motoneuron was further classified as being associated with a particular finger. The forces of individual fingers were then predicted concurrently from the firing frequency of each finger's corresponding motoneuron pool. Compared with conventional approaches, our technique yielded better prediction performance: a higher correlation (0.71±0.11 versus 0.61±0.09), a lower prediction error (5.88±1.34% MVC versus 7.56±1.60% MVC), and a higher accuracy in finger-state (rest/active) prediction (88.10±4.65% versus 80.21±4.32%). Our decoding method demonstrated that motoneurons can be classified by finger, which substantially alleviated the cross-talk issue of EMG recordings from neighboring hand muscles and allowed finger forces to be decoded individually and concurrently. The outcomes offer a robust neural-machine interface that could allow users to control robotic hands intuitively and dexterously.
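The core decoding step described above — summing each finger's classified motoneuron spike trains into a pooled firing-rate envelope and mapping that rate to a continuous force estimate — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Hann smoothing window, 2048 Hz sampling rate, and per-finger linear gains are hypothetical placeholders.

```python
import numpy as np

def pooled_firing_rate(spike_trains, fs=2048, win_s=0.2):
    """Smooth the summed binary spike trains of a motoneuron pool into a
    firing-rate envelope (spikes/s) with a normalized Hann window.
    Window length and fs are illustrative choices, not from the paper."""
    pooled = np.sum(spike_trains, axis=0).astype(float)
    win = np.hanning(int(win_s * fs))
    win /= win.sum()
    return np.convolve(pooled, win, mode="same") * fs

def predict_finger_forces(spike_trains, finger_of_mu, gains, fs=2048):
    """Concurrently predict per-finger forces: each finger's force is taken
    proportional to the firing rate of its own motoneuron pool (a simple
    linear stand-in for the decoding model)."""
    forces = {}
    for finger in sorted(set(finger_of_mu)):
        idx = [i for i, f in enumerate(finger_of_mu) if f == finger]
        forces[finger] = gains[finger] * pooled_firing_rate(spike_trains[idx], fs)
    return forces
```

Because each finger uses only the motoneurons assigned to it, concurrent multi-finger force traces fall out naturally from a single decomposition pass.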
Cross-user variability is a well-known challenge that causes severe performance degradation and undermines the robustness of practical myoelectric control systems. To address this issue, a novel method for myoelectric recognition of finger movement patterns is proposed that combines a neural decoding approach with unsupervised domain adaptation (UDA) learning. In our method, the neural decoding approach extracts microscopic features characterizing individual motor unit (MU) activities obtained from a two-stage online surface electromyogram (SEMG) decomposition. A dedicated deep learning model is initially trained using labeled data from a set of existing users and updates adaptively when recognizing the movement patterns of a new user. The final movement pattern is determined by a fuzzy weighted decision strategy. SEMG signals were collected from the finger extensor muscles of 15 subjects performing seven dexterous finger-movement patterns. The proposed method achieved a movement pattern recognition accuracy of (93.94±1.54)% over the seven movements under cross-user testing scenarios, much higher than that of conventional methods using global SEMG features. Our study presents a robust myoelectric pattern recognition approach at a fine-grained MU level, with broad applications in neural interfaces and prosthesis control.
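Two ingredients of the pipeline above can be illustrated with generic stand-ins: distribution alignment between an existing user's MU-level features and a new user's unlabeled features (here a CORAL-style covariance alignment, a common UDA technique that is not necessarily the paper's exact adaptation scheme), and a fuzzy weighted vote that fuses soft per-MU class memberships into one movement label. Function names and the confidence-based weighting are hypothetical.

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-6):
    """Whiten source-user features and re-color them with the target
    user's covariance (CORAL-style UDA; an illustrative substitute for
    the paper's deep adaptive model)."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def sqrtm(C):          # symmetric matrix square root
        w, V = np.linalg.eigh(C)
        return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

    def invsqrtm(C):       # symmetric inverse square root
        w, V = np.linalg.eigh(C)
        return (V / np.sqrt(np.clip(w, eps, None))) @ V.T

    return (Xs - Xs.mean(0)) @ invsqrtm(Cs) @ sqrtm(Ct) + Xt.mean(0)

def fuzzy_weighted_decision(class_probs, weights=None):
    """Fuse soft (fuzzy) per-MU class memberships into one label via a
    weighted average; weighting each MU by its vote confidence is a
    hypothetical choice, not the paper's exact strategy."""
    probs = np.asarray(class_probs, dtype=float)   # (n_mus, n_classes)
    if weights is None:
        weights = probs.max(axis=1)
    weights = np.asarray(weights, dtype=float)
    fused = (weights[:, None] * probs).sum(axis=0) / weights.sum()
    return int(np.argmax(fused)), fused
```

After alignment, a classifier trained on existing users' labeled features can be applied to the new user's features without any new-user labels, matching the unsupervised setting.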
In this study, changes in the electromyographic characteristics of individual motor units (MUs) associated with different muscle contraction forces are investigated using multi-channel surface electromyography (SEMG). The gradient convolution kernel compensation (GCKC) algorithm is employed to separate individual MUs from the surface interference EMG signal and to provide their discharge instants, which are then used in spike-triggered averaging (STA) to obtain the complete MU action potential waveforms. The method was tested on experimental SEMG signals acquired during constant-force contractions of the biceps brachii muscle in five subjects. Electromyographic characteristics including recruitment number, waveform amplitude, discharge pattern, and innervation zone (IZ) were studied. Results show that force-related changes in the action potential of a single MU are consistent with those observed across all MUs, and that the amplitude of MU action potentials (MUAPs) provides a useful estimate of muscle contraction force.
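Spike-triggered averaging itself is straightforward: epochs of the interference EMG centered on a unit's discharge instants are averaged, so uncorrelated activity from other units cancels and that unit's action potential waveform emerges. A minimal single-channel sketch (the epoch half-length is an arbitrary illustrative choice; the decomposition step that supplies the discharge instants is assumed done):

```python
import numpy as np

def spike_triggered_average(emg, spike_idx, half_len=40):
    """Estimate a motor unit action potential (MUAP) by averaging EMG
    epochs centered on the MU's discharge instants. Discharges too close
    to the record edges are skipped."""
    n = len(emg)
    epochs = [emg[i - half_len: i + half_len + 1]
              for i in spike_idx
              if i >= half_len and i + half_len + 1 <= n]
    return np.mean(np.stack(epochs), axis=0)
```

With N discharges, the background contribution shrinks roughly as 1/sqrt(N), which is why the averaged waveform amplitude becomes a usable correlate of contraction force.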
Assistive technology allows people with motor impairments to overcome their limitations. Several myoelectric interfaces have been developed; however, no reported study has employed information at the motor unit (MU) level for control purposes. We therefore developed a facial myoelectric interface operating at the MU level for controlling a computer screen cursor. Data were collected from 11 able-bodied subjects and one subject with tetraplegia. Unlike traditional approaches, learning showed no significant difference (p < 0.05) across task difficulty levels, proceeding evenly and faster. Information at the MU level opens new possibilities for the development of fine-control myoelectric interfaces.
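One plausible way such an interface can map MU activity to cursor motion is to convert each controlling unit's discharge times into a smoothed instantaneous firing rate and drive one cursor axis per unit, with a dead zone to suppress drift at rest. The abstract does not specify the mapping, so the kernel, gain, and dead-zone values below are entirely hypothetical.

```python
import numpy as np

def mu_rate(spike_times, t, tau=0.3):
    """Instantaneous firing rate (spikes/s) at time t from a single MU's
    discharge times, via an exponentially decaying kernel (assumed)."""
    past = spike_times[spike_times <= t]
    return float(np.sum(np.exp(-(t - past) / tau)) / tau)

def cursor_velocity(rate_x, rate_y, gain=5.0, dead_zone=2.0):
    """Map two MUs' firing rates to 2-D cursor velocity; rates below the
    dead zone produce no motion (hypothetical control law)."""
    vx = gain * max(rate_x - dead_zone, 0.0)
    vy = gain * max(rate_y - dead_zone, 0.0)
    return vx, vy
```

Driving each axis from a distinct unit is what makes MU-level control finer-grained than global-EMG amplitude control, where neighboring-muscle cross-talk blurs the channels.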