Paper Title


FS-HGR: Few-shot Learning for Hand Gesture Recognition via ElectroMyography

Paper Authors

Rahimian, Elahe, Zabihi, Soheil, Asif, Amir, Farina, Dario, Atashzar, Seyed Farokh, Mohammadi, Arash

Paper Abstract


This work is motivated by recent advances in Deep Neural Networks (DNNs) and their widespread applications in human-machine interfaces. DNNs have recently been used to detect the intended hand gesture by processing surface electromyogram (sEMG) signals. The ultimate goal of these approaches is to realize high-performance controllers for prosthetics. However, although DNNs achieve higher accuracy than conventional methods when large amounts of training data are available, their performance degrades substantially when data are limited. Collecting large training datasets may be feasible in research laboratories, but it is not a practical approach for real-life applications. There is therefore an unmet need for a modern gesture detection technique that relies on minimal training data while providing high accuracy. Here we propose a novel "Few-Shot Learning" framework based on a meta-learning formulation, referred to as FS-HGR, to address this need. Few-shot learning is a variant of domain adaptation whose goal is to infer the required output from just one or a few training examples; more specifically, the proposed FS-HGR generalizes quickly after seeing very few examples from each class. The proposed approach achieved 85.94% classification accuracy on new repetitions, 81.29% accuracy on new subjects, and 73.36% accuracy on new gestures, each under the few-shot (5-way 5-shot) setting.
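The abstract reports results under the "5-way 5-shot" protocol, meaning each evaluation episode presents the model with 5 gesture classes and only 5 labeled examples per class. The sketch below illustrates how such an episode is typically sampled in meta-learning; it is not FS-HGR's actual code, and the function name, query-set size, and toy data are assumptions for illustration only.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=5, n_query=5):
    """Sample one N-way K-shot episode from a dict mapping
    class label -> list of examples (e.g. sEMG windows).
    Returns (support, query) lists of (example, label) pairs."""
    # Pick N classes, then K support + Q query examples per class.
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        examples = random.sample(dataset[label], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# Toy stand-in dataset: 10 gesture classes, 20 dummy "repetitions" each.
toy = {g: [f"g{g}_rep{r}" for r in range(20)] for g in range(10)}
support, query = sample_episode(toy)
print(len(support), len(query))  # 25 25
```

During meta-training, the learner adapts on the support set and is scored on the query set of each episode; the reported accuracies correspond to query-set classification on episodes drawn from unseen repetitions, subjects, or gestures.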
