Paper Title
Multi-modal data fusion of Voice and EMG data for Robotic Control
Paper Authors
Paper Abstract
Wearable electronic equipment is constantly evolving and is deepening the integration of humans with technology. Available in various forms, these flexible and bendable devices can sense and measure physiological and muscular changes in the human body, and those signals can be used for machine control. The MYO gesture band, one such device, captures electromyography (EMG) data from myoelectric signals and translates a set of predefined gestures into input signals. Using this device in a multi-modal environment not only broadens the range of tasks that can be accomplished with such a device, but also helps improve the accuracy of the tasks performed. This paper addresses the fusion of input modalities, namely speech and myoelectric signals captured through a microphone and the MYO band respectively, to control a robotic arm. Experimental results and the corresponding accuracies obtained in the performance analysis are also presented.
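As a rough illustration of the kind of multi-modal fusion the abstract describes, the sketch below combines a recognized speech command and a MYO gesture label into a single robotic-arm action. This is a minimal sketch, not the authors' implementation: the label sets, the label-to-action mappings, and the confidence-based arbitration policy are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not from the paper): decision-level fusion of a
# speech command and an EMG gesture label into one robotic-arm action.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ArmCommand:
    action: str
    confidence: float


def fuse(speech: Optional[str], speech_conf: float,
         gesture: Optional[str], gesture_conf: float) -> ArmCommand:
    """Fuse the two modalities: prefer agreement, otherwise fall back to the
    more confident modality (assumed policy, for illustration only)."""
    # Assumed mappings from each modality's label to an arm action.
    speech_map = {"open": "open_gripper", "close": "close_gripper",
                  "left": "rotate_left", "right": "rotate_right", "stop": "halt"}
    gesture_map = {"fingers_spread": "open_gripper", "fist": "close_gripper",
                   "wave_in": "rotate_left", "wave_out": "rotate_right"}

    s_action = speech_map.get(speech) if speech else None
    g_action = gesture_map.get(gesture) if gesture else None

    if s_action and g_action and s_action == g_action:
        # Both modalities agree: issue the action with boosted confidence.
        return ArmCommand(s_action, min(1.0, speech_conf + gesture_conf))
    if s_action and (not g_action or speech_conf >= gesture_conf):
        return ArmCommand(s_action, speech_conf)
    if g_action:
        return ArmCommand(g_action, gesture_conf)
    return ArmCommand("halt", 1.0)  # no usable input: stop the arm


if __name__ == "__main__":
    print(fuse("open", 0.8, "fingers_spread", 0.7))  # agreement -> open_gripper
    print(fuse("left", 0.6, "fist", 0.9))            # conflict -> more confident modality wins
```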