Paper Title
Predicting Video features from EEG and Vice versa
Paper Authors
Paper Abstract
In this paper, we explore predicting facial or lip video features from electroencephalography (EEG) features, and predicting EEG features from recorded facial or lip video frames, using deep learning models. Subjects were asked to read English sentences out loud as they appeared on a computer screen, while their EEG signals and facial video frames were recorded simultaneously. Our model was able to generate very broad characteristics of the facial or lip video frame from input EEG features. Our results demonstrate a first step towards synthesizing high-quality facial or lip video from recorded EEG features. We demonstrate results on a dataset consisting of seven subjects.
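The core setup described above, learning a mapping between time-aligned EEG feature vectors and video-frame feature vectors, can be sketched minimally as follows. This is not the paper's architecture (the authors use deep networks); it is a toy least-squares regression on synthetic data, with hypothetical feature dimensions chosen for illustration, just to show the shape of the prediction problem.

```python
import numpy as np

# Hypothetical dimensions (not from the paper): each EEG window is
# summarized as a 32-dim feature vector, and each video frame as a
# 16-dim feature vector (e.g. lip-region landmarks or pixels).
rng = np.random.default_rng(0)
n_frames, eeg_dim, vid_dim = 200, 32, 16

# Synthetic, time-aligned (EEG feature, video feature) pairs.
eeg = rng.standard_normal((n_frames, eeg_dim))
true_map = rng.standard_normal((eeg_dim, vid_dim))
video = eeg @ true_map + 0.1 * rng.standard_normal((n_frames, vid_dim))

# Simplest possible "model": a linear least-squares regression from
# EEG features to video features; the reverse direction ("vice versa")
# is the same fit with the roles of eeg and video swapped.
W, *_ = np.linalg.lstsq(eeg, video, rcond=None)
pred = eeg @ W

# Reconstruction error of the predicted video features.
mse = float(np.mean((pred - video) ** 2))
print(f"train MSE: {mse:.4f}")
```

In the paper's setting the linear map would be replaced by a deep network, and the features would come from real recordings rather than a random generator, but the input/output contract is the same: one EEG feature vector in, one video-frame feature vector out.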