Journal of Guangxi Normal University (Natural Science Edition) ›› 2025, Vol. 43 ›› Issue (3): 43-56. DOI: 10.16088/j.issn.1001-6600.2024092401

• Intelligent Information Processing •

Complex-valued Covariance-based Convolutional Neural Network for Decoding Motor Imagery-based EEG Signals

HUANG Renhui, ZHANG Ruifeng, WEN Xiaohao, BI Jinjie, HUANG Shoulin*, LI Tinghui

  1. School of Electronics and Information Engineering/School of Integrated Circuits, Guangxi Normal University, Guilin, Guangxi 541004, China
  • Received: 2024-09-24  Revised: 2024-10-31  Online: 2025-05-05  Published: 2025-05-14
  • Corresponding author: HUANG Shoulin (1982-), male, from Binyang, Guangxi; associate professor, Ph.D., Guangxi Normal University. E-mail: hsl5167@gxnu.edu.cn
  • Supported by: National Natural Science Foundation of China (62466006); Guangxi Science and Technology Program Special Research Project for Young Innovative Talents (Guike AD23026245)

Abstract: Deeply mining and exploiting the feature information of electroencephalogram (EEG) signals to improve the classification performance of motor imagery (MI) has long been a research focus in brain-computer interface (BCI) studies. Because the EEG feature space is high-dimensional and closely tied to both the amplitude and the phase of the signals, effectively representing and simultaneously exploiting amplitude and phase information remains a difficult problem. To address it, this study proposes a three-dimensional complex-valued convolutional neural network based on complex-valued covariance features. First, complex-valued covariance matrices are constructed for the EEG at different frequencies; the complex-valued representation couples amplitude and phase information, while the covariance matrices preserve the multivariate information required for classification, such as amplitude, phase, spatial location, and frequency. Second, a fully complex-valued convolutional neural network is designed for these multi-band complex covariance features, achieving high-performance classification of motor imagery tasks. Experimental results on two publicly available datasets show that the proposed method achieves mean accuracies at least 2.49 and 1.85 percentage points higher than those of state-of-the-art methods.

Key words: electroencephalogram, brain-computer interface, fusion of amplitude and phase information, complex covariance features, complex-valued convolutional neural network, information interaction
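
The feature-construction stage summarized above can be made concrete with a short sketch. The Python code below is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the sampling rate, the band edges, and the Butterworth filter order are all hypothetical choices. It band-pass filters a single trial, takes the Hilbert analytic signal so that every sample is a complex number carrying amplitude and phase jointly, and stacks one Hermitian complex covariance matrix per band into a three-dimensional array (bands × channels × channels).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Hypothetical analysis settings (not taken from the paper).
FS = 250.0                                                            # sampling rate in Hz
BANDS = [(8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32)]   # Hz

def complex_covariance_features(trial, fs=FS, bands=BANDS):
    """Stack one complex covariance matrix per frequency band.

    trial : (n_channels, n_samples) real-valued EEG segment.
    Returns a complex array of shape (n_bands, n_channels, n_channels).
    """
    n_channels, n_samples = trial.shape
    features = np.empty((len(bands), n_channels, n_channels), dtype=np.complex128)
    for k, (lo, hi) in enumerate(bands):
        # Zero-phase band-pass filtering keeps the phase structure intact.
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, trial, axis=1)
        # Analytic signal: each sample becomes a complex number whose modulus is the
        # instantaneous amplitude and whose argument is the instantaneous phase.
        analytic = hilbert(filtered, axis=1)
        analytic = analytic - analytic.mean(axis=1, keepdims=True)
        # Hermitian spatial covariance C = Z Z^H / T across channels.
        features[k] = analytic @ analytic.conj().T / n_samples
    return features

# Example on synthetic data: 22 channels, 4 s at 250 Hz.
rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))
cov = complex_covariance_features(trial)
print(cov.shape, cov.dtype)                  # (6, 22, 22) complex128
print(np.allclose(cov[0], cov[0].conj().T))  # True: Hermitian by construction
```

The off-diagonal entries of each matrix are complex: their magnitudes reflect how strongly two channels co-vary in that band, and their arguments reflect the relative phase between the channels, which is why such features can carry amplitude, phase, spatial, and frequency information at once.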

CLC number: TN911.7
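
For the classification stage, the abstract specifies a fully complex-valued convolutional neural network over the stacked covariance features, but this page does not describe its architecture. The sketch below shows only the basic building block such a network needs, in the spirit of the deep complex networks of reference [36]: a complex 3-D convolution realized with two real-valued Conv3d modules combined as (W_r + jW_i)(x_r + jx_i). The module and class names, layer sizes, split-ReLU activation, and magnitude-based readout are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ComplexConv3d(nn.Module):
    """Complex 3-D convolution built from two real convolutions.

    For complex weights W = W_r + jW_i and complex input z = x + jy:
        W * z = (W_r*x - W_i*y) + j(W_r*y + W_i*x)
    """
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv3d(in_ch, out_ch, kernel_size, **kw)
        self.conv_i = nn.Conv3d(in_ch, out_ch, kernel_size, **kw)

    def forward(self, z):                      # z: complex tensor (N, C, D, H, W)
        x, y = z.real, z.imag
        real = self.conv_r(x) - self.conv_i(y)
        imag = self.conv_r(y) + self.conv_i(x)
        return torch.complex(real, imag)

class TinyComplexNet(nn.Module):
    """Toy classifier over stacked complex covariance features (bands x channels x channels)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv1 = ComplexConv3d(1, 8, kernel_size=3, padding=1)
        self.conv2 = ComplexConv3d(8, 16, kernel_size=3, padding=1)
        self.head = nn.LazyLinear(n_classes)   # real-valued readout

    def forward(self, z):
        z = self.conv1(z)
        # Split (CReLU-style) activation applied to real and imaginary parts separately.
        z = torch.complex(torch.relu(z.real), torch.relu(z.imag))
        z = self.conv2(z)
        feat = torch.abs(z).flatten(1)         # magnitudes map complex features back to the real domain
        return self.head(feat)

# Example: batch of 2 trials, 6 bands, 22 x 22 complex covariance matrices.
z = torch.randn(2, 1, 6, 22, 22, dtype=torch.cfloat)
logits = TinyComplexNet()(z)
print(logits.shape)                            # torch.Size([2, 4])
```

A magnitude-based real-valued head is one common way to obtain class logits from complex features; the paper's actual readout and training objective may differ.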

[1] LAZAROU I, NIKOLOPOULOS S, PETRANTONAKIS P C, et al. EEG-based brain-computer interfaces for communication and rehabilitation of people with motor impairment: a novel approach of the 21st Century[J]. Frontiers in Human Neuroscience, 2018, 12: 14. DOI: 10.3389/fnhum.2018.00014.
[2]QIU S, YANG B H, CHEN X G, et al. Research progress in encoding and decoding technology for non-invasive brain-computer interfaces[J]. Journal of Image and Graphics, 2023, 28(6): 1543-1566. DOI: 10.11834/jig.230031. (in Chinese)
[3]ZHANG J C, WANG M. A survey on robots controlled by motor imagery brain-computer interfaces[J]. Cognitive Robotics, 2021, 1: 12-24. DOI: 10.1016/j.cogr.2021.02.001.
[4]PFURTSCHELLER G, LOPES DA SILVA F H. Event-related EEG/MEG synchronization and desynchronization: basic principles[J]. Clinical Neurophysiology, 1999, 110(11): 1842-1857. DOI: 10.1016/S1388-2457(99)00141-8.
[5]ALTAHERI H, MUHAMMAD G, ALSULAIMAN M, et al. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: a review[J]. Neural Computing and Applications, 2023, 35(20): 14681-14722. DOI: 10.1007/s00521-021-06352-5.
[6]BLANKERTZ B, TOMIOKA R, LEMM S, et al. Optimizing spatial filters for robust EEG single-trial analysis[J]. IEEE Signal Processing Magazine, 2008, 25(1): 41-56. DOI: 10.1109/MSP.2008.4408441.
[7]PAN L C, WANG K, XU M P, et al. A review of common spatial pattern and its extended algorithms for motor intention decoding[J]. Chinese Journal of Biomedical Engineering, 2022, 41(5): 577-588. DOI: 10.3969/j.issn.0258-8021.2022.05.007. (in Chinese)
[8]ANG K K, CHIN Z Y, ZHANG H H, et al. Filter bank common spatial pattern (FBCSP) in brain-computer interface[C]// 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). Los Alamitos, CA: IEEE Computer Society, 2008: 2390-2397. DOI: 10.1109/IJCNN.2008.4634130.
[9]KOLDA T G, BADER B W. Tensor decompositions and applications[J]. SIAM Review, 2009, 51(3): 455-500. DOI: 10.1137/07070111X.
[10]BARACHANT A, BONNET S, CONGEDO M, et al. Riemannian geometry applied to BCI classification[C]// Latent Variable Analysis and Signal Separation. Berlin: Springer, 2010: 629-636. DOI: 10.1007/978-3-642-15995-4_78.
[11]XU H, HE H, ZHANG H M, et al. Transfer learning of motor imagery EEG signals in the tangent space of the Riemannian manifold[J]. Chinese Journal of Biomedical Engineering, 2023, 42(6): 659-667. DOI: 10.3969/j.issn.0258-8021.2023.06.003. (in Chinese)
[12]ROSIPAL R, ROŠŤÁKOVÁ Z, TREJO L J. Tensor decomposition of human narrowband oscillatory brain activity in frequency, space and time[J]. Biological Psychology, 2022, 169: 108287. DOI: 10.1016/j.biopsycho.2022.108287.
[13]HUANG S L. Research on tensor-based space-frequency analysis methods for motor imagery EEG signals[D]. Harbin: Harbin Institute of Technology, 2021. DOI: 10.27061/d.cnki.ghgdu.2021.004962. (in Chinese)
[14]TANG X L, YANG C Q, SUN X, et al. Motor imagery EEG decoding based on multi-scale hybrid networks and feature enhancement[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, 31: 1208-1218. DOI: 10.1109/TNSRE.2023.3242280.
[15]SCHIRRMEISTER R T, SPRINGENBERG J T, FIEDERER L D J, et al. Deep learning with convolutional neural networks for EEG decoding and visualization[J]. Human Brain Mapping, 2017, 38(11): 5391-5420. DOI: 10.1002/hbm.23730.
[16]LAWHERN V J, SOLON A J, WAYTOWICH N R, et al. EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces[J]. Journal of Neural Engineering, 2018, 15(5): 056013. DOI: 10.1088/1741-2552/aace8c.
[17]MANE R, ROBINSON N, VINOD A P, et al. A multi-view CNN with novel variance layer for motor imagery brain computer interface[C]// 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). Los Alamitos, CA: IEEE Computer Society, 2020: 2950-2953. DOI: 10.1109/EMBC44109.2020.9175874.
[18]BANG J S, LEE M H, FAZLI S, et al. Spatio-spectral feature representation for motor imagery classification using convolutional neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(7): 3038-3049. DOI: 10.1109/TNNLS.2020.3048385.
[19]SAUSENG P, KLIMESCH W. What does phase information of oscillatory brain activity tell us about cognitive processes?[J]. Neuroscience & Biobehavioral Reviews, 2008, 32(5): 1001-1013. DOI: 10.1016/j.neubiorev.2008.03.014.
[20]HELFRICH R F, KNEPPER H, NOLTE G, et al. Spectral fingerprints of large-scale cortical dynamics during ambiguous motion perception[J]. Human Brain Mapping, 2016, 37(11): 4099-4111. DOI: 10.1002/hbm.23298.
[21]POURALI H, OMRANPOUR H. CSP-Ph-PS: learning CSP-phase space and Poincaré sections based on evolutionary algorithm for EEG signals recognition[J]. Expert Systems with Applications, 2023, 211: 118621. DOI: 10.1016/j.eswa.2022.118621.
[22]LACHAUX J P, RODRIGUEZ E, MARTINERIE J, et al. Measuring phase synchrony in brain signals[J]. Human Brain Mapping, 1999, 8(4): 194-208. DOI: 10.1002/(SICI)1097-0193(1999)8:4<194::AID-HBM4>3.0.CO;2-C.
[23]CHEN J C, CUI Y G, WANG H, et al. Deep learning approach for detection of unfavorable driving state based on multiple phase synchronization between multi-channel EEG signals[J]. Information Sciences, 2024, 658: 120070. DOI: 10.1016/j.ins.2023.120070.
[24]ZHONG W X, AN X W, DI Y, et al. A review of identity feature extraction methods based on EEG signals[J]. Journal of Biomedical Engineering, 2021, 38(6): 1203-1210. DOI: 10.7507/1001-5515.202102057. (in Chinese)
[25]XU G Y, WANG Z Y, XU T H, et al. Engagement recognition using a multi-domain feature extraction method based on correlation-based common spatial patterns[J]. Applied Sciences, 2023, 13(21): 11924. DOI: 10.3390/app132111924.
[26]FAN C C, YANG B H, LI X O, et al. Temporal-frequency-phase feature classification using 3D-convolutional neural networks for motor imagery and movement[J]. Frontiers in Neuroscience, 2023, 17: 1250991. DOI: 10.3389/fnins.2023.1250991.
[27]CHAKRABORTY B, GHOSH L, KONAR A. Designing phase-sensitive common spatial pattern filter to improve brain-computer interfacing[J]. IEEE Transactions on Biomedical Engineering, 2020, 67(7): 2064-2072. DOI: 10.1109/TBME.2019.2954470.
[28]HUANG S L, CAI G Q, WANG T, et al. Amplitude-phase information measurement on Riemannian manifold for motor imagery-based BCI[J]. IEEE Signal Processing Letters, 2021, 28: 1310-1314. DOI: 10.1109/LSP.2021.3087099.
[29]TRAN D, BOURDEV L, FERGUS R, et al. Learning spatiotemporal features with 3D convolutional networks[C]// 2015 IEEE International Conference on Computer Vision (ICCV). Los Alamitos, CA: IEEE Computer Society, 2015: 4489-4497. DOI: 10.1109/ICCV.2015.510.
[30]HO J, SALIMANS T, GRITSENKO A, et al. Video diffusion models[C]// Advances in Neural Information Processing Systems 35 (NeurIPS 2022). Red Hook: Curran Associates, Inc., 2022: 8633-8646.
[31]JAMIOŁKOWSKI A. Linear transformations which preserve trace and positive semidefiniteness of operators[J]. Reports on Mathematical Physics, 1972, 3(4): 275-278. DOI: 10.1016/0034-4877(72)90011-0.
[32]LEE C Y, HASEGAWA H, GAO S C. Complex-valued neural networks: a comprehensive survey[J]. IEEE/CAA Journal of Automatica Sinica, 2022, 9(8): 1406-1426. DOI: 10.1109/JAS.2022.105743.
[33]GOLIK P, DOETSCH P, NEY H. Cross-entropy vs. squared error training: a theoretical and experimental comparison[C]// Proceedings of the Interspeech 2013. Baixas: International Speech Communication Association, 2013: 1756-1760. DOI: 10.21437/Interspeech.2013-436.
[34]SHI G J, SHANECHI M M, AARABI P. On the importance of phase in human speech recognition[J]. IEEE Transactions on Audio Speech and Language Processing, 2006, 14(5): 1867-1874. DOI: 10.1109/TSA.2005.858512.
[35]ARJOVSKY M, SHAH A, BENGIO Y. Unitary evolution recurrent neural networks[C]// Proceedings of the 33rd International Conference on Machine Learning. New York: JMLR, 2016: 1120-1128. DOI: 10.5555/3045390.3045509.
[36]TRABELSI C, BILANIUK O, ZHANG Y, et al. Deep complex networks[EB/OL]. (2018-02-25)[2024-09-01]. https://arxiv.org/abs/1705.09792. DOI: 10.48550/arXiv.1705.09792.
[37]HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Los Alamitos, CA: IEEE Computer Society, 2016: 770-778. DOI: 10.1109/CVPR.2016.90.
[38]KINGMA D P, BA J. Adam: a method for stochastic optimization[EB/OL]. (2017-01-30)[2024-09-01]. https://arxiv.org/abs/1412.6980. DOI: 10.48550/arXiv.1412.6980.
[39]TANGERMANN M, MÜLLER K R, AERTSEN A, et al. Review of the BCI competition IV[J]. Frontiers in Neuroscience, 2012, 6: 55. DOI: 10.3389/fnins.2012.00055.
[40]LEE M H, KWON O Y, KIM Y J, et al. EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy[J]. GigaScience, 2019, 8(5): giz002. DOI: 10.1093/gigascience/giz002.
[41]BLANKERTZ B, LEMM S, TREDER M, et al. Single-trial analysis and classification of ERP components: a tutorial[J]. NeuroImage, 2011, 56(2): 814-825. DOI: 10.1016/j.neuroimage.2010.06.048.
[42]JAMIESON A R, GIGER M L, DRUKKER K, et al. Exploring nonlinear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE[J]. Medical Physics, 2010, 37(1): 339-351. DOI: 10.1118/1.3267037.
[43]DAVIES D L, BOULDIN D W. A cluster separation measure[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1979, PAMI-1(2): 224-227. DOI: 10.1109/TPAMI.1979.4766909.