Search Results (613)

Search Parameters:
Keywords = brain-computer interface (BCI)

19 pages, 440 KiB  
Systematic Review
Systematic Review of EEG-Based Imagined Speech Classification Methods
by Salwa Alzahrani, Haneen Banjar and Rsha Mirza
Sensors 2024, 24(24), 8168; https://doi.org/10.3390/s24248168 - 21 Dec 2024
Abstract
This systematic review examines EEG-based imagined speech classification, emphasizing directional words essential for brain–computer interface (BCI) development. This study employed a structured methodology to analyze approaches using public datasets, ensuring systematic evaluation and validation of results. This review highlights the feature extraction techniques that are pivotal to classification performance. These include deep learning, adaptive optimization, and frequency-specific decomposition, which enhance accuracy and robustness. Classification methods were explored by comparing traditional machine learning with deep learning and emphasizing the role of brain lateralization in imagined speech for effective recognition and classification. This study discusses the challenges of generalizability and scalability in imagined speech recognition, focusing on subject-independent approaches and multiclass scalability. Performance benchmarking across various datasets and methodologies revealed varied classification accuracies, reflecting the complexity and variability of EEG signals. This review concludes that challenges remain despite progress, particularly in classifying directional words. Future research directions include improved signal processing techniques, advanced neural network architectures, and more personalized, adaptive BCI systems. This review is critical for future efforts to develop practical communication tools for individuals with speech and motor impairments using EEG-based BCIs.
(This article belongs to the Section Biomedical Sensors)
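None of the reviewed pipelines publish code in this listing, but the frequency-specific decomposition the review highlights is commonly realized as band-pass filtering into canonical EEG bands followed by log band-power extraction. A minimal sketch, assuming illustrative band edges, sampling rate, and array shapes:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Canonical EEG bands often used in frequency-specific decomposition.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(eeg, fs=256):
    """eeg: (n_channels, n_samples); returns log band power per channel and band."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=-1)
        feats.append(np.log(np.mean(filtered ** 2, axis=-1) + 1e-12))
    return np.concatenate(feats)

# Synthetic example: 8 channels, 2 s at 256 Hz.
x = np.random.default_rng(0).standard_normal((8, 512))
print(band_power_features(x).shape)  # (32,): 8 channels x 4 bands
```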

30 pages, 4162 KiB  
Article
Enhancing Deep-Learning Classification for Remote Motor Imagery Rehabilitation Using Multi-Subject Transfer Learning in IoT Environment
by Joharah Khabti, Saad AlAhmadi and Adel Soudani
Sensors 2024, 24(24), 8127; https://doi.org/10.3390/s24248127 - 19 Dec 2024
Abstract
One of the most promising applications for electroencephalogram (EEG)-based brain–computer interfaces (BCIs) is motor rehabilitation through motor imagery (MI) tasks. However, current MI training requires physical attendance, while remote MI training can be applied anywhere, facilitating flexible rehabilitation. Providing remote MI training raises challenges to ensuring an accurate recognition of MI tasks by healthcare providers, in addition to managing computation and communication costs. The MI tasks are recognized through EEG signal processing and classification, which can drain sensor energy due to the complexity of the data and the presence of redundant information, often influenced by subject-dependent factors. To address these challenges, we propose in this paper a multi-subject transfer-learning approach for an efficient MI training framework in remote rehabilitation within an IoT environment. For efficient implementation, we propose an IoT architecture that includes cloud/edge computing as a solution to enhance the system’s efficiency and reduce the use of network resources. Furthermore, deep-learning classification with and without channel selection is applied in the cloud, while multi-subject transfer-learning classification is utilized at the edge node. Various transfer-learning strategies, including different epochs, freezing layers, and data divisions, were employed to improve accuracy and efficiency. To validate this framework, we used the BCI IV 2a dataset, focusing on subjects 7, 8, and 9 as targets. The results demonstrated that our approach significantly enhanced the average accuracy in both multi-subject and single-subject transfer-learning classification. In three-subject transfer-learning classification, the FCNNA model achieved up to 79.77% accuracy without channel selection and 76.90% with channel selection. For two-subject and single-subject transfer learning, the application of transfer learning improved the average accuracy by up to 6.55% and 12.19%, respectively, compared to classification without transfer learning. This framework offers a promising solution for remote MI rehabilitation, providing both accurate task recognition and efficient resource usage.
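The FCNNA architecture itself is not given in this listing, so the sketch below uses a small stand-in network purely to illustrate the layer-freezing strategy the abstract describes: pretrain on source subjects, then fine-tune only the classifier head on the target subject. The 22-channel, 4-class shapes follow the BCI IV 2a dataset; everything else is assumed:

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Stand-in for the paper's FCNNA model (the real architecture is not shown here)."""
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),
            nn.BatchNorm1d(16), nn.ELU(), nn.AdaptiveAvgPool1d(32),
        )
        self.classifier = nn.Linear(16 * 32, n_classes)

    def forward(self, x):          # x: (batch, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = TinyEEGNet()               # assume weights pretrained on source subjects
for p in model.features.parameters():
    p.requires_grad = False        # freeze the feature extractor
optimizer = torch.optim.Adam(      # only the classifier head adapts to the target
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```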

20 pages, 6888 KiB  
Article
The Effect of Processing Techniques on the Classification Accuracy of Brain-Computer Interface Systems
by András Adolf, Csaba Márton Köllőd, Gergely Márton, Ward Fadel and István Ulbert
Brain Sci. 2024, 14(12), 1272; https://doi.org/10.3390/brainsci14121272 - 18 Dec 2024
Abstract
Background/Objectives: Accurately classifying Electroencephalography (EEG) signals is essential for the effective operation of Brain-Computer Interfaces (BCI), which is needed for reliable neurorehabilitation applications. However, many factors in the processing pipeline can influence classification performance. The objective of this study is to assess the effects of different processing steps on classification accuracy in EEG-based BCI systems. Methods: This study explores the impact of various processing techniques and stages, including the FASTER algorithm for artifact rejection (AR), frequency filtering, transfer learning, and cropped training. The Physionet dataset, consisting of four motor imagery classes, was used as input due to its relatively large number of subjects. The raw EEG was tested with EEGNet and Shallow ConvNet. To examine the impact of adding a spatial dimension to the input data, we also used the Multi-branch Conv3D Net and developed two new models, Conv2D Net and Conv3D Net. Results: Our analysis showed that classification accuracy can be affected by many factors at every stage. Applying the AR method, for instance, can either enhance or degrade classification performance, depending on the subject and the specific network architecture. Transfer learning was effective in improving the performance of all networks for both raw and artifact-rejected data. However, the improvement in classification accuracy for artifact-rejected data was less pronounced compared to unfiltered data, resulting in reduced precision. For instance, the best classifier achieved 46.1% accuracy on unfiltered data, which increased to 63.5% with transfer learning. In the filtered case, accuracy rose from 45.5% to only 55.9% when transfer learning was applied. An unexpected outcome regarding frequency filtering was observed: networks demonstrated better classification performance when focusing on lower-frequency components. Higher frequency ranges were more discriminative for EEGNet and Shallow ConvNet, but only when cropped training was applied. Conclusions: The findings of this study highlight the complex interaction between processing techniques and neural network performance, emphasizing the necessity for customized processing approaches tailored to specific subjects and network architectures.
(This article belongs to the Special Issue The Application of EEG in Neurorehabilitation)
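Among the processing steps the study varies, cropped training is the easiest to show compactly: each trial is cut into overlapping windows that inherit the trial's label, multiplying the training data. A sketch with arbitrarily chosen crop length and stride (Physionet's 160 Hz sampling rate is real; the rest is assumed):

```python
import numpy as np

def crop_trials(X, y, crop_len, stride):
    """Cut each (n_channels, n_samples) trial into overlapping, label-sharing crops."""
    crops, labels = [], []
    for trial, label in zip(X, y):
        for start in range(0, trial.shape[-1] - crop_len + 1, stride):
            crops.append(trial[:, start:start + crop_len])
            labels.append(label)
    return np.stack(crops), np.asarray(labels)

# 20 trials, 64 channels, 4 s at 160 Hz (Physionet motor imagery recordings).
X = np.random.randn(20, 64, 640)
y = np.random.randint(0, 4, size=20)
Xc, yc = crop_trials(X, y, crop_len=320, stride=80)
print(Xc.shape)  # (100, 64, 320): 5 crops per trial
```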

15 pages, 1937 KiB  
Article
Improving the Performance of Electrotactile Brain–Computer Interface Using Machine Learning Methods on Multi-Channel Features of Somatosensory Event-Related Potentials
by Marija Novičić, Olivera Djordjević, Vera Miler-Jerković, Ljubica Konstantinović and Andrej M. Savić
Sensors 2024, 24(24), 8048; https://doi.org/10.3390/s24248048 - 17 Dec 2024
Abstract
Traditional tactile brain–computer interfaces (BCIs), particularly those based on steady-state somatosensory evoked potentials, face challenges such as lower accuracy, reduced bit rates, and the need for spatially distant stimulation points. In contrast, using transient electrical stimuli offers a promising alternative for generating tactile BCI control signals: somatosensory event-related potentials (sERPs). This study aimed to optimize the performance of a novel electrotactile BCI by employing advanced feature extraction and machine learning techniques on sERP signals for the classification of users’ selective tactile attention. The experimental protocol involved ten healthy subjects performing a tactile attention task, with EEG signals recorded from five EEG channels over the sensory–motor cortex. We employed sequential forward selection (SFS) of features from temporal sERP waveforms of all EEG channels. We systematically tested classification performance using machine learning algorithms, including logistic regression, k-nearest neighbors, support vector machines, random forests, and artificial neural networks. We explored the effects of the number of stimuli required to obtain sERP features for classification and their influence on accuracy and information transfer rate. Our approach indicated significant improvements in classification accuracy compared to previous studies. We demonstrated that the number of stimuli for sERP generation can be reduced while increasing the information transfer rate without a statistically significant decrease in classification accuracy. In the case of the support vector machine classifier, we achieved a mean accuracy over 90% for 10 electrical stimuli, while for 6 stimuli, the accuracy decreased by less than 7%, and the information transfer rate increased by 60%. This research advances methods for tactile BCI control based on event-related potentials. This work is significant since tactile stimulation is an understudied modality for BCI control, and electrically induced sERPs are the least studied control signals in reactive BCIs. Exploring and optimizing the parameters of sERP elicitation, as well as feature extraction and classification methods, is crucial for addressing the accuracy versus speed trade-off in various assistive BCI applications where the tactile modality may have added value.
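The pairing of sequential forward selection with an SVM maps directly onto scikit-learn primitives. A hedged sketch, with a hypothetical feature matrix standing in for the pooled multi-channel sERP waveform features (all dimensions and the number of selected features are assumptions):

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 40))   # trials x candidate sERP features
y = rng.integers(0, 2, size=100)     # attended vs. unattended stimulus

svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
sfs = SequentialFeatureSelector(svm, n_features_to_select=10,
                                direction="forward", cv=5)
X_sel = sfs.fit_transform(X, y)      # greedy forward selection
print(cross_val_score(svm, X_sel, y, cv=5).mean())
```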

14 pages, 2531 KiB  
Review
Media Representation of the Ethical Issues Pertaining to Brain–Computer Interface (BCI) Technology
by Savannah Beck, Yuliya Liberman and Veljko Dubljević
Brain Sci. 2024, 14(12), 1255; https://doi.org/10.3390/brainsci14121255 - 14 Dec 2024
Abstract
Background/Objectives: Brain–computer interfaces (BCIs) are a rapidly developing technology that captures and transmits brain signals to external sources, allowing the user control of devices such as prosthetics. BCI technology offers the potential to restore physical capabilities in the body and change how we interact and communicate with computers and each other. While BCI technology has existed for decades, recent developments have caused the technology to generate a host of ethical issues and discussions in both academic and public circles. Given that media representation has the potential to shape public perception and policy, it is necessary to evaluate the space that these issues take in public discourse. Methods: We conducted a rapid review of media articles in English discussing ethical issues of BCI technology from 2013 to 2024 as indexed by LexisNexis. Our searches yielded 675 articles, with a final sample containing 182 articles. We assessed the themes of the articles and coded them based on the ethical issues discussed, ethical frameworks, recommendations, tone, and application of technology. Results: Our results showed a marked rise in interest in media articles over time, signaling an increased focus on this topic. The majority of articles adopted a balanced or neutral tone when discussing BCIs and focused on ethical issues regarding privacy, autonomy, and regulation. Conclusions: Current discussion of ethical issues reflects growing news coverage of companies such as Neuralink, and reveals a mounting distrust of BCI technology. The growing recognition of ethical considerations in BCI highlights the importance of ethical discourse in shaping the future of the field.
(This article belongs to the Special Issue Emerging Topics in Brain-Computer Interface)

24 pages, 9053 KiB  
Article
An Ensemble Deep Learning Approach for EEG-Based Emotion Recognition Using Multi-Class CSP
by Behzad Yousefipour, Vahid Rajabpour, Hamidreza Abdoljabbari, Sobhan Sheykhivand and Sebelan Danishvar
Biomimetics 2024, 9(12), 761; https://doi.org/10.3390/biomimetics9120761 - 14 Dec 2024
Abstract
In recent years, significant advancements have been made in the field of brain–computer interfaces (BCIs), particularly in the area of emotion recognition using EEG signals. The majority of earlier research in this field has overlooked the spatial–temporal characteristics of EEG signals, which are critical for accurate emotion recognition. In this study, a novel approach is presented for classifying emotions into three categories, positive, negative, and neutral, using a custom-collected dataset. The dataset used in this study was specifically collected for this purpose from 16 participants, comprising EEG recordings corresponding to the three emotional states induced by musical stimuli. A multi-class Common Spatial Pattern (MCCSP) technique was employed for the processing stage of the EEG signals. These processed signals were then fed into an ensemble model comprising three autoencoders with Convolutional Neural Network (CNN) layers. A classification accuracy of 99.44 ± 0.39% for the three emotional classes was achieved by the proposed method. This performance surpasses previous studies, demonstrating the effectiveness of the approach. The high accuracy indicates that the method could be a promising candidate for future BCI applications, providing a reliable means of emotion detection.
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)
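The MCCSP stage can be approximated with MNE-Python's CSP decoder, which supports more than two classes; the paper's ensemble of CNN autoencoders is too large for a short sketch, so a simple LDA stands in as the classifier. All data below are synthetic assumptions:

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((90, 32, 1000))  # trials x channels x samples
y = rng.integers(0, 3, size=90)          # positive / negative / neutral

# CSP spatial filtering followed by a linear classifier.
clf = make_pipeline(CSP(n_components=6, log=True), LinearDiscriminantAnalysis())
clf.fit(X, y)
print(clf.predict(X[:5]))
```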

19 pages, 1008 KiB  
Article
EEG-Based Mobile Robot Control Using Deep Learning and ROS Integration
by Bianca Ghinoiu, Victor Vlădăreanu, Ana-Maria Travediu, Luige Vlădăreanu, Abigail Pop, Yongfei Feng and Andreea Zamfirescu
Technologies 2024, 12(12), 261; https://doi.org/10.3390/technologies12120261 - 14 Dec 2024
Abstract
Efficient BCIs (Brain-Computer Interfaces) harnessing EEG (Electroencephalography) have shown potential in controlling mobile robots and present new possibilities for assistive technologies. This study explores the integration of advanced deep learning models—ASTGCN, EEGNetv4, and a combined CNN-LSTM architecture—with ROS (Robot Operating System) to control a two-wheeled mobile robot. The models were trained using a published EEG dataset, which includes signals from subjects performing thought-based tasks. Each model was evaluated based on its accuracy, F1-score, and latency. The CNN-LSTM architecture model exhibited the best performance on the cross-subject strategy with an accuracy of 88.5%, demonstrating significant potential for real-time applications. Integration with ROS was facilitated through a custom middleware, enabling seamless translation of neural commands into robot movements. The findings indicate that the CNN-LSTM model not only outperforms existing EEG-based systems in terms of accuracy but also underscores the practical feasibility of implementing such systems in real-world scenarios. Considering its efficacy, CNN-LSTM shows great potential for future assistive technology. This research contributes to the development of a more intuitive and accessible robotic control system, potentially enhancing the quality of life for individuals with mobility impairments.
(This article belongs to the Special Issue Advanced Autonomous Systems and Artificial Intelligence Stage)
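The custom middleware is not described in this listing, but its core job, turning a decoded EEG class into robot motion, reduces to a small ROS node. A sketch under assumed class-to-velocity mappings; the /cmd_vel topic choice and the decode_latest_window() function are placeholders, not details from the paper:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

# Assumed mapping from decoded class index to (linear, angular) velocity.
CLASS_TO_VELOCITY = {0: (0.2, 0.0), 1: (-0.2, 0.0), 2: (0.0, 0.5), 3: (0.0, -0.5)}

def decode_latest_window():
    """Placeholder for the deep-learning decoder's latest prediction."""
    return 0

def main():
    rospy.init_node("bci_teleop")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(4)  # publish at 4 Hz
    while not rospy.is_shutdown():
        linear, angular = CLASS_TO_VELOCITY[decode_latest_window()]
        msg = Twist()
        msg.linear.x, msg.angular.z = linear, angular
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```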

25 pages, 11868 KiB  
Article
Humanity Test—EEG Data Mediated Artificial Intelligence Multi-Person Interactive System
by Fang Fang, Tanhao Gao and Jie Wu
Sensors 2024, 24(24), 7951; https://doi.org/10.3390/s24247951 - 12 Dec 2024
Abstract
Artificial intelligence (AI) systems are widely applied in various industries and everyday life, particularly in fields such as virtual assistants, healthcare, and education. However, this paper highlights that existing research has often overlooked the philosophical and media aspects. To address this, we developed an interactive system called “Human Nature Test”. In this context, “human nature” refers to emotion and consciousness, while “test” involves a critical analysis of AI technology and an exploration of the differences between humanity and technicality. Additionally, through experimental research and literature analysis, we found that the integration of electroencephalogram (EEG) data with AI systems is becoming a significant trend. The experiment involved 20 participants, with two conditions: C1 (using EEG data) and C2 (without EEG data). The results indicated a significant increase in immersion under the C1 condition, along with a more positive emotional experience. We summarized three design directions: enhancing immersion, creating emotional experiences, and expressing philosophical concepts. Based on these findings, there is potential for further developing EEG data as a medium to enrich interactive experiences, offering new insights into the fusion of technology and human emotion.
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)

27 pages, 2015 KiB  
Article
Developing Innovative Feature Extraction Techniques from the Emotion Recognition Field on Motor Imagery Using Brain–Computer Interface EEG Signals
by Amr F. Mohamed and Vacius Jusas
Appl. Sci. 2024, 14(23), 11323; https://doi.org/10.3390/app142311323 - 4 Dec 2024
Abstract
Research on brain–computer interfaces (BCIs) advances the way scientists understand how the human brain functions. The BCI system, which is based on the use of electroencephalography (EEG) signals to detect motor imagery (MI) tasks, enables opportunities for various applications in stroke rehabilitation, neuroprosthetic devices, and communication tools. BCIs can also be used in emotion recognition (ER) research to depict the sophistication of human emotions by improving mental health monitoring, human–computer interactions, and neuromarketing. To address the low accuracy of MI-BCI, which is a key issue faced by researchers, this study employs a new approach that has been proven to have the potential to enhance motor imagery classification accuracy. The basic idea behind the approach is to apply feature extraction methods from the field of emotion recognition to the field of motor imagery. Six feature sets and four classifiers were explored using four MI classes (left and right hands, both feet, and tongue) from the BCI Competition IV 2a dataset. Statistical, wavelet analysis, Hjorth parameters, higher-order spectra, fractal dimensions (Katz, Higuchi, and Petrosian), and a five-dimensional combination of all five feature sets were implemented. GSVM, CART, LinearSVM, and SVM with polynomial kernel classifiers were considered. Our findings show that 3D fractal dimensions predominantly outperform all other feature sets, specifically during LinearSVM classification, accomplishing nearly 79.1% mean accuracy, superior to the state-of-the-art results obtained from the referenced MI paper, where CSP reached 73.7% and Riemannian methods reached 75.5%. It even performs as well as the latest TWSB method, which also reached approximately 79.1%. These outcomes emphasize that the new hybrid approach in the motor imagery/emotion recognition field improves classification accuracy when applied to motor imagery EEG signals, thus enhancing MI-BCI performance.
(This article belongs to the Section Applied Neuroscience and Neural Engineering)
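Two of the feature families named here, Hjorth parameters and the Katz fractal dimension, have compact closed-form definitions that are easy to verify. A sketch of both on a synthetic signal (the wavelet, higher-order-spectra, and remaining fractal features are omitted):

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(x, n=2)
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return activity, mobility, complexity

def katz_fd(x):
    """Katz fractal dimension: D = log10(n) / (log10(n) + log10(d / L))."""
    steps = np.abs(np.diff(x))
    L, n = steps.sum(), len(steps)   # curve length, number of steps
    d = np.max(np.abs(x - x[0]))     # max distance from the first point
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

sig = np.random.default_rng(0).standard_normal(1000)
print(hjorth_parameters(sig), katz_fd(sig))
```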

40 pages, 9499 KiB  
Review
Review of Multimodal Data Acquisition Approaches for Brain–Computer Interfaces
by Sayantan Ghosh, Domokos Máthé, Purushothaman Bhuvana Harishita, Pramod Sankarapillai, Anand Mohan, Raghavan Bhuvanakantham, Balázs Gulyás and Parasuraman Padmanabhan
BioMed 2024, 4(4), 548-587; https://doi.org/10.3390/biomed4040041 - 2 Dec 2024
Abstract
There have been multiple technological advancements that promise to gradually enable devices to measure and record signals with high resolution and accuracy in the domain of brain–computer interfaces (BCIs). Multimodal BCIs have been able to gain significant traction given their potential to enhance signal processing by integrating different recording modalities. In this review, we explore the integration of multiple neuroimaging and neurophysiological modalities, including electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), electrocorticography (ECoG), and single-unit activity (SUA). This multimodal approach leverages the high temporal resolution of EEG and MEG with the spatial precision of fMRI, the invasive yet precise nature of ECoG, and the single-neuron specificity provided by SUA. The paper highlights the advantages of integrating multiple modalities, such as increased accuracy and reliability, and discusses the challenges and limitations of multimodal integration. Furthermore, we explain the data acquisition approaches for each of these modalities. We also demonstrate various software programs that help in extracting, cleaning, and refining the data. We conclude this paper with a discussion on the available literature, highlighting recent advances, challenges, and future directions for each of these modalities.
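As one concrete example of the extract-and-clean software passes such a review surveys, MNE-Python chains band-pass filtering, notch filtering, and ICA-based artifact removal in a few calls. A hedged sketch; the file name, component count, and excluded component index are placeholders chosen for illustration:

```python
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # placeholder file
raw.filter(l_freq=1.0, h_freq=40.0)   # keep the band of interest
raw.notch_filter(freqs=50.0)          # suppress power-line interference

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0]                     # e.g., an ocular component found by inspection
ica.apply(raw)                        # reconstruct the cleaned recording
```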

26 pages, 4174 KiB  
Article
Multimodal Explainability Using Class Activation Maps and Canonical Correlation for MI-EEG Deep Learning Classification
by Marcos Loaiza-Arias, Andrés Marino Álvarez-Meza, David Cárdenas-Peña, Álvaro Ángel Orozco-Gutierrez and German Castellanos-Dominguez
Appl. Sci. 2024, 14(23), 11208; https://doi.org/10.3390/app142311208 - 1 Dec 2024
Abstract
Brain–computer interfaces (BCIs) are essential in advancing medical diagnosis and treatment by providing non-invasive tools to assess neurological states. Among these, motor imagery (MI), in which patients mentally simulate motor tasks without physical movement, has proven to be an effective paradigm for diagnosing and monitoring neurological conditions. Electroencephalography (EEG) is widely used for MI data collection due to its high temporal resolution, cost-effectiveness, and portability. However, EEG signals can be contaminated by noise from a number of sources, including physiological artifacts and electromagnetic interference, and they vary from person to person, which makes feature extraction and signal interpretation harder. Additionally, this variability, influenced by genetic and cognitive factors, presents challenges for developing subject-independent solutions. To address these limitations, this paper presents a Multimodal and Explainable Deep Learning (MEDL) approach for MI-EEG classification and physiological interpretability. Our approach involves the following: (i) evaluating different deep learning (DL) models for subject-dependent MI-EEG discrimination; (ii) employing class activation mapping (CAM) to visualize relevant MI-EEG features; and (iii) utilizing a questionnaire–MI performance canonical correlation analysis (QMIP-CCA) to provide multidomain interpretability. On the GIGAScience MI dataset, experiments show that shallow neural networks are good at classifying MI-EEG data, while the CAM-based method finds spatio-frequency patterns. Moreover, the QMIP-CCA framework successfully correlates physiological data with MI-EEG performance, offering an enhanced, interpretable solution for BCIs.
(This article belongs to the Special Issue Electroencephalography (EEG) in Assessment of Engagement and Workload)
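The QMIP-CCA step pairs questionnaire responses with MI performance through canonical correlation analysis, which scikit-learn provides directly. A sketch on synthetic stand-ins; the item and metric counts are assumptions, while the 52-subject count matches the GIGAScience MI dataset:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
Q = rng.standard_normal((52, 10))  # subjects x questionnaire items (assumed)
P = rng.standard_normal((52, 3))   # subjects x MI performance metrics (assumed)

cca = CCA(n_components=2)
Q_c, P_c = cca.fit_transform(Q, P)
for k in range(2):  # correlation between each pair of canonical variates
    r = np.corrcoef(Q_c[:, k], P_c[:, k])[0, 1]
    print(f"canonical correlation {k}: {r:.2f}")
```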

24 pages, 5123 KiB  
Article
An Empirical Model-Based Algorithm for Removing Motion-Caused Artifacts in Motor Imagery EEG Data for Classification Using an Optimized CNN Model
by Rajesh Kannan Megalingam, Kariparambil Sudheesh Sankardas and Sakthiprasad Kuttankulangara Manoharan
Sensors 2024, 24(23), 7690; https://doi.org/10.3390/s24237690 - 30 Nov 2024
Abstract
Electroencephalography (EEG) is a non-invasive technique that offers high temporal resolution and is cost-effective, portable, and easy to use. Motor imagery EEG (MI-EEG) data classification is one of the key applications within brain–computer interface (BCI) systems, utilizing EEG signals from motor imagery tasks. BCIs are very useful for people with severe mobility issues, such as quadriplegics, spinal cord injury patients, and stroke patients, giving them a degree of freedom to perform activities such as driving a wheelchair without the need for a caretaker. However, motion artifacts can significantly affect the quality of EEG recordings. The conventional EEG enhancement algorithms are effective in removing ocular and muscle artifacts for a stationary subject but not as effective when the subject is in motion, e.g., a wheelchair user. In this research study, we propose an empirical error model-based artifact removal approach for the cross-subject classification of motor imagery (MI) EEG data using a modified CNN-based deep learning algorithm, designed to assist wheelchair users with severe mobility issues. The classification method applies to real tasks with measured EEG data, focusing on accurately interpreting motor imagery signals for practical application. The empirical error model evolved from the inertial sensor-based acceleration data of the subject in motion, the weight of the wheelchair, the weight of the subject, and the surface friction of the terrain under the wheelchair. Three different wheelchairs and five different terrains, including road, brick, concrete, carpet, and marble, are used for artifact data recording. After evaluating and benchmarking the proposed CNN and empirical model, the classification accuracy achieved is 94.04% for distinguishing between four specific classes: left, right, front, and back. This accuracy demonstrates the model’s effectiveness compared to other state-of-the-art techniques. The comparative results show that the proposed approach is a potentially effective way to raise the decoding efficiency of motor imagery BCI.
(This article belongs to the Section Biomedical Sensors)
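The empirical error model itself depends on wheelchair weight, subject weight, and terrain friction, none of which can be reconstructed from this abstract; a generic version of the same idea regresses each EEG channel on accelerometer reference signals and subtracts the fitted motion component. A sketch under that simplification:

```python
import numpy as np

def remove_motion_artifact(eeg, accel):
    """Least-squares fit of each EEG channel to accelerometer references;
    the fitted motion-related component is subtracted."""
    A = np.column_stack([accel, np.ones(len(accel))])  # references + intercept
    coefs, *_ = np.linalg.lstsq(A, eeg.T, rcond=None)  # one fit per channel
    return eeg - (A @ coefs).T

rng = np.random.default_rng(0)
accel = rng.standard_normal((1000, 3))                     # 3-axis accelerometer
eeg = rng.standard_normal((16, 1000)) + 0.5 * accel[:, 0]  # contaminated EEG
clean = remove_motion_artifact(eeg, accel)
print(np.corrcoef(clean[0], accel[:, 0])[0, 1])  # near zero after removal
```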

15 pages, 2022 KiB  
Article
Selective Auditory Attention Detection Using Combined Transformer and Convolutional Graph Neural Networks
by Masoud Geravanchizadeh, Amir Shaygan Asl and Sebelan Danishvar
Bioengineering 2024, 11(12), 1216; https://doi.org/10.3390/bioengineering11121216 - 30 Nov 2024
Abstract
Attention is one of many human cognitive functions that are essential in everyday life. Given our limited processing capacity, attention helps us focus only on what matters. Focusing attention on one speaker in an environment with many speakers is a critical ability of the human auditory system. This paper proposes a new end-to-end method based on the combined transformer and graph convolutional neural network (TraGCNN) that can effectively detect auditory attention from electroencephalograms (EEGs). This approach eliminates the need for manual feature extraction, which is often time-consuming and subjective. Here, the EEG signals are first converted to graphs. We then extract attention information from these graphs using spatial and temporal approaches. Finally, our models are trained with these data. Our model can detect auditory attention in both the spatial and temporal domains. Here, the EEG input is first processed by transformer layers to obtain a sequential representation of EEG based on attention onsets. Then, a family of graph convolutional layers is used to find the most active electrodes using the spatial position of electrodes. Finally, the corresponding EEG features of active electrodes are fed into the graph attention layers to detect auditory attention. The Fuglsang 2020 dataset is used in the experiments to train and test the proposed and baseline systems. The new TraGCNN approach, as compared with state-of-the-art attention classification methods from the literature, yields the highest performance in terms of accuracy (80.12%) as a classification metric. Additionally, the proposed model results in higher performance than our previous graph-based model for different lengths of EEG segments. The new TraGCNN approach is advantageous because attention detection is achieved from EEG signals of subjects without requiring speech stimuli, as is the case with conventional auditory attention detection methods. Furthermore, examining the proposed model for different lengths of EEG segments shows that the model is faster than our previous graph-based detection method in terms of computational complexity. The findings of this study have important implications for the understanding and assessment of auditory attention, which is crucial for many applications, such as brain–computer interface (BCI) systems, speech separation, and neuro-steered hearing aid development.
(This article belongs to the Section Biosignal Processing)

27 pages, 7119 KiB  
Article
MACNet: A Multidimensional Attention-Based Convolutional Neural Network for Lower-Limb Motor Imagery Classification
by Ling-Long Li, Guang-Zhong Cao, Yue-Peng Zhang, Wan-Chen Li and Fang Cui
Sensors 2024, 24(23), 7611; https://doi.org/10.3390/s24237611 - 28 Nov 2024
Abstract
Decoding lower-limb motor imagery (MI) is highly important in brain–computer interfaces (BCIs) and rehabilitation engineering. However, it is challenging to classify lower-limb MI from electroencephalogram (EEG) signals, because lower-limb motions (LLMs), including MI, have closely spaced physiological representations in the human brain and generate low-quality EEG signals. To address this challenge, this paper proposes a multidimensional attention-based convolutional neural network (CNN), termed MACNet, which is specifically designed for lower-limb MI classification. MACNet integrates a temporal refining module and an attention-enhanced convolutional module by leveraging the local and global feature representation abilities of CNNs and attention mechanisms. The temporal refining module adaptively investigates critical information from each electrode channel to refine EEG signals along the temporal dimension. The attention-enhanced convolutional module extracts temporal and spatial features while refining the feature maps across the channel and spatial dimensions. Owing to the scarcity of public datasets available for lower-limb MI, a dedicated lower-limb MI dataset involving four routine LLMs is built, consisting of 10 subjects over 20 sessions. Comparison experiments and ablation studies are conducted on this dataset and a public BCI Competition IV 2a EEG dataset. The experimental results show that MACNet achieves state-of-the-art performance and outperforms alternative models for the subject-specific mode. Visualization analysis reveals the excellent feature learning capabilities of MACNet and the potential relationship between lower-limb MI and brain activity. The effectiveness and generalizability of MACNet are verified.

19 pages, 10741 KiB  
Article
Electroencephalography-Based Motor Imagery Classification Using Multi-Scale Feature Fusion and Adaptive Lasso
by Shimiao Chen, Nan Li, Xiangzeng Kong, Dong Huang and Tingting Zhang
Big Data Cogn. Comput. 2024, 8(12), 169; https://doi.org/10.3390/bdcc8120169 - 25 Nov 2024
Abstract
Brain–computer interfaces, where motor imagery electroencephalography (EEG) signals are transformed into control commands, offer a promising solution for enhancing the standard of living for disabled individuals. However, the performance of EEG classification has been limited in most studies due to a lack of attention to the complementary information inherent at different temporal scales. Additionally, significant inter-subject variability in sensitivity to biological motion poses another critical challenge in achieving accurate EEG classification in a subject-dependent manner. To address these challenges, we propose a novel machine learning framework combining multi-scale feature fusion, which captures global and local spatial information from different-sized EEG segmentations, and adaptive Lasso-based feature selection, a mechanism for adaptively retaining informative subject-dependent features and discarding irrelevant ones. Experimental results on multiple public benchmark datasets revealed substantial improvements in EEG classification, achieving rates of 81.36%, 75.90%, and 68.30% for the BCIC-IV-2a, SMR-BCI, and OpenBMI datasets, respectively. These results not only surpassed existing methodologies but also underscored the effectiveness of our approach in overcoming specific challenges in EEG classification. Ablation studies further confirmed the efficacy of both the multi-scale feature analysis and adaptive selection mechanisms. This framework marks a significant advancement in the decoding of motor imagery EEG signals, positioning it for practical applications in real-world BCIs.
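The adaptive Lasso used for feature selection can be emulated in scikit-learn by rescaling features with weights derived from an initial estimate, so informative features are penalized less and uninformative ones shrink to zero. The paper applies this to EEG classification features; the sketch below uses a synthetic regression problem for clarity, and gamma, alpha, and all shapes are assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:5] = 2.0                   # only the first 5 features are informative
y = X @ w_true + 0.1 * rng.standard_normal(200)

gamma = 1.0
init = Ridge(alpha=1.0).fit(X, y).coef_         # stage 1: initial estimate
weights = 1.0 / (np.abs(init) ** gamma + 1e-6)  # adaptive penalty weights
lasso = Lasso(alpha=0.05).fit(X / weights, y)   # column rescaling = weighted L1
coef = lasso.coef_ / weights                    # map back to original scale
print(np.flatnonzero(coef))                     # ideally recovers features 0..4
```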