HanBitSa Featured Paper
Musa Mahmood a,b, Noah Kim b,c, Muhammad Mahmood b, Hojoong Kim a,b, Hyeonseok Kim a,b, Nathan Rodeheaver a,b, Mingyu Sang d, Ki Jun Yu d, Woon-Hong Yeo a,b,e,f,*
a George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
b IEN Center for Human-Centric Interfaces and Engineering at the Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA, 30332, USA
c School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
d School of Electrical and Electronic Engineering, Yonsei University, Seoul, 03722, Republic of Korea
e Wallace H. Coulter Department of Biomedical Engineering, Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA, 30332, USA
f Neural Engineering Center, Institute for Materials, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, 30332, USA
*Corresponding author.
Abstract
Noninvasive, wearable brain-computer interfaces (BCIs) see limited use due to their obtrusive form factors and low information transfer rates. Currently available portable BCI systems are limited by device rigidity, bulky form factors, and gel-based skin-contact electrodes, making them prone to noise and motion artifacts. Here, we introduce a virtual reality (VR)-enabled split-eye asynchronous stimulus (SEAS) paradigm that allows a single target to present a different stimulus to each eye. This produces unique asynchronous stimulus patterns measurable with as few as four EEG electrodes, as demonstrated with an improved wireless soft electronic system for portable BCI. The VR-embedded SEAS paradigm offers the potential for higher throughput through a greater number of unique stimuli. A wearable soft platform featuring dry needle electrodes and shielded stretchable interconnects enables high-throughput decoding of steady-state visually evoked potentials (SSVEP) for a text-spelling interface. The combination of skin-conformal electrodes and soft materials yields high-quality SSVEP recordings with minimal motion artifacts, validated by comparing performance against a conventional wearable system. A deep-learning algorithm provides real-time classification across 33 classes from nine human subjects, with an accuracy of 78.93% for 0.8 s of data and 91.73% for 2 s, enabling successful demonstrations of VR text spelling and navigation of a real-world environment. With as few as four recording channels, the system achieves a highly competitive information transfer rate of 243.6 bits/min. Collectively, the VR-enabled soft system offers unique advantages for wireless, real-time monitoring of brain signals in portable BCI, neurological rehabilitation, and disease diagnosis.
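The core idea of SEAS is that each selectable target flickers differently in the left-eye and right-eye views of the VR headset, so the evoked EEG response is a superposition unique to that target; pairing frequencies combinatorially multiplies the number of distinguishable targets without needing more base frequencies. The abstract does not specify the frequency assignments or stimulus rendering, so the following is only a minimal illustrative sketch with hypothetical per-eye frequency pairs and a simplified two-fundamental response model (real SSVEP also contains harmonics and binocular interactions):

import numpy as np

# Hypothetical SEAS stimulus table: each target is a
# (left-eye, right-eye) flicker-frequency pair in Hz.
# The actual assignments in the paper are not given in the abstract.
SEAS_TARGETS = {
    "A": (8.0, 13.0),
    "B": (8.0, 14.0),
    "C": (9.0, 13.0),
    "D": (9.0, 14.0),
}

FS = 250          # assumed EEG sampling rate (Hz)
WINDOW_S = 0.8    # decoding window length reported in the abstract (s)

def seas_reference(f_left: float, f_right: float,
                   fs: int = FS, duration: float = WINDOW_S) -> np.ndarray:
    """Expected steady-state response template for one SEAS target,
    modeled as the sum of the two per-eye flicker fundamentals."""
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * f_left * t) + np.sin(2 * np.pi * f_right * t)

# One reference template per target, e.g. for correlation-based scoring
# of a recorded EEG window against each candidate target.
templates = {k: seas_reference(fl, fr) for k, (fl, fr) in SEAS_TARGETS.items()}

Note that with M base frequencies per eye, such pairing yields up to M x M distinct targets, which is how a small stimulus set can scale to the 33 classes reported here.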
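The reported transfer rate is consistent with the standard Wolpaw ITR formula, ITR = (60/T) * [log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))], where N is the number of classes, P the classification accuracy, and T the time per selection. The abstract does not name the formula explicitly, but plugging in its numbers (N = 33, P = 0.7893, T = 0.8 s) reproduces 243.6 bits/min; a quick check under that assumption:

from math import log2

def wolpaw_itr(n_classes: int, accuracy: float, window_s: float) -> float:
    """Wolpaw information transfer rate in bits per minute.

    n_classes: number of selectable targets (N)
    accuracy:  classification accuracy (P), with 0 < P < 1
    window_s:  time per selection in seconds (T)
    """
    p, n = accuracy, n_classes
    bits_per_selection = (log2(n) + p * log2(p)
                          + (1 - p) * log2((1 - p) / (n - 1)))
    return bits_per_selection * 60.0 / window_s

# Numbers from the abstract: 33 classes, 78.93% accuracy at 0.8 s.
print(wolpaw_itr(33, 0.7893, 0.8))   # ~243.6 bits/min

# The 2 s window (91.73% accuracy) gives ~126.6 bits/min, so the
# headline 243.6 bits/min corresponds to the shorter 0.8 s window.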