Abstract
EEG-based brain–computer interface (BCI) systems have demonstrated impressive capability with the broad adoption of deep learning. However, their vulnerability to adversarial sample attacks exposes serious security flaws. Traditional attack strategies from the image domain cannot be applied directly to EEG signals because of their distinctive visualization and dynamic characteristics. To address this challenge, this paper presents a multi-target smooth universal adversarial perturbation (MS-UAP) approach designed specifically for EEG data. To account for the temporal nature of EEG signals and to keep the attack flexible, we build on UAP to define a loss function and optimize its parameters separately for target-class and non-target-class samples. Moreover, to avoid the physiologically implausible square-wave distortions introduced by typical visual-domain attacks, the adversarial samples are smoothed by a convolution operation, making the perturbation harder to detect. Extensive experiments and evaluations on three public EEG datasets show that MS-UAP successfully attacks convolutional neural network (CNN) classifiers, targets several classes simultaneously, and exhibits strong model transferability. MS-UAP generates a single perturbation that precisely attacks the target classes while minimally affecting the other classes, so the attack remains inconspicuous. Because the universal perturbation template is generated offline, it can be seamlessly injected into EEG data to enable effective real-time attacks. This study underscores the importance of improving BCI system security and motivates the development of stronger defenses.
This paper was recommended by Regional Editor Takuro Sato.
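As a rough illustration of the smoothing idea described in the abstract (not the authors' implementation), the sketch below convolves a universal perturbation with a 1-D moving-average kernel before adding it to EEG trials, so the injected signal has no abrupt square-wave edges. All function names, array shapes, kernel length, and the amplitude bound are assumptions for illustration only.

```python
# Minimal sketch, assuming a (channels x samples) perturbation and
# (trials x channels x samples) EEG data; not the paper's exact method.
import numpy as np

def smooth_perturbation(delta: np.ndarray, kernel_len: int = 11) -> np.ndarray:
    """Convolve each EEG channel of the perturbation with a moving-average
    kernel so the added signal varies smoothly over time."""
    kernel = np.ones(kernel_len) / kernel_len              # simple smoothing kernel
    return np.stack([np.convolve(ch, kernel, mode="same") for ch in delta])

def apply_uap(eeg_trials: np.ndarray, delta: np.ndarray, eps: float = 0.2) -> np.ndarray:
    """Add the same (amplitude-bounded) universal perturbation to every trial.
    eeg_trials: (n_trials, n_channels, n_samples); delta: (n_channels, n_samples)."""
    delta = np.clip(delta, -eps, eps)                      # bound perturbation amplitude
    return eeg_trials + delta[None, :, :]                  # broadcast over trials

# Toy usage: 32-channel, 256-sample trials with a random (unoptimized) perturbation.
rng = np.random.default_rng(0)
trials = rng.standard_normal((8, 32, 256))
raw_delta = rng.standard_normal((32, 256)) * 0.5
adv_trials = apply_uap(trials, smooth_perturbation(raw_delta))
```

In the actual attack the perturbation would be optimized offline against target- and non-target-class samples before smoothing; the random `raw_delta` here is only a placeholder for that optimized template.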