PNN for EEG-based Emotion Recognition
Jianhai Zhang, Ming Chen, Sanqing Hu, Senior
Member, IEEE
College of Computer Science
Hangzhou Dianzi University
Hangzhou, China
jhzhang@hdu.edu.cn, cming163@163.com,
sqhu@hdu.edu.cn
Yu Cao, Senior Member, IEEE
Department of Computer Science
The University of Massachusetts Lowell
MA, USA
ycao@cs.uml.edu
Robert Kozma, Fellow, IEEE
Center for Large-Scale Intelligent Optimization and Networks
University of Memphis
Memphis, USA
rkozma@memphis.edu
Abstract—The effort to integrate emotions into human-
computer interaction (HCI) systems has attracted broad
attention. Automatic emotion recognition enables HCI systems
to become more intelligent and user-friendly. Although numerous
studies have been performed in this field, emotion recognition
remains an extremely challenging task, especially in real-world
usage. In this work, the probabilistic neural network (PNN),
which is simple, efficient, and easy to train, was employed to
recognize emotions elicited by watching music videos from scalp
EEG. The publicly available DEAP emotion database was used to
validate our algorithms. The powers of four EEG frequency
bands were extracted as features. The results show that the mean
classification accuracy of PNN is 81.21% for valence (≥5 vs. <5)
and 81.26% for arousal (≥5 vs. <5) across 32 subjects, similar to
the results of SVM. In addition, they demonstrate that higher
frequency bands (beta and gamma) play a more important role in
emotion classification than lower ones (theta and alpha). Toward
a practical emotion recognition system, we propose a ReliefF-based
channel selection algorithm to reduce the number of channels
required. The results show that with PNN, 98% of the maximum
classification accuracy can be obtained with only the 9 (for
valence) and 8 (for arousal) best channels, whereas SVM needs 19
(for valence) and 14 (for arousal) channels.
Keywords—Emotion Recognition; Electroencephalogram (EEG);
Probabilistic Neural Network (PNN); ReliefF; Channel Selection.
I. INTRODUCTION
Emotion, as a psychological and physiological phenomenon,
plays an important role in our social interactions. It was not
until 1995, when Picard proposed “affective computing (AC)”
[1], that the importance of emotion in human-machine
interaction (HMI) began to attract increasing attention. The key
element in affective computing is to detect human emotions
accurately in real time using pattern recognition and machine
learning techniques.
A variety of methods for detecting human emotion from
physical or physiological measurements have been proposed
over the past few decades, based on facial expression, speech,
body gesture, respiration, skin conductance (SC),
electromyogram (EMG), etc. Although encouraging progress
has been made [2-5], these measurements provide only an
indirect mapping of human emotion. Recently,
electroencephalogram (EEG) based emotion recognition has
received increasing attention. In comparison with the
aforementioned measurements, the EEG technique can directly
detect brain dynamics in response to emotional states, so it is
expected to provide more objective and comprehensive
information for emotion recognition. Furthermore, with the
advantages of non-invasiveness, low cost, portability, and high
temporal resolution, EEG-based emotion recognition systems
are feasible for real-world applications.
A large body of research has been performed on EEG-based
emotion recognition. Ishino and Hagiwara [6] employed
neural networks as classifiers to categorize emotional states
based on EEG features, reporting a best average accuracy of
67.7% for recognizing four emotional states. Heraz et al. [7]
classified eight emotional states with an average accuracy of
82.27%, using k-nearest neighbors as the classifier and the
amplitudes of four EEG frequency bands as features. Wang et
al. [8] extracted six kinds of time-domain and frequency-domain
features to classify four emotional states and obtained an
average test accuracy of 66.51% using SVM. Lin et al. [9]
investigated the classification of four emotional states elicited
by listening to music, following a 2-D valence-arousal emotion
model. In their work, SVM and a multilayer perceptron (MLP)
were used to evaluate four frequency-domain feature types.
Other results have been reported by Schaaff and Schultz [10].
In spite of all this progress, EEG-based emotion recognition is
still an extremely challenging task, especially for practical
usage. Two important aspects should be considered in solving
this problem: the first is finding a classifier that is simple,
efficient, and easy to train for different people; the second is
making the emotion recognition system convenient to use.
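One way to make the system convenient is to use fewer electrodes, which is what the ReliefF-based channel selection in the abstract targets. A simplified binary Relief weight computation (a sketch only: the full ReliefF averages over k nearest hits and misses, while this reduces to a single nearest hit and miss; the function name and toy data are illustrative) could look like:

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Simplified binary Relief feature weighting.

    For randomly sampled instances, reward features that differ on the
    nearest instance of the other class (miss) and penalize features that
    differ on the nearest instance of the same class (hit). Higher weight
    means the feature (here: a channel's band power) separates classes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        xi, yi = X[i], y[i]
        dists = np.sum(np.abs(X - xi), axis=1)
        dists[i] = np.inf  # exclude the sampled instance itself
        same = np.where(y == yi)[0]
        diff = np.where(y != yi)[0]
        hit = same[np.argmin(dists[same])]
        miss = diff[np.argmin(dists[diff])]
        w += np.abs(X[miss] - xi) - np.abs(X[hit] - xi)
    return w / n_iter
```

Channels would then be ranked by their aggregated feature weights and the top-ranked ones retained, trading a small accuracy loss for a far more wearable setup.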
2016 IEEE International Conference on Systems, Man, and Cybernetics • SMC 2016 | October 9-12, 2016 • Budapest, Hungary
978-1-5090-1897-0/16/$31.00 ©2016 IEEE