ISSN: 2153-0637
Roman Tankelevich
This research concerns cognitive modeling and emotion recognition based on collections of real-time human brain waves (EEG). In this talk, we present the current status of the research field as it is known to us and discuss the two-phase modeling approach under consideration in our study. Raw temporal EEG data are collected from human subjects performing a set of given tasks. The data are analyzed using standard signal processing techniques as well as more specific methods, such as correlation, wavelet, and fractal analysis. The EEG electrodes are placed on the scalp at locations specified by the American Electroencephalographic Society standard (in the experimental part of our study, we use the Emotiv wireless headset with 14 electrodes positioned according to this standard). The collected EEG data, corresponding to different cognitive states and emotional responses of the human subjects, are used to feed supervised learning models. The research relies on Deep Learning models with different levels of depth. The task is to identify the most effective parameters of the learning procedure that would allow reliable recognition of the emotions considered to be basic, namely anger, disgust, fear, happiness, sadness, and surprise, as well as their derivatives: amusement, contempt, contentment, embarrassment, excitement, guilt, pride in achievement, relief, satisfaction, sensory pleasure, and shame. The EEG potentials reflect, indirectly, the spatially distributed neuronal signals induced by human cognitive and emotional activities. We suggest that such localized (voxel-based) data will be a better input for the Deep Learning model than the surface cranial measurements, since it should provide a larger dimensionality for supervised learning. Thus, a two-phase model is introduced: the first phase concerns the solution of a large-scale inverse problem requiring a Big Data representation, while in the second phase the Deep Learning model is used with the output of the first phase as its input. Assuming a model with a given number of cranial measurements and a number of voxels obtained by uniformly subdividing the brain volume, the task is to find the density of the localized currents as a solution of the inverse problem. The method known as LORETA is considered to be useful for solving this inverse problem. We are working towards specific applications of human emotion recognition, such as Brain-Computer Interfaces (BCI) and the evaluation of educational materials and presentations. The methodology and the results obtained so far will be presented.
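As a brief illustration of the signal-processing stage, the Python sketch below extracts wavelet sub-band energies from one 14-channel EEG window; the sampling rate, window length, and wavelet family are illustrative assumptions, not values taken from the study.

import numpy as np
import pywt  # PyWavelets

# Illustrative assumptions (not specified in the abstract):
# 14 channels (Emotiv headset), 128 Hz sampling, 2-second windows, 'db4' wavelet.
N_CHANNELS = 14
FS = 128
WINDOW = 2 * FS

def wavelet_features(segment, wavelet="db4", level=4):
    """Return log-energies of the wavelet sub-bands of each EEG channel.

    segment has shape (N_CHANNELS, WINDOW). The multilevel discrete wavelet
    transform splits each channel into approximation and detail coefficients
    that roughly correspond to the conventional EEG frequency bands.
    """
    features = []
    for channel in segment:
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        # One log-energy value per sub-band, per channel.
        features.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)
    return np.asarray(features)

# A random placeholder segment stands in for a real recording.
segment = np.random.randn(N_CHANNELS, WINDOW)
print(wavelet_features(segment).shape)  # (N_CHANNELS * (level + 1),) = (70,)

Features of this kind, together with correlation and fractal measures, would form one possible input representation for the supervised learning stage.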
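For the first phase, a minimal sketch of the linear formulation on which LORETA-type inverse solvers are based is given below in LaTeX notation; the symbols follow common usage in the EEG source-localization literature and are not defined in the abstract itself. With $V \in \mathbb{R}^{N_e}$ the measured scalp potentials ($N_e = 14$ here), $J \in \mathbb{R}^{3N_v}$ the unknown current densities of the $N_v$ voxels, and $K$ the lead-field matrix,

$$ V = KJ + \varepsilon, \qquad 3N_v \gg N_e, $$

so the problem is severely underdetermined. A regularized minimum-norm estimate of the kind that LORETA specializes, with $B$ a discrete spatial Laplacian enforcing smoothness and $\alpha > 0$ a regularization parameter, is

$$ \hat{J} = \underset{J}{\arg\min}\; \lVert V - KJ \rVert^{2} + \alpha \lVert BJ \rVert^{2} = \left( K^{\top}K + \alpha B^{\top}B \right)^{-1} K^{\top} V. $$

The resulting voxel-wise current densities $\hat{J}$ are what the second phase consumes as its high-dimensional input.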
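For the second phase, the following Python sketch (using PyTorch) shows one possible fully connected classifier over the voxel-based input; the layer sizes, voxel count, and restriction to the six basic emotions are illustrative assumptions rather than parameters reported in the talk.

import torch
import torch.nn as nn

# Illustrative assumptions (not taken from the abstract): the input is a
# flattened vector of voxel current densities produced by the inverse-problem
# phase, and the output covers the six basic emotions listed in the talk.
N_VOXEL_FEATURES = 6000   # hypothetical voxel-grid size
N_EMOTIONS = 6            # anger, disgust, fear, happiness, sadness, surprise

# A small fully connected network; the "different levels of depth" mentioned
# in the abstract would be explored by varying the number of hidden layers.
model = nn.Sequential(
    nn.Linear(N_VOXEL_FEATURES, 512), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, N_EMOTIONS),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on placeholder data.
x = torch.randn(32, N_VOXEL_FEATURES)    # batch of voxel-based inputs
y = torch.randint(0, N_EMOTIONS, (32,))  # emotion labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

Deeper variants of the same architecture, or networks operating directly on the voxel grid, would be compared when searching for the most effective learning parameters.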