Universidad EAFIT
Carrera 49 # 7 sur -50 Medellín Antioquia Colombia
Carrera 12 # 96-23, oficina 304 Bogotá Cundinamarca Colombia
(57)(4) 2619500 contacto@eafit.edu.co

Analysis of Emotional Signals


It is difficult to establish a precise definition of what an emotion is, but an emerging definition holds that emotions are mental states.[4] These mental states are brain responses to internal and external stimuli, such as listening to an orchestra, interacting with other human beings, or sudden psychological and neuroendocrine changes, like recalling an event or an image stored in our mind.[52] Emotional states are manifested, like other mental experiences, as the result of nerve activity in the brain,[4] so it can be inferred that an emotional state can be observed as a pattern in EEG signals.

Therefore, a joint effort was undertaken with the Medical Technology Laboratory (GATEME) of Universidad Nacional de San Juan, Argentina, and with the Psychology, Education and Culture Research Group of Institución Universitaria Politécnico Grancolombiano, Colombia. These efforts focus on studying pattern recognition of emotional states within electroencephalographic signals.

We also know how to work with voice

An approach to emotion recognition

First, a study of evoked emotions in mother-child dyads was conducted by the Psychology, Education and Culture Research Group. The study involved 8 subjects: 4 women (mothers) and 4 children (3 boys and 1 girl) with a mean age of 22 months. The following protocol was used: each mother was first asked to record, in a separate room, the happiest moment of her life for the happiness stimulus and the saddest moment of her life for the sadness stimulus. Each dyad was then placed face to face; the mother wore headphones and listened, in each case, to the story she had previously recorded, evoking in herself the feeling of happiness or sadness as appropriate, and thereby evoking the emotion in her child, who was staring at her. The state of neutrality was recorded before each emotion-evocation session.



A simple graphical interface was designed that allows the analysis to be replicated with different signals.


A Wavelet Analysis

Then, with the help of the Systems Engineering Research Group ARKADIUS of Universidad de Medellín, we went a step further in studying the behavior of these signals: we decomposed them using the wavelet transform to detect the emotional states of happiness and sadness, reducing the size of the data to analyze.
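The idea of reducing the data through wavelet decomposition can be sketched as follows. This is a minimal illustration, not the group's actual pipeline: it uses a hand-rolled one-level Haar transform (in practice a library such as PyWavelets with a Daubechies wavelet would likely be used) and keeps only per-band energies as compact features for an EEG epoch.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns approximation (low-pass) and detail (high-pass)
    coefficients, each half the length of the input."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                      # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def multilevel_features(signal, levels=4):
    """Decompose `levels` times, keeping one energy value per band:
    a compact representation of the whole epoch."""
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))    # energy of each detail band
    feats.append(np.sum(a ** 2))        # energy of final approximation
    return np.array(feats)

# a 1-second epoch sampled at 128 Hz is reduced to 5 numbers
epoch = np.sin(2 * np.pi * 10 * np.arange(128) / 128.0)
print(multilevel_features(epoch).shape)  # (5,)
```

Because the Haar transform is orthonormal, the band energies partition the signal energy, so little discriminative information is lost despite the large reduction in size.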

After verifying the good performance of the wavelet-transform analysis, we added the emotional state of neutrality to our study, this time using classic classifiers such as QDA, KNN and RFC, and obtained an average classification accuracy of 87% for the three emotional states.
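Comparing those three classic classifiers can be sketched with scikit-learn. The synthetic features below merely stand in for the real wavelet features of the three emotional states; the classifier names (QDA, KNN, random forest) match those mentioned above, but all parameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for wavelet features of three emotional
# states (happiness, sadness, neutrality); shapes are illustrative.
X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=6, n_classes=3,
                           random_state=0)

classifiers = {
    "QDA": QuadraticDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RFC": RandomForestClassifier(n_estimators=100, random_state=0),
}

# 5-fold cross-validated accuracy for each classifier
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

Cross-validation gives the per-classifier average accuracy; averaging those in turn is one way to arrive at a single summary figure like the 87% reported above.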


More emotions and more channels

Seeking to evaluate the performance of our techniques on more emotional states, we worked with the HCI Tagging database, which includes EEG recordings with 34 channels and 9 emotional states. We restricted the database to simulate the behavior of the Emotiv EPOC headset and evaluated our algorithms on this problem, obtaining an average classification accuracy of 88% for the nine emotional states.
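Restricting a high-density recording to the channels of a consumer headset amounts to selecting a subset of rows by electrode label. The sketch below assumes the commonly cited 14-electrode EPOC montage; the exact labels and the 34-channel layout used here are illustrative, not taken from the study itself.

```python
import numpy as np

# The 14 electrode positions usually listed for the Emotiv EPOC
# (10-20 system); treat this montage as an assumption.
EPOC_CHANNELS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
                 "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]

def restrict_to_epoc(eeg, channel_names):
    """Keep only the rows of a (channels x samples) recording whose
    labels match an EPOC electrode, preserving EPOC order."""
    index = {name: i for i, name in enumerate(channel_names)}
    rows = [index[ch] for ch in EPOC_CHANNELS if ch in index]
    return eeg[rows, :]

# hypothetical 34-channel recording: 5 s at 256 Hz
names_34 = EPOC_CHANNELS + [f"EXT{i}" for i in range(20)]
recording = np.random.randn(34, 5 * 256)
print(restrict_to_epoc(recording, names_34).shape)  # (14, 1280)
```

Running the same classification pipeline on the reduced montage then shows how much accuracy the cheaper hardware would cost.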

Our next goal is to create our own database, for which EmotivUI was designed.


Brain + Voice + Face = Emotions

Understanding psychological phenomena such as emotions is a particular need for psychologists, both to recognize a pathology and to prescribe a treatment for a patient. However, recognizing emotions in people can be a real challenge when they are not in optimal conditions to communicate how they feel, as when dealing with children who do not yet talk, elderly people, or adults who have suffered some disability. To address this problem, mathematics and the computational sciences have proposed different techniques for emotion recognition from human physiological signals such as voice, electroencephalography, facial expression, temperature and heart rate. The Mathematical Modelling research group of EAFIT has developed algorithms for emotion recognition using each of these signals, and the main goal of this research is to carry out a deep study of the research background of GRIMMAT and the unimodal approaches (voice, temperature, etc.) implemented in recent years, in order to develop a multimodal methodology that integrates the single models, thus improving accuracy in the recognition task.
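One simple way to integrate unimodal models, shown here as a hedged sketch rather than the group's actual method, is late fusion: each modality produces class probabilities, and a weighted average picks the final label. The modality names and probability values below are hypothetical.

```python
import numpy as np

def late_fusion(prob_maps, weights=None):
    """Combine per-modality class-probability vectors by a weighted
    average and return (winning class index, fused probabilities)."""
    probs = np.vstack(list(prob_maps.values()))   # (modalities, classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(fused)), fused

# hypothetical posteriors over (happy, sad, neutral)
unimodal = {
    "eeg":   np.array([0.6, 0.3, 0.1]),
    "voice": np.array([0.4, 0.5, 0.1]),
    "face":  np.array([0.7, 0.2, 0.1]),
}
label, fused = late_fusion(unimodal)
print(label, fused.round(2))
```

Here the voice model alone would say "sad", but the fused estimate favors "happy" because two of the three modalities agree, which is exactly the robustness a multimodal methodology is after.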




Academic Products

A. Gómez, G. Mejía y O. Quintero (2016). Reconocimiento de emociones utilizando la transformada wavelet estacionaria en señales EEG multicanal. VII Congreso Latinoamericano de Ingeniería Biomédica (CLAIB).

A. Gómez, G. Mejía y O. Quintero (2016). Emotion recognition in single-channel EEG signals using stationary wavelet transform. VII Congreso Latinoamericano de Ingeniería Biomédica (CLAIB).

S. Mejía M., O. L. Quintero M., J. Castro M. (2016). Dynamic Analysis of Emotions through Artificial Intelligence. Avances en Psicología Latinoamericana, ISSN 1794-4724.

D. Campo, O. L. Quintero, M. Bastidas (2016). Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech. Journal of Physics: Conference Series.

D. Sierra-Sosa, M. Bastidas, D. Ortiz P., and O. L. Quintero (2016). Double Fourier Analysis for Emotion Identification in Voiced Speech. Journal of Physics: Conference Series.

A. Gómez, L. Quintero, N. López and J. Castro (2016). An approach to emotion recognition in single-channel EEG signals: a mother-child interaction. Journal of Physics: Conference Series.

D. Ortiz, L. Villa, C. Salazar, O. Quintero (2016). A simple but efficient voice activity detection algorithm through Hilbert transform and dynamic threshold for speech pathologies. Journal of Physics: Conference Series.

 

 
 


Last modified: 20/09/2017 16:24