A Temporal Approach to Facial Emotion Expression Recognition

Christine Asaju, Hima Vadapalli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

Systems embedded with facial emotion expression recognition models enable the application of emotion-related knowledge to improve human-computer interaction and, in doing so, give users a more satisfying experience. Facial expressions exhibited by individuals are mostly used as non-verbal cues of communication. It is envisaged that accurate and real-time estimation of expressions and/or emotional changes will improve existing online platforms. Further mapping of estimated expressions to emotions is highly useful in many applications, such as sentiment analysis, market analysis, and gauging student comprehension. Feedback based on estimated emotions plays a crucial role in improving the usability of such models; however, few existing models incorporate a feedback mechanism. The proposed work therefore investigates the use of deep learning to identify and estimate emotional changes in human faces, and further analyses the estimated emotions to provide feedback. The methodology follows a temporal approach comprising a pre-trained VGG-19 network for feature extraction, a BiLSTM architecture for facial emotion expression recognition, and mapping criteria that map estimated expressions to a resultant emotion (positive, negative, or neutral). The CNN-BiLSTM model achieved an accuracy of 91% on a test set covering the seven basic expressions of anger, disgust, fear, happiness, surprise, sadness, and neutral from the Denver Intensity of Spontaneous Facial Action (DISFA) data. The Dataset for Affective States in E-Environments (DAiSEE), labelled with boredom, frustration, confusion, and engagement, was used to further test the proposed model's estimation of the seven basic expressions and to re-evaluate the mapping model used for mapping expressions to emotions.
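The abstract's pipeline (per-frame VGG-19 features, a BiLSTM over the frame sequence, and a final expression-to-emotion mapping) can be illustrated with a minimal sketch. This is not the authors' code: the clip length, LSTM width, and the expression-to-emotion mapping table below are illustrative assumptions, and the paper's exact mapping criteria may differ.

```python
# Minimal sketch of a VGG-19 + BiLSTM expression pipeline (assumed parameters,
# not the authors' implementation): frame-level features from a frozen
# pre-trained VGG-19 feed a BiLSTM that classifies a clip into one of the
# seven basic expressions, which is then mapped to positive/negative/neutral.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

SEQ_LEN, IMG_SIZE, N_EXPR = 16, 224, 7  # assumed clip length and frame size
EXPRESSIONS = ["anger", "disgust", "fear", "happy",
               "surprise", "sadness", "neutral"]
# Assumed mapping from expression to coarse emotion (illustrative only).
TO_EMOTION = {"happy": "positive", "surprise": "positive",
              "anger": "negative", "disgust": "negative",
              "fear": "negative", "sadness": "negative",
              "neutral": "neutral"}

# Frozen VGG-19 backbone used purely as a per-frame feature extractor.
backbone = VGG19(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(IMG_SIZE, IMG_SIZE, 3))
backbone.trainable = False

# TimeDistributed applies the backbone to every frame; the BiLSTM models the
# temporal dynamics across the clip before a softmax over the 7 expressions.
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, IMG_SIZE, IMG_SIZE, 3)),
    layers.TimeDistributed(backbone),
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(N_EXPR, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

def predict_emotion(clip: np.ndarray) -> str:
    """clip: (SEQ_LEN, IMG_SIZE, IMG_SIZE, 3) array of cropped face frames."""
    probs = model.predict(clip[np.newaxis], verbose=0)[0]
    return TO_EMOTION[EXPRESSIONS[int(np.argmax(probs))]]
```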

Original language: English
Title of host publication: Artificial Intelligence Research - 2nd Southern African Conference, SACAIR 2021, Proceedings
Editors: Edgar Jembere, Aurona J. Gerber, Serestina Viriri, Anban Pillay
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 274-286
Number of pages: 13
ISBN (Print): 9783030950699
DOIs
Publication status: Published - 2022
Externally published: Yes
Event: 2nd Southern African Conference on Artificial Intelligence Research, SACAIR 2021 - Virtual, Online
Duration: 6 Dec 2021 - 10 Dec 2021

Publication series

Name: Communications in Computer and Information Science
Volume: 1551 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 2nd Southern African Conference on Artificial Intelligence Research, SACAIR 2021
City: Virtual, Online
Period: 6/12/21 - 10/12/21

Keywords

  • Deep learning
  • Emotion estimation
  • Expression to emotion mapping
  • Facial emotion expression recognition

ASJC Scopus subject areas

  • General Computer Science
  • General Mathematics
