TY - GEN
T1 - A Temporal Approach to Facial Emotion Expression Recognition
AU - Asaju, Christine
AU - Vadapalli, Hima
N1 - Publisher Copyright:
© 2022, Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
AB - Systems embedded with facial emotion expression recognition models can apply emotion-related knowledge to improve human-computer interaction and, in doing so, give users a more satisfying experience. Facial expressions are among the most widely used non-verbal cues in communication, and accurate, real-time estimation of expressions and/or emotional changes is expected to improve existing online platforms. Mapping estimated expressions to emotions is also highly useful in applications such as sentiment analysis, market analysis, and student comprehension, among others. Feedback based on estimated emotions plays a crucial role in improving the usability of such models, yet few, if any, existing models incorporate feedback mechanisms. The proposed work therefore investigates the use of deep learning to identify and estimate emotional changes in human faces, and further analyses the estimated emotions to provide feedback. The methodology follows a temporal approach comprising a pre-trained VGG-19 network for feature extraction, a BiLSTM architecture for facial emotion expression recognition, and mapping criteria that map estimated expressions to the resultant emotion (positive, negative, or neutral). The CNN-BiLSTM model achieved an accuracy of 91% on a test set covering the seven basic expressions of anger, disgust, fear, happiness, surprise, sadness, and neutral from the Denver Intensity of Spontaneous Facial Action (DISFA) dataset. The Dataset for Affective States in E-Environments (DAiSEE), labeled with boredom, frustration, confusion, and engagement, was used to further test the proposed model on estimating the seven basic expressions and to re-evaluate the mapping model used to map expressions to emotions.
KW - Deep learning
KW - Emotion estimation
KW - Expression to emotion mapping
KW - Facial emotion expression recognition
UR - http://www.scopus.com/inward/record.url?scp=85125279938&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-95070-5_18
DO - 10.1007/978-3-030-95070-5_18
M3 - Conference contribution
AN - SCOPUS:85125279938
SN - 9783030950699
T3 - Communications in Computer and Information Science
SP - 274
EP - 286
BT - Artificial Intelligence Research - 2nd Southern African Conference, SACAIR 2021, Proceedings
A2 - Jembere, Edgar
A2 - Gerber, Aurona J.
A2 - Viriri, Serestina
A2 - Pillay, Anban
PB - Springer Science and Business Media Deutschland GmbH
T2 - 2nd Southern African Conference on Artificial Intelligence Research, SACAIR 2021
Y2 - 6 December 2021 through 10 December 2021
ER -