LSE Research Online

Robust facial reactions generation: an emotion-aware framework with modality compensation

Hu, Guanyu, Wei, Jie, Song, Siyang, Kollias, Dimitrios, Yang, Xinyu, Sun, Zhonglin and Kaloidas, Odysseus (2024) Robust facial reactions generation: an emotion-aware framework with modality compensation. In: Proceedings of the 2024 IEEE International Joint Conference on Biometrics (IJCB 2024). IEEE. ISBN 9798350364132

Full text not available from this repository.

Identification Number (DOI): 10.1109/IJCB62174.2024.10744499

Abstract

The objective of the Multiple Appropriate Facial Reaction Generation (MAFRG) task is to produce contextually appropriate and diverse listener facial behavioural responses based on the multimodal behavioural data of the conversational partner (i.e., the speaker). Current methodologies typically assume continuous availability of speech and facial modality data, neglecting real-world scenarios where these data may be intermittently unavailable, which often causes model failures. Furthermore, although existing methods employ advanced deep learning models to extract information from the speaker's multimodal inputs, they fail to adequately leverage the speaker's emotional context, which is vital for eliciting appropriate facial reactions from human listeners. To address these limitations, we propose an Emotion-aware Modality Compensatory (EMC) framework. This versatile solution can be seamlessly integrated into existing models, preserving their advantages while significantly enhancing performance and robustness in scenarios with missing modalities. Our framework ensures resilience to missing modality data through the Compensatory Modality Alignment (CMA) module. It also generates more appropriate emotion-aware reactions via the Emotion-aware Attention (EA) module, which incorporates the speaker's emotional information throughout the entire encoding and decoding process. Experimental results demonstrate that our framework improves the appropriateness metric FRCorr by an average of 57.2% compared to the original model structure. In scenarios where the speech modality is missing, appropriateness performance even improves, and when facial data is missing it exhibits only minimal degradation.
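The record gives no implementation details, but the two modules the abstract names lend themselves to a brief sketch. The following is a minimal, hypothetical PyTorch illustration of the general ideas only: a CMA-style module that substitutes a learned, aligned embedding when a modality's features are missing, and an EA-style block that injects a speaker-emotion embedding into cross-attention. All class names, tensor shapes, and the emotion-ID interface are assumptions made for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    from typing import Optional

    class CompensatoryModalityAlignment(nn.Module):
        """Hypothetical CMA-style module: if a modality's features are
        absent, return a learned compensation sequence in the shared
        feature space instead (assumed design)."""

        def __init__(self, dim: int, seq_len: int):
            super().__init__()
            # Learned placeholder sequence used when the modality is missing.
            self.compensation = nn.Parameter(torch.zeros(1, seq_len, dim))
            nn.init.normal_(self.compensation, std=0.02)

        def forward(self, feats: Optional[torch.Tensor], batch: int) -> torch.Tensor:
            if feats is None:  # e.g. audio dropout at inference time
                return self.compensation.expand(batch, -1, -1)
            return feats

    class EmotionAwareAttention(nn.Module):
        """Hypothetical EA-style block: cross-attention whose queries are
        conditioned on an embedding of the speaker's emotion."""

        def __init__(self, dim: int, n_heads: int = 4, n_emotions: int = 8):
            super().__init__()
            self.emotion_emb = nn.Embedding(n_emotions, dim)
            self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, queries, context, emotion_ids):
            # Inject the speaker's emotion into the query stream.
            q = queries + self.emotion_emb(emotion_ids).unsqueeze(1)
            out, _ = self.attn(q, context, context)
            return self.norm(queries + out)

    # Usage sketch: fuse speech and facial streams, tolerating a missing one.
    dim, T, B = 128, 50, 2
    cma_speech = CompensatoryModalityAlignment(dim, T)
    cma_face = CompensatoryModalityAlignment(dim, T)
    ea = EmotionAwareAttention(dim)

    face = torch.randn(B, T, dim)
    speech = None  # speech modality unavailable for this clip
    fused = torch.cat([cma_speech(speech, B), cma_face(face, B)], dim=1)
    listener_q = torch.randn(B, T, dim)
    emotion_ids = torch.randint(0, 8, (B,))
    reaction_feats = ea(listener_q, fused, emotion_ids)
    print(reaction_feats.shape)  # torch.Size([2, 50, 128])

The key design point this sketch tries to capture is that the compensation embedding lives in the same aligned feature space as the real modality features, so downstream attention layers need no architectural change when a modality drops out.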

Item Type: Book Section
Additional Information: © 2024 IEEE.
Divisions: LSE
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
B Philosophy. Psychology. Religion > BF Psychology
Date Deposited: 19 Dec 2024 09:45
Last Modified: 19 Dec 2024 09:45
URI: http://eprints.lse.ac.uk/id/eprint/126506
