The Multi-modal Multiple Appropriate Facial Reaction Generation Challeng...
Recognising continuous emotions and action unit (AU) intensities from fa...
Automatically recognising apparent emotions from face and voice is hard,...
In this paper, we present our submission to the 3rd Affective Behavior Analy...
Video-based automatic depression analysis provides a fast, objective and...
This approach builds on the following two findings in cognitive science: (i)...
Temporal context is key to the recognition of expressions of emotion. Ex...
This paper addresses a major flaw of the cycle consistency loss when use...
Action Units (AUs) are geometrically-based atomic facial muscle movement...
The EmoPain 2020 Challenge is the first international competition aimed ...
The Audio/Visual Emotion Challenge and Workshop (AVEC 2019) "State-of-Mi...
Facial actions are spatio-temporal signals by nature, and therefore thei...
Generative Adversarial Networks have shown impressive results for the ta...
This paper proposes a supervised learning approach to jointly perform fa...
The performance of speaker-related systems usually degrades heavily in p...
Automatic continuous time, continuous value assessment of a patient's pa...
Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Diso...
Linear regression is a fundamental building block in many face detection...
This paper proposes a CNN cascade for semantic part segmentation guided ...
This paper introduces a novel real-time algorithm for facial landmark tr...