An expressive embodied conversational agent system, in particular, a model of multimodal behaviors that includes dynamism and complex facial expressions
Many real-time interactive systems feature virtual anthropomorphic characters to simulate conversing groups and add plausibility and believability to the simulated environments.
These characters, known as embodied conversational agents (ECAs), can communicate with virtual or real interlocutors through verbal and non-verbal means.
Believable nonverbal behaviors for ECAs can create a more immersive experience for users and improve the effectiveness of communication. Existing models usually cover the six prototypical emotional expressions: anger, disgust, fear, happiness, sadness, and surprise. However, relying solely on these basic emotions can lead to caricatural ECA behaviors.
The proposed solution is an expressive embodied conversational agent system, in particular a model of multimodal behaviors that incorporates dynamism and complex facial expressions.
It generates nonverbal behaviors for ECAs (e.g., gestures, facial expressions, proxemics) according to the interpersonal attitudes the agents want to express within a group while talking.
Through methodologies based on corpus analysis, user-centered design, and motion capture, the agent’s palette of multimodal behaviors has been enriched with social attitudes, multimodal expressive skills, and active listening skills.
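To make the attitude-to-behavior mapping concrete, here is a minimal sketch of how interpersonal attitudes could parameterize multimodal behaviors. All names, dimensions, and value ranges below (the `Attitude` dataclass, the friendliness/dominance axes, the specific output parameters) are illustrative assumptions for this sketch, not the actual system's model or API.

```python
from dataclasses import dataclass

# Hypothetical sketch: attitude dimensions and behavior parameters are
# assumptions, not the system's actual representation.

@dataclass
class Attitude:
    friendliness: float  # -1 (hostile) .. +1 (friendly)
    dominance: float     # -1 (submissive) .. +1 (dominant)

def select_behaviors(attitude: Attitude) -> dict:
    """Map an interpersonal attitude to coarse multimodal behavior settings."""
    return {
        # facial expression: friendlier attitude, stronger smile
        "smile_intensity": max(0.0, attitude.friendliness),
        # gesture: more dominant attitude, wider gestures
        "gesture_amplitude": 0.5 + 0.5 * attitude.dominance,
        # proxemics: friendlier attitude, smaller interpersonal distance
        "interpersonal_distance_m": 1.2 - 0.4 * attitude.friendliness,
        # gaze: more dominant attitude, longer gaze holding
        "gaze_holding": 0.5 + 0.5 * attitude.dominance,
    }

# A friendly but slightly submissive attitude within the group
behaviors = select_behaviors(Attitude(friendliness=0.8, dominance=-0.2))
```

A full system would of course drive continuous animation rather than scalar settings, but the sketch shows the core idea: the same utterance can be accompanied by different gestures, expressions, and proxemics depending on the attitude the agent intends to convey.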