
An Emotion-based Facial Editing Method for Animators

thesis
posted on 2025-09-23, 15:16 authored by Sihang Chen
<p><strong>Facial animation plays a crucial role in conveying emotion in digital characters, supporting narrative engagement across film, animation, games, and virtual media. However, current animation pipelines offer limited control over the emotional quality of facial motion once it has been generated, especially when that motion is tightly coupled with constraints such as dialogue-driven lip synchronisation. This project proposes a novel approach to facial animation editing that manipulates expression at the emotion level, enabling a more structured and semantically informed way of analysing and modifying facial animations.</strong></p><p>At the core of our approach are two key technical contributions. The first is a vision-language model (VLM)-based facial emotion annotation framework, which provides rich, context-sensitive emotional labelling of facial expressions. Traditional approaches to emotion classification often rely on a small set of categories (e.g., Ekman’s six basic emotions), which can be overly simplistic for the nuanced and dynamic range of affective states found in expressive performances. By leveraging the power of VLMs, the framework interprets facial expressions not only from the facial image itself but also from cues in the image that reflect narrative and situational context. This yields more contextual annotations that better align with intended and perceived emotion.</p><p>The second contribution is a method for emotion-content disentanglement in facial motion data. In many production situations, facial animation must adhere to strict constraints, such as lip synchronisation with recorded dialogue or fidelity to a pre-captured performance. Such constraints frequently inhibit further adjustment of emotional style, because direct edits can introduce artefacts or disrupt temporal alignment. By decoupling emotional features from content features, our method allows emotional editing without disrupting the original facial animation content. This disentanglement is achieved via a learning-based codec framework that maps facial motion into a latent space with independent emotion and content dimensions.</p><p>The resulting workflow enables animators to explore a wide array of emotional variations. More broadly, it bridges the traditionally separate domains of affective computing and animation control. By embedding emotional understanding into the animation editing process, it has implications not only for artistic production but also for areas such as virtual agents and interactive emotional AI.</p>
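The editing operation the abstract describes can be illustrated with a minimal sketch: encode a motion clip into separate emotion and content codes, then decode with the emotion code swapped out. This is only a toy, assuming random linear maps in place of the thesis's learned codec; all dimensions and names below are illustrative, not taken from the work itself.

```python
import numpy as np

rng = np.random.default_rng(0)

MOTION_DIM = 64    # per-frame facial motion features (e.g. blendshape weights)
EMOTION_DIM = 8    # latent dimensions reserved for emotional style
CONTENT_DIM = 24   # latent dimensions reserved for content (e.g. lip motion)

# Stand-in encoder/decoder weights; in the real system these are learned.
W_enc = rng.standard_normal((MOTION_DIM, EMOTION_DIM + CONTENT_DIM)) * 0.1
W_dec = rng.standard_normal((EMOTION_DIM + CONTENT_DIM, MOTION_DIM)) * 0.1

def encode(motion):
    """Map a (frames, MOTION_DIM) clip to separate emotion and content codes."""
    z = motion @ W_enc
    return z[:, :EMOTION_DIM], z[:, EMOTION_DIM:]

def decode(emotion, content):
    """Reassemble a motion clip from emotion and content codes."""
    return np.concatenate([emotion, content], axis=1) @ W_dec

# Emotion editing: keep the content code (lip sync) from clip A,
# but borrow the emotion code from clip B.
clip_a = rng.standard_normal((120, MOTION_DIM))  # 120 frames, neutral take
clip_b = rng.standard_normal((120, MOTION_DIM))  # 120 frames, angry take

emo_a, con_a = encode(clip_a)
emo_b, _ = encode(clip_b)
edited = decode(emo_b, con_a)  # clip B's emotion, clip A's lip content
```

Because the emotion and content codes occupy independent latent dimensions, the swap leaves the content code untouched, which is the property that lets dialogue-constrained lip motion survive an emotional edit.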

History

Copyright Date

2025-09-24

Date of Award

2025-09-24

Publisher

Te Herenga Waka—Victoria University of Wellington

Rights License

Author Retains Copyright

Degree Discipline

Design Innovation

Degree Grantor

Te Herenga Waka—Victoria University of Wellington

Degree Level

Masters

Degree Name

Master of Design Innovation

ANZSRC Type Of Activity code

3 Applied research

Victoria University of Wellington Item Type

Awarded Research Masters Thesis

Language

en_NZ

Victoria University of Wellington School

School of Design Innovation

Advisors

Romond, Kevin