Open Access Te Herenga Waka-Victoria University of Wellington

Expressive Speech-driven Facial Animation with controllable emotions

conference contribution
posted on 2023-02-12, 20:26 authored by Yutong Chen, Junhong ZhaoJunhong Zhao, Wei-Qiang Zhang
Generating facial animation with high realism is in high demand, but it remains a challenging task. Existing speech-driven facial animation approaches can produce satisfactory mouth movement and lip synchronization, but fall short in dramatic emotional expression and flexible emotion control. This paper presents a novel deep learning-based approach for generating expressive facial animation from speech that can exhibit wide-spectrum facial expressions with controllable emotion type and intensity. We propose an emotion controller module to learn the relationship between emotion variations (e.g., type and intensity) and the corresponding facial expression parameters. It enables emotion-controllable facial animation, where the target expression can be continuously adjusted as desired. Qualitative and quantitative evaluations show that the animation generated by our method is rich in facial emotional expressiveness while retaining accurate lip movement, outperforming other state-of-the-art methods.
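As a rough illustration of the idea described in the abstract (this is not the paper's actual architecture; all dimensions, weights, and the `emotion_controller` function are hypothetical), an emotion controller can be sketched as a mapping from per-frame speech features plus an emotion embedding, scaled by a continuous intensity value, to facial expression parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
AUDIO_DIM = 64   # per-frame speech feature size
EMO_TYPES = 4    # e.g. neutral, happy, sad, angry
EMO_DIM = 16     # emotion embedding size
EXPR_DIM = 52    # facial expression (blendshape) parameters

# Stand-ins for learned weights, randomly initialised here.
emotion_table = rng.normal(size=(EMO_TYPES, EMO_DIM))
W = rng.normal(size=(AUDIO_DIM + EMO_DIM, EXPR_DIM)) * 0.1

def emotion_controller(audio_feat, emo_type, intensity):
    """Map speech features and an (emotion type, intensity) pair to
    expression parameters. Scaling the emotion embedding by intensity
    lets the target expression be adjusted continuously."""
    emo_vec = intensity * emotion_table[emo_type]
    x = np.concatenate([audio_feat, emo_vec])
    return np.tanh(x @ W)

audio = rng.normal(size=AUDIO_DIM)
neutral = emotion_controller(audio, emo_type=1, intensity=0.0)
strong = emotion_controller(audio, emo_type=1, intensity=1.0)
```

At intensity 0 the emotion embedding vanishes, so all emotion types collapse to the same neutral output driven purely by the speech features; raising the intensity moves the expression continuously toward the chosen emotion.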

History

Preferred citation

Chen, Y., Zhao, J. & Zhang, W. -Q. (n.d.). Expressive Speech-driven Facial Animation with controllable emotions.

Contribution type

Published Paper

Publication status

Published online
