2017 Poster Sessions: Tracking and Retargeting Facial Performances with a Data-Augmented Anatomical Prior

Student Name: Michael Bao
Advisor: Ron Fedkiw
Research Areas: Graphics/HCI
Blendshape rigs are crucial to state-of-the-art facial animation systems at high-end visual effects studios. These models are used as geometric priors for tracking facial performances and are critical for retargeting a performance from an actor/actress to a digital character. However, the limited, linear, and non-physical nature of the blendshape system causes many inaccuracies, resulting in an "uncanny valley"-esque performance. We instead propose the use of an anatomically-based model. The anatomical model can be used to track the facial performance given by the actor/actress; unlike blendshapes, however, the non-linear, simulation-based anatomical model preserves physical properties such as volume, yielding superior results. The model can furthermore be augmented with captured data to better reproduce the subtle nuances of the face. Facial performances captured on the anatomical model can then be easily transferred, in a semantically meaningful manner, to any digital character with a corresponding anatomical model.
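The linearity critiqued above can be made concrete: a blendshape rig evaluates the face as a weighted sum of sculpted shape deltas, so no non-linear constraint (such as volume preservation) is enforced. A minimal sketch of this evaluation, with all names and numbers illustrative rather than taken from the poster:

```python
import numpy as np

def blendshape_eval(neutral, shapes, weights):
    """Evaluate a linear blendshape rig.

    neutral : (V, 3) rest-pose vertex positions
    shapes  : (S, V, 3) sculpted target shapes
    weights : (S,)     blend weights, typically in [0, 1]
    """
    deltas = shapes - neutral  # per-shape offsets from the rest pose
    # Face = rest pose + weighted sum of deltas (purely linear).
    return neutral + np.tensordot(weights, deltas, axes=1)

# Tiny two-vertex, two-shape example (illustrative only).
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0]])
shapes = np.array([neutral + [0.0, 1.0, 0.0],   # hypothetical "raise" shape
                   neutral + [0.0, 0.0, 1.0]])  # hypothetical "push" shape
face = blendshape_eval(neutral, shapes, np.array([0.5, 0.25]))
```

Because the output is an affine function of the weights, intermediate poses are simple interpolations of the sculpted extremes; the anatomical model proposed here replaces this with a physics-based simulation instead.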

Michael Bao is a third-year Ph.D. candidate advised by Professor Ronald Fedkiw. His current research interests lie in combining computer vision, machine learning, and computer graphics for use in high-end visual effects. Specifically, he wishes to apply these techniques toward anatomically tracking and retargeting facial performances. He is also currently an Associate R&D Engineer at Industrial Light & Magic.