Animatable face models from uncalibrated input pictures
2009 (English). In: 2009 10th International Conference on Telecommunications, ConTEL 2009, pp. 177-184. Conference paper (Refereed)
In networked virtual environments, videoconferences, or chatting over the Internet, users are often graphically represented by virtual characters. Modeling realistic virtual heads of users suitable for animation implies a heavy artistic effort and resource cost. This paper introduces a system that generates a 3D model of a real human head with little human intervention. The system receives five orthogonal input photographs of the human head and a generic template 3D model. It requires manual annotation of 94 feature points on each photograph. The same set of feature points must be selected on the template model in a preprocessing step that is done only once. The computing process consists of two phases: a morphing phase and a coloring phase. In the morphing phase, the template model is morphed in two steps using a Radial Basis Function (RBF) to take a shape similar to the shape of the real human head. In the coloring phase, the deformed model is colored using the input photographs based on a cubemap projection, which leads to a realistic appearance of the model while allowing for real-time performance. We show the use of the output model by automatically copying facial motions from the template model to the deformed model, while preserving the compliance of the motion to the MPEG-4 FBA standard.
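The paper does not reproduce the details of its RBF morph here, but the core idea of RBF-based scattered-data deformation can be sketched as follows. This is a minimal illustration, assuming a linear kernel and NumPy; the function name `rbf_warp` and the kernel choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, vertices, phi=lambda r: r):
    """Deform mesh vertices so annotated feature points move from
    src_pts to dst_pts, interpolating smoothly in between.

    src_pts, dst_pts : (n, 3) corresponding feature points
    vertices         : (m, 3) mesh vertices to deform
    phi              : radial kernel (linear kernel assumed here)
    """
    # Pairwise distances between the source feature points
    d = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
    A = phi(d)                      # (n, n) RBF system matrix
    disp = dst_pts - src_pts        # required displacement at each feature point
    # Solve A @ w = disp; one weight vector per coordinate axis
    w = np.linalg.solve(A, disp)
    # Evaluate the interpolant at every mesh vertex
    dv = np.linalg.norm(vertices[:, None] - src_pts[None, :], axis=-1)
    return vertices + phi(dv) @ w

# By construction the warp reproduces the feature-point correspondences
# exactly; other vertices are displaced by the smooth RBF interpolant.
```

In the system described above, such a warp would be driven by the 94 annotated feature points, pulling the generic template toward the shape measured in the photographs.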
Subject category: Engineering and Technology
Identifiers
URN: urn:nbn:se:liu:diva-57074
OAI: oai:DiVA.org:liu-57074
DiVA: diva2:323587