Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2006)
M.-P. Cani, J. O’Brien (Editors)
Sketching Articulation and Pose for Facial Animation
Edwin Chang and Odest Chadwicke Jenkins
Brown University
Abstract
We present a methodology for articulating and posing meshes, in particular facial meshes, through a 2D sketching
interface. Our method establishes an interface between 3D meshes and 2D sketching with the inference of reference
and target curves. Reference curves allow for user selection of features on a mesh and their manipulation to match
a target curve. Our articulation system uses these curves to specify the deformations of a character rig, forming
a coordinate space of mesh poses. Given such a coordinate space, our posing system uses reference and target
curves to find the optimal pose of the mesh with respect to the sketch input. We present results demonstrating
the efficacy of our method for mesh articulation, mesh posing with articulations generated in both Maya and our
sketch-based system, and mesh animation using human features from video. Through our method, we aim to provide both novice-accessible interfaces for articulating and posing meshes and rapid prototyping of complex deformations for more experienced users.
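To make the posing step described above concrete, the following sketch (in Python; not taken from the paper, and all names and the least-squares blending formulation are our own assumptions) illustrates how blend weights in a coordinate space of poses might be chosen so that a deformed reference curve best matches a user-drawn target curve.

import numpy as np
from scipy.optimize import minimize

def pose_from_curves(reference_curve, target_curve, pose_displacements):
    # reference_curve, target_curve: (N, 2) arrays of resampled 2D curve points.
    # pose_displacements: (P, N, 2) displacements of the reference curve in each
    # of P articulation poses (a hypothetical blendshape-style pose space).
    # Returns blend weights (P,) in [0, 1] minimizing curve-to-curve distance.
    def objective(weights):
        deformed = reference_curve + np.tensordot(weights, pose_displacements, axes=1)
        return np.sum((deformed - target_curve) ** 2)

    num_poses = pose_displacements.shape[0]
    result = minimize(objective, x0=np.zeros(num_poses),
                      bounds=[(0.0, 1.0)] * num_poses)
    return result.x

Both curves are assumed to be resampled to the same number of points so that correspondence between them is by index; the paper's actual objective and pose parameterization may differ.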
Categories and Subject Descriptors (according to ACM CCS): I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Geometric Transformations
1. Introduction
Articulating and posing are both tasks inherent to the an-
imation of 3D meshes. Defining the articulation (or rig-
ging) of a mesh traditionally involves specification of sev-
eral deformation variables over the range of desired motion.
To achieve satisfactory results, a user may need to manually
specify deformation settings for hundreds of vertices. Fur-
thermore, an infinite number of plausible deformations can
exist for a given mesh, ranging from the realistic flexing
and extending of underlying muscle to cartoon squash and
stretch motion. Consequently, articulation is often a tedious
and complex process requiring substantial technical as well
as artistic skill. This problem is compounded when defining
the articulation of a facial mesh, where motion is quickly
discernible as natural or unnatural to a human viewer.
Once articulation is performed, an animator creates ani-
mations by specifying poses of the mesh in the articulation
space. To specify a pose efficiently, an animator is often pro-
vided with a control rig composed of widgets and sliders that provide puppet-like control of the mesh deformation. Unfortunately, users face a considerable learning curve in understanding and using such a control rig, often spending as much time learning the rig as was spent creating it.
To address the lack of accessibility in current rigging systems, we aim to leverage the familiarity of 2D sketching as an interface for 3D mesh animation. While current articulation and posing interfaces provide detailed control, they are neither intuitive nor accessible for novices and traditional animators trained with pencil and paper. A sketching interface, in contrast, offers a familiar medium while still providing a high level of control to users. It can be partic-
ularly helpful to a novice who lacks a strong understanding
of facial movement but is comfortable working with simple
line drawings of a face. For traditional animators, sketching
provides a direct correlation between hand drawn and 3D
animation.
In this paper, we present a 2D sketching interface to fa-
cilitate procedures for articulating a single mesh and posing
an articulated mesh. Our method focuses on the inference of
reference and target curves on the mesh from user sketch in-
put. In our posing procedure, the user first draws a sketch to
place a reference curve on the mesh. The user then draws a
sketch to identify a target curve, which specifies the desired