Dressing and hair-styling virtual characters from a sketch

Jamie Wither, Marie-Paule Cani

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Sketch-based modeling and editing of free-form shapes has become popular in the past few years. The user typically sketches and refines a shape from different viewpoints and zoom factors. Assumptions are usually made about the nature of the resulting shape: for example, that the generated surface should be a smooth, closed surface surrounding a volume of arbitrary topological genus. Inferring 3D from 2D is generally done by inflating the 2D contour of each shape component, guessing the depth of the shape in the third dimension or modifying it under the user's control. This chapter presents a different application of sketch-based modeling: the case when the nature of the object to be modeled is well known (modeling a mountain, a flower, or a tree; or, using the examples from this chapter, modeling a garment or a hairstyle). Knowing the nature of the model the user wants to create changes things considerably: all the prior knowledge we have of the object being modeled can be expressed and used to infer the third dimension from 2D. This enables much more information to be extracted from a single sketch, which reduces the need to specify the desired shape from several different viewpoints. In some cases, the technique can even be seen as designing a procedural model and measuring its shape parameters on the user's sketch. 3D is then easily inferred, but the quality of the reconstruction depends on how well the sketch fits the potential outputs of the procedural model. We illustrate the strength of these dedicated sketch-based interfaces by detailing the specific examples of designing clothing and hair for a virtual character. These examples, for which several different sketch-based reconstruction techniques are presented, will help us characterize the prior knowledge that can be exploited when reconstructing a complex model from a sketch, from basic rules of thumb to more intricate geometric or physically-based properties.
This will provide the basis for a general methodology for sketch-based interfaces for complex models. Let us first emphasize the usefulness of sketch-based modeling for the specific applications described in this chapter. Modeling a garment or a hairstyle for a given character is tedious with a standard modeling system. Usually it is done in one of two ways:

Geometrically: a computer artist designs the garment or hairstyle shape directly, such as manually modeling the garment mesh with all the folds that make it look natural, or creating and shaping the hundreds of generalized cylinders representing the character's hair wisps (a long process even with the multiresolution editing and style copy/paste techniques of [12]). In this case the user gets no help from the system (the level of realism depends only on his or her skill); animating the garment or hair will be difficult, since the result is not the rest position of a physically-based model; and the same process must be started from scratch whenever another piece of clothing or another hairstyle needs to be modeled.

Using physically-based modeling, which guarantees some degree of realism and eases subsequent animation: for garments, systems such as Maya nCloth [4] exploit the fact that a garment is a set of flat patterns sewn together, which fold under gravity and through collisions with the character's body. Here the user needs some skill in tailoring in order to design and position the patterns before a physically-based simulation computes the garment's shape. Similarly, a physically-based model can be used for hair [5], but then the designer requires hair-dressing skills, since the hair must be wetted, cut, and shaped before the desired hairstyle is obtained.

Whichever method is used, computer artists typically spend hours designing a garment or a hairstyle.
In contrast, the sketch-based interfaces presented below enable the creation of a variety of clothing and hairstyles in minutes, using intuitive sketching and annotation techniques that leverage the artist's existing sketching skills. The remainder of this chapter presents different solutions to sketch-based clothing and hairstyling, classified according to the nature of the prior knowledge they rely on. Section 14.2 presents a simple method for generating a plausible 3D garment from silhouettes and fold lines sketched over a front (and optionally back) view of a mannequin; the method for inferring 3D simply expresses our basic understanding of such a sketch. Section 14.3 compares two solutions that incorporate prior geometric knowledge, namely the fact that a garment is a piecewise developable surface made by assembling a set of 2D patterns; the associated folds can then be generated either procedurally or through physically-based simulation. Section 14.4 illustrates the case when a full procedural model of the object is available, here a static physically-based model for hair; the sketching interface can then be seen as a way to offer quick and intuitive control over the parameters that indirectly shape the model. Finally, Section 14.5 summarizes and discusses the general methodology used in these systems, namely combining procedural modeling with sketch-based interfaces to quickly design complex models.
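To make the contour-inflation idea mentioned above concrete, the following is a minimal illustrative sketch (not the chapter's actual algorithm): a binary 2D silhouette is inflated into a height field by assigning each interior cell a height that grows with its distance to the silhouette boundary, approximating a rounded cross-section. The function name `inflate_silhouette` and the square-root height profile are assumptions for illustration; real sketch-based systems use smoother, variational inflation.

```python
import math
from collections import deque

def inflate_silhouette(mask):
    """Inflate a binary 2D silhouette into a toy height field.

    Each interior cell receives height sqrt(d), where d is an
    approximate (4-connected) grid distance to the nearest background
    cell, giving a rounded profile; cells outside the shape stay flat.
    """
    h, w = len(mask), len(mask[0])
    # Multi-source BFS from all background cells: computes, for every
    # cell, the 4-connected distance to the silhouette's exterior.
    dist = [[math.inf] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                dist[y][x] = 0
                queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    # Rounded profile: higher toward the medial axis, zero outside.
    return [[math.sqrt(dist[y][x]) if mask[y][x] else 0.0
             for x in range(w)] for y in range(h)]
```

On a small silhouette this yields a surface that bulges toward the shape's center, which is exactly the "guess the depth from the contour" behavior described above; dedicated systems then refine this guess using prior knowledge of the object class.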

Original language: English
Title of host publication: Sketch-based Interfaces and Modeling
Publisher: Springer London
Pages: 369-395
Number of pages: 27
ISBN (Print): 9781848828117
DOIs
Publication status: Published - 1 Dec 2011
Externally published: Yes
