A GENERATIVE SKETCH SYSTEM FOR HUMAN HAIR MODELING

Sayed Fachrurrazi, Fadlisyah Fadlisyah

Abstract


In this paper, we present a generative sketch model for human hair analysis and synthesis. We treat hair images as 2D piecewise smooth vector (flow) fields and, thus, our representation is view-based, in contrast to the physically-based 3D hair models in graphics. The generative model has three levels. The bottom level is the high-frequency band of the hair image. The middle level is a piecewise smooth vector field for the hair orientation, gradient strength, and growth directions. The top level is an attribute sketch graph for representing the discontinuities in the vector field. A sketch graph typically has a number of sketch curves which are divided into 11 types of directed primitives. Each primitive is a small window (say 5 × 7 pixels) where the orientations and growth directions are defined in parametric forms, for example, hair boundaries, occluding lines between hair strands, dividing lines on top of the hair, etc. In addition to the three-level representation, we model the shading effects, i.e., the low-frequency band of the hair image, by a linear superposition of some Gaussian image bases, and we encode the hair color by a color map. The inference algorithm is divided into two stages: 1) we compute the undirected orientation field and sketch graph from an input image, and 2) we compute the hair growth direction for the sketch curves and the orientation field using a Swendsen-Wang cut algorithm. Both steps maximize a joint Bayesian posterior probability. The generative model provides a straightforward way for synthesizing realistic hair images and stylistic drawings (rendering) from a sketch graph and a few Gaussian bases. The latter can be either inferred from a real hair image or input (edited) manually using a simple sketching interface. We test our algorithm on a large data set of hair images with diverse hair styles.
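As a rough illustration of the shading term described in the abstract (the low-frequency band modeled as a linear superposition of Gaussian image bases), the Python sketch below sums a few parametric Gaussian bases over an image patch. The function names and the (cx, cy, sigma_x, sigma_y, amplitude) parameterization are illustrative assumptions for this sketch only, not the paper's actual representation or code.

    import numpy as np

    def gaussian_basis(height, width, cx, cy, sigma_x, sigma_y, amplitude):
        # Evaluate one anisotropic Gaussian image basis centred at (cx, cy).
        ys, xs = np.mgrid[0:height, 0:width]
        return amplitude * np.exp(-0.5 * (((xs - cx) / sigma_x) ** 2 +
                                          ((ys - cy) / sigma_y) ** 2))

    def shading_layer(height, width, bases):
        # Low-frequency shading band as a linear superposition of Gaussian
        # bases; `bases` holds (cx, cy, sigma_x, sigma_y, amplitude) tuples.
        layer = np.zeros((height, width))
        for cx, cy, sx, sy, a in bases:
            layer += gaussian_basis(height, width, cx, cy, sx, sy, a)
        return layer

    # Toy example: two broad highlights on a 64 x 64 hair patch.
    shading = shading_layer(64, 64, [(20, 16, 12, 20, 0.6), (44, 40, 10, 18, -0.3)])
    print(shading.shape, float(shading.min()), float(shading.max()))

In the paper this low-frequency layer is added back to the synthesized high-frequency hair texture; the sketch only shows how a handful of Gaussian bases can describe smooth shading with very few parameters.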

Full Text: PDF

References


A. Barbu and S.C. Zhu, “Graph Partition by Swendsen-Wang Cuts,” Proc. Int’l Conf. Computer Vision, pp. 320-327, 2003.

J.R. Bergen and E.H. Adelson, “Theories of Visual Texture Perception,” Spatial Vision, 1991.

B. Cabral and L.C. Leedom, “Imaging Vector Fields Using Line Integral Convolution,” Proc. 20th Conf. Computer Graphics and Interactive Techniques, pp. 263-270, 1993.

T. Chan and J.H. Shen, “Variational Restoration of Non-Flat Image Features: Models and Algorithms,” SIAM J. Applied Math, vol. 61, pp. 1338-1361, 2001.

J.T. Chang, J.Y. Jin, and Y.Z. Yu, “A Practical Model for Hair Mutual Interactions,” Proc. Siggraph/Eurographics Symp. Computer Animation, 2002.

H. Chen, Z.Q. Liu, C. Rose, Y.Q. Xu, H.Y. Shum, and D. Salesin, “Example-Based Composite Sketching of Human Portraits,” Proc. Third Int’l Symp. Non-Photorealistic Animation and Rendering, pp. 95-153, 2004.

A. Daldegan, N.M. Thalmann, T. Kurihara, and D. Thalmann, “An Integrated System for Modeling, Animation and Rendering Hair,” Proc. Computer Graphics Forum (Eurographics ’93), pp. 211-221, 1993.

S. Geman and D. Geman, “Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 6, no. 6, pp. 721-741, Nov. 1984.

B. Gooch and A. Gooch, Non-Photorealistic Rendering. A.K. Peters, Ltd., 2001.




DOI: https://doi.org/10.29103/techsi.v4i1.105




Copyright (c) 2012 Sayed Fachrurrazi, Fadlisyah Fadlisyah

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

 



© Copyright of Journal TECHSI (e-ISSN: 2614-6029, p-ISSN: 2302-4836).
