A GENERATIVE SKETCH SYSTEM FOR HUMAN HAIR MODELING
DOI: https://doi.org/10.29103/techsi.v4i1.105

Abstract
In this paper, we present a generative sketch model for human hair analysis and synthesis. We treat hair images as 2D piecewise smooth vector (flow) fields and, thus, our representation is view-based, in contrast to the physically-based 3D hair models in graphics. The generative model has three levels. The bottom level is the high-frequency band of the hair image. The middle level is a piecewise smooth vector field for the hair orientation, gradient strength, and growth directions. The top level is an attribute sketch graph for representing the discontinuities in the vector field. A sketch graph typically has a number of sketch curves which are divided into 11 types of directed primitives. Each primitive is a small window (say, 5 × 7 pixels) where the orientations and growth directions are defined in parametric forms, for example, hair boundaries, occluding lines between hair strands, dividing lines on top of the hair, etc. In addition to the three-level representation, we model the shading effects, i.e., the low-frequency band of the hair image, by a linear superposition of some Gaussian image bases, and we encode the hair color by a color map. The inference algorithm is divided into two stages: 1) we compute the undirected orientation field and sketch graph from an input image, and 2) we compute the hair growth directions for the sketch curves and the orientation field using a Swendsen-Wang cut algorithm. Both steps maximize a joint Bayesian posterior probability. The generative model provides a straightforward way of synthesizing realistic hair images and stylistic drawings (renderings) from a sketch graph and a few Gaussian bases. The latter can be either inferred from a real hair image or input (edited) manually using a simple sketching interface. We test our algorithm on a large data set of hair images with diverse hair styles.

References
1. A. Barbu and S.C. Zhu, "Graph Partition by Swendsen-Wang Cuts," Proc. Int'l Conf. Computer Vision, pp. 320-327, 2003.
2. J.R. Bergen and E.H. Adelson, "Theories of Visual Texture Perception," Spatial Vision, 1991.
3. B. Cabral and L.C. Leedom, "Imaging Vector Fields Using Line Integral Convolution," Proc. 20th Conf. Computer Graphics and Interactive Techniques, pp. 263-270, 1993.
4. T. Chan and J.H. Shen, "Variational Restoration of Non-Flat Image Features: Models and Algorithms," SIAM J. Applied Math., vol. 61, pp. 1338-1361, 2001.
5. J.T. Chang, J.Y. Jin, and Y.Z. Yu, "A Practical Model for Hair Mutual Interactions," Proc. SIGGRAPH/Eurographics Symp. Computer Animation, 2002.
6. H. Chen, Z.Q. Liu, C. Rose, Y.Q. Xu, H.Y. Shum, and D. Salesin, "Example-Based Composite Sketching of Human Portraits," Proc. Third Int'l Symp. Non-Photorealistic Animation and Rendering, pp. 95-153, 2004.
7. A. Daldegan, N.M. Thalmann, T. Kurihara, and D. Thalmann, "An Integrated System for Modeling, Animating and Rendering Hair," Proc. Computer Graphics Forum (Eurographics '93), pp. 211-221, 1993.
8. S. Geman and D. Geman, "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 6, pp. 721-741, Nov. 1984.
9. B. Gooch and A. Gooch, Non-Photorealistic Rendering. A.K. Peters, Ltd., 2001.
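The undirected orientation field computed in the first inference stage of the abstract can be illustrated with a standard structure-tensor estimate. This is a minimal sketch under stated assumptions, not the authors' implementation: the function name `orientation_field` and the smoothing scale `sigma` are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_field(image, sigma=2.0):
    """Estimate a per-pixel undirected orientation (in [0, pi)) and a
    gradient-strength map from a grayscale image via a structure tensor."""
    gy, gx = np.gradient(image.astype(float))
    # Structure-tensor entries, smoothed over a local window.
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    # Dominant gradient angle; the hair flow runs perpendicular to it.
    theta_grad = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    orientation = np.mod(theta_grad + np.pi / 2.0, np.pi)
    strength = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    return orientation, strength

# Synthetic "hair" image: horizontal stripes, so the gradient is vertical
# and the recovered flow orientation should be near-horizontal (0 mod pi).
img = np.sin(np.linspace(0, 8 * np.pi, 64))[:, None] * np.ones((64, 64))
ori, stren = orientation_field(img)
```

On such a striped test image the estimated orientations follow the stripes, which is exactly the kind of view-based "flow" representation the model's middle level encodes; the direction (growth) ambiguity of this undirected field is what the second, Swendsen-Wang cut stage resolves.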
License
Authors retain copyright and grant the journal the right of first publication. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 license, which allows others to share the work with an acknowledgement of the work's authorship and its initial publication in this journal.
All articles in this journal may be disseminated provided valid sources are listed, and the title of the article must not be omitted. The content of the article is the responsibility of the author.
Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
When disseminating their articles, authors must acknowledge the TECHSI Journal as the first publisher of the article.
