Texture for Appearance Models in Computer Vision and Graphics

    https://doi.org/10.1142/9781848161160_0008
    Cited by: 1 (Source: Crossref)
    Abstract:

    Appearance modeling is fundamental to the goals of computer vision and computer graphics. Traditionally, appearance was modeled with simple shading models (e.g. Lambertian or specular) applied to known or estimated surface geometry. However, real-world surfaces such as hair, skin, fur, gravel, and scratched or weathered surfaces are difficult to model with this approach for a variety of reasons. In some cases it is not practical to obtain geometry because the variation is so complex and fine-scale; the geometric detail cannot be resolved with laser scanning devices or with stereo vision. Simple reflectance models assume that all light is reflected from the point where it hits the surface, i.e. no light is transmitted into the surface. But in many real surfaces, a portion of the light incident on one surface point is scattered beneath the surface and exits at other surface points. This subsurface scattering makes it difficult to accurately model a surface such as frosted glass or skin with a simple geometry-plus-shading model. So even when a precise geometric profile is attainable, applying a pointwise shading model is not sufficient. Because of these issues, image-based modeling has become a popular alternative to modeling with geometry and pointwise shading.
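
    The traditional approach described above can be illustrated with a minimal pointwise shading sketch. The Python code below is not from the chapter; it is an illustrative toy example (the function name shade_point and all parameter values are assumptions) of a Lambertian diffuse term plus a Phong-style specular lobe evaluated at a single surface point, i.e. exactly the kind of local model that cannot represent subsurface scattering.

import numpy as np

def shade_point(normal, light_dir, view_dir, albedo, specular=0.0, shininess=32.0):
    """Pointwise shading: all reflection happens at the surface point.
    Lambertian diffuse term plus an optional Phong-style specular lobe.
    (Illustrative sketch only; a local model like this cannot represent
    subsurface scattering, where light exits at points other than where
    it entered the surface.)"""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = albedo * max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l          # mirror reflection of the light direction
    spec = specular * max(np.dot(r, v), 0.0) ** shininess
    return diffuse + spec

# Example: a frontally lit point viewed head-on
print(shade_point(np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 1.0]),
                  albedo=0.8, specular=0.2))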

    Real-world surfaces are often textured with a variation in color (as in a paisley print or leopard spots) or a fine-scale variation in surface height (e.g. crumpled paper, rough plaster, sand). Surface texture complicates appearance prediction because local shading, shadowing, foreshortening, and occlusions change the observed appearance when the lighting or viewing direction changes. As an example, consider a globally planar surface of wrinkled leather: large local shadows appear when the surface is obliquely illuminated and disappear when it is frontally illuminated. Accounting for the variation of appearance due to changes in imaging parameters is a key issue in developing accurate models. The terms BRDF and BTF have been used to describe surface appearance. The BRDF (bidirectional reflectance distribution function) describes surface reflectance as a function of viewing and illumination angles. Since surface reflectance varies spatially for textured surfaces, the BTF was introduced to add spatial variation; more specifically, the bidirectional texture function (BTF) is the observed image texture as a function of viewing and illumination directions. In this chapter, topics in BRDF and BTF modeling for vision and graphics are presented. Two methods for texture recognition are described in detail: (1) bidirectional feature histograms, and (2) symbolic primitives that are more useful for recognizing subtle differences in texture.
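
    To make the BRDF/BTF distinction concrete, the following toy Python sketch (not the chapter's implementation; the function names, the gradient-magnitude feature, and the bin settings are all assumptions) treats a BTF as a lookup table of texture images indexed by discrete viewing and illumination conditions, whereas a BRDF would return only a single reflectance value per condition. It then builds a crude bidirectional feature histogram by accumulating a simple local texture feature over all sampled view/light conditions.

import numpy as np

def btf_sample(btf_images, view_idx, light_idx):
    """Return the texture image observed at one (view, light) condition.
    A BRDF, by contrast, would return a single reflectance value here."""
    return btf_images[view_idx, light_idx]

def bidirectional_feature_histogram(btf_images, bins=16):
    """Histogram of a per-pixel texture feature over all view/light samples.
    (Gradient magnitude is used as a stand-in local feature; the chapter's
    actual feature representation is not reproduced here.)"""
    hists = []
    n_views, n_lights = btf_images.shape[:2]
    for v in range(n_views):
        for l in range(n_lights):
            img = btf_sample(btf_images, v, l)
            gy, gx = np.gradient(img)               # simple local texture feature
            feat = np.hypot(gx, gy).ravel()
            h, _ = np.histogram(feat, bins=bins, range=(0.0, 1.0), density=True)
            hists.append(h)
    return np.concatenate(hists)                    # one descriptor per texture sample

# Synthetic BTF: 3 view x 4 light conditions, each a 64x64 texture image
rng = np.random.default_rng(0)
btf = rng.random((3, 4, 64, 64))
descriptor = bidirectional_feature_histogram(btf)
print(descriptor.shape)   # (3 * 4 * 16,) = (192,)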