Based on an understanding of the physiological structure of the human visual system, this paper posits that the mechanisms by which the human visual system perceives image color appearance include the adaptation of the retinal photoreceptors to ambient light and the spatial frequency response of neuronal receptive fields along the visual pathway. We first present a computational framework for an image color appearance model grounded in this human cognitive process; we then propose using Gabor wavelets as the basis functions of the visual nerve cells' response, applying the CIECAT02 model to the computation of image color adaptation and simulating the multi-scale superposition of the human visual spatial frequency tuning curves; finally, we develop an algorithm for predicting image color appearance. The results show that the proposed prediction algorithm matches human visual perception more closely than comparable algorithms.
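To make the spatial-frequency component concrete, the sketch below builds a small multi-scale, multi-orientation Gabor filter bank in Python; the kernel size, wavelengths, orientations, and the sigma-to-wavelength ratio are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate to the orientation
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()                # remove the DC response

# A small multi-scale, multi-orientation bank (illustrative parameters);
# convolving an image with each kernel and summing responses across scales
# mimics the superposition of spatial frequency tuning curves.
bank = [gabor_kernel(size=31, wavelength=w, theta=t, sigma=w / 2.0)
        for w in (4.0, 8.0, 16.0)
        for t in np.linspace(0.0, np.pi, 4, endpoint=False)]
```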
The colors in an image depend on both the color of the capture illuminant and the reflectances of the objects in the scene. As such, image colors are not stable features for object recognition; yet stability is necessary, since perceived colors (the colors we see) are illuminant independent and do correlate with object identity. Before the colors in images can be compared, they must first be preprocessed to remove the effect of illumination. Two types of preprocessing have been proposed: either run a color constancy algorithm or apply an invariant normalization. In color constancy preprocessing, the illuminant color is estimated and then, in a second stage, the image colors are corrected to remove the color bias due to illumination. In color invariant normalization, image RGBs are redescribed, in an illuminant-independent way, relative to the context in which they are seen (e.g. RGBs might be divided by a local RGB average). In theory the color constancy approach is superior, since it operates independently of scene content: a color invariant normalization can be calculated after color constancy, but the converse is not true. In practice, however, color invariant normalization usually supports better indexing. In this paper we ask whether color constancy algorithms will ever deliver better indexing than color normalization. The main result of this paper is to demonstrate an equivalence between color constancy and color invariant computation.
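A minimal sketch of the normalization route, assuming the local-average variant mentioned above (the window size and the use of a uniform box average are illustrative choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_average_normalize(img, win=15):
    """Redescribe each RGB relative to its local context by dividing each
    channel by a local box average; img is a float array of shape (H, W, 3)."""
    img = np.asarray(img, dtype=np.float64)
    local_mean = uniform_filter(img, size=(win, win, 1), mode='reflect')
    return img / (local_mean + 1e-12)
```

Because both a pixel and its surround scale with the illuminant under a diagonal model, the ratio cancels the illuminant color.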
The equivalence is derived empirically from color object recognition experiments. Colorful objects are imaged under several different colors of light. To remove the dependency due to illumination, these images are preprocessed using either a perfect color constancy algorithm or the comprehensive color image normalization. In the perfect color constancy algorithm the illuminant is measured rather than estimated. The import of this is that the perfect color constancy algorithm determines the actual illuminant without error and so bounds the performance of all existing and future algorithms. After color constancy or color normalization preprocessing, the color content is used as a cue for object recognition. Counter-intuitively, perfect color constancy does not support perfect recognition. In comparison, the color invariant normalization does deliver near-perfect recognition. That the color constancy approach fails implies that the effective scene illuminant differs from the measured illuminant. This explanation has merit, since it is well known that color constancy is more difficult in the presence of physical processes such as fluorescence and mutual illumination. Thus, in a second experiment, image colors are corrected based on a scene-dependent "effective illuminant". Here, color constancy preprocessing facilitates near-perfect recognition. Of course, if the effective light is scene dependent, then optimal color constancy processing is also scene dependent and so is equally a color invariant normalization.
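For concreteness, a sketch of the iterative scheme behind comprehensive color image normalization: alternate a pixel-wise normalization (removing intensity and shading) with a channel-wise normalization (removing illuminant color) until a fixed point is reached; the iteration cap and tolerance below are illustrative assumptions.

```python
import numpy as np

def comprehensive_normalize(rgbs, max_iters=50, tol=1e-8):
    """Alternate pixel-wise and channel-wise normalization to a fixed point.

    rgbs: float array of shape (N, 3), one RGB per row."""
    x = np.asarray(rgbs, dtype=np.float64)
    n = len(x)
    for _ in range(max_iters):
        prev = x
        # Pixel-wise: rescale each RGB so its components sum to 1
        x = x / (x.sum(axis=1, keepdims=True) + 1e-12)
        # Channel-wise: rescale each channel so it sums to N / 3
        x = x * (n / 3.0) / (x.sum(axis=0, keepdims=True) + 1e-12)
        if np.max(np.abs(x - prev)) < tol:
            break
    return x
```

Both conditions can hold simultaneously (each gives a total sum of N), which is why the alternation can settle at a fixed point.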
The use of color in computer vision has received growing attention. This chapter introduces the basic principles underlying the physics and perception of color and reviews the state of the art in color vision algorithms. Parts of this chapter have been condensed from [58], while new material has been added that provides a critical review of recent work. In particular, research in the areas of color constancy and color segmentation is reviewed in detail.
The first section reviews physical models for color image formation as well as models for human color perception. Reflection models characterize the relationship between a surface, the illumination environment, and the resulting color image. Physically motivated linear models are used to approximate functions of wavelength using a small number of parameters. Reflection models and linear models are introduced in Section 1 and play an important role in several of the color constancy and color segmentation algorithms presented in Sections 2 and 3. For completeness, we also present a concise summary of the trichromatic theory which models human color perception. A discussion is given of color matching experiments and the CIE color representation system. These models are important for a wide range of applications including the consistent representation of color on different devices. Section 1 concludes with a description of the most widely used color spaces and their properties.
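As a sketch of the linear-model idea, the snippet below approximates a reflectance spectrum with three basis weights by least squares; the cosine basis is a stand-in for a basis derived from measured reflectances (e.g. principal components), which such models assume in practice.

```python
import numpy as np

wavelengths = np.linspace(400.0, 700.0, 31)      # visible range, 10 nm steps

# Illustrative smooth 3-function basis; real systems use bases learned
# from measured reflectance datasets.
orders = np.arange(3)
basis = np.cos(np.outer(orders, np.pi * (wavelengths - 400.0) / 300.0)).T

def fit_linear_model(spectrum, basis):
    """Least-squares weights w such that basis @ w approximates the spectrum."""
    w, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return w

# Example: a smooth reflectance compressed to 3 parameters
reflectance = 0.5 + 0.3 * np.sin((wavelengths - 400.0) / 120.0)
w = fit_linear_model(reflectance, basis)
approximation = basis @ w                         # close to the original
```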
The second section considers progress on computational approaches to color constancy. Human color constancy is the ability to perceive stable surface colors for a fixed object under a wide range of illumination conditions and scene configurations. A similar ability is required if computer vision systems are to recognize objects in uncontrolled environments. We begin by reviewing the properties and limitations of the early retinex approach to color constancy. We then describe in detail the families of linear-model algorithms and highlight algorithms which followed. Section 2 concludes with a subsection on recent indexing methods which integrate color constancy with the higher-level recognition process.
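To give the flavor of this family, a minimal white-patch sketch in the retinex spirit: estimate the illuminant from the per-channel maxima, then divide it out as a diagonal (von Kries) correction. This is a textbook baseline, not any specific algorithm reviewed in the section.

```python
import numpy as np

def white_patch_correct(img):
    """Estimate the illuminant as the per-channel maximum and divide it out.

    img: float array of shape (H, W, 3). Assumes the brightest response in
    each channel comes from a near-white surface."""
    img = np.asarray(img, dtype=np.float64)
    illuminant = img.reshape(-1, 3).max(axis=0)   # estimated light color
    return img / (illuminant + 1e-12)             # diagonal correction
```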
Section 3 addresses the use of color for image segmentation and stresses the role of image models. We start by presenting classical statistical approaches to segmentation which have been generalized to include color. The more recent emphasis on the use of physical models for segmentation has led to new classes of algorithms which enable the accurate segmentation of effects such as shadows, highlights, shading, and interreflection. Such effects are often a source of error for algorithms based on classical statistical models. Finally, we describe a color texture model which has been used successfully as the basis of an algorithm for segmenting images of natural outdoor scenes.
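A sketch of the classical statistical approach generalized to color: treat pixels as independent samples in a color space and cluster them (plain k-means here; the RGB space and k=4 are illustrative assumptions). Its blindness to shadows and highlights is exactly the weakness the physics-based methods address.

```python
import numpy as np

def kmeans_color_segment(img, k=4, iters=20, seed=0):
    """Segment an image by k-means clustering of its RGB values; returns an
    (H, W) label map. Shadows and highlights tend to split one surface into
    several clusters, motivating physics-based alternatives."""
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to the nearest center
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```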
A color cast correction algorithm based on an improved Frankle-McCann Retinex is proposed to correct images affected by the illumination. To improve on the original algorithm, a distance-weighting factor based on a Gaussian function is introduced, and a linear stretch using the mean and the standard deviation is carried out. Experimental results demonstrate that the proposed algorithm improves the correction of color cast images.
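The full path-based Frankle-McCann iteration is too long to sketch here, but the two stated modifications are easy to illustrate in isolation; the two-standard-deviation stretch width and the [0, 1] clipping range below are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def gaussian_distance_weight(d, sigma):
    """Gauss-function weight for a pixel comparison at distance d, so that
    nearby pixels influence the estimate more than distant ones."""
    return np.exp(-d**2 / (2.0 * sigma**2))

def linear_stretch(channel, n_std=2.0):
    """Linearly stretch a channel around its mean: map
    [mean - n_std*std, mean + n_std*std] to [0, 1], clipping the tails."""
    mu, sd = channel.mean(), channel.std()
    lo, hi = mu - n_std * sd, mu + n_std * sd
    return np.clip((channel - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```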
The use of color in computer vision has received growing attention. This chapter surveys the state of the art in this subfield and tries to answer the questions: What is color? What are adequate representations for it? How is it computed? What can be done with it?
The first section introduces some basic tools and models that can be used to describe the color imaging process. We first summarize the classical photometric and colorimetric notions: light measurement, the intensity equation, the color signal, color perception, and trichromatic theory. The growing interest in color over the last few years stems from two new classes of reflection models, physical models and linear models, which have led to highlight algorithms as well as color constancy algorithms. We present these models in detail and discuss some of their limitations.
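For reference, the color signal and sensor (intensity) equation summarized above take the standard form below, where E is the illuminant spectral power, S the surface reflectance, and Q_k the k-th sensor sensitivity (this notation is assumed here, not quoted from the chapter):

\[
\rho_k \;=\; \int_{\lambda} E(\lambda)\, S(\lambda)\, Q_k(\lambda)\, d\lambda, \qquad k \in \{R, G, B\}.
\]

Three such numbers per pixel must summarize an entire function of wavelength, which is the root of the underdetermination discussed next.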
The second section deals with the problem of color constancy. The term “color constancy” refers to the fact that the colors perceived by humans in real scenes are relatively stable under large variations in the illumination and the material composition of scenes. From a computational standpoint, achieving color constancy is an underdetermined problem: recovering the spectral reflectance from a small number of sensor measurements. We compare three classes of color constancy algorithms, based on lightness computation, linear models, and physical models, respectively. For each class, the principle is explained and one or two significant algorithms are presented; a comparative study serves to introduce the remaining ones.
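As a baseline that makes the problem concrete (a common point of comparison, not one of the three classes above), a gray-world sketch: assume the scene average is achromatic, estimate the illuminant as the per-channel mean, and divide it out.

```python
import numpy as np

def gray_world_correct(img):
    """Gray-world baseline: the per-channel mean estimates the illuminant
    color under the assumption that the scene average is achromatic.

    img: float array of shape (H, W, 3)."""
    img = np.asarray(img, dtype=np.float64)
    illuminant = img.reshape(-1, 3).mean(axis=0)
    return img / (illuminant + 1e-12)
```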
The third section is concerned with the use of color in universal, i.e. mainly low-level, vision tasks. We emphasize the distinction between tasks that have been extensively studied on monochromatic images, for which the contribution of color is just a quantitative generalization, and tasks in which color plays a qualitative role. In the first case, additional image features are obtained and must be represented and used efficiently. In the latter case, the hope is that color can help recover intrinsic physical properties of scenes. We successively study three important themes in computer vision: edges, segmentation, and matching. For each of them, we present both frameworks for the use of color.
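As an instance of the quantitative generalization for edges, a sketch that computes a gradient magnitude per channel and combines channels by taking the maximum (one common convention, assumed here):

```python
import numpy as np

def color_edge_strength(img):
    """Per-channel gradient magnitude combined across channels by maximum;
    img is a float array of shape (H, W, 3), the result is (H, W).
    The maximum keeps edges that appear in only one channel, e.g. an
    isoluminant color boundary invisible to a grayscale detector."""
    img = np.asarray(img, dtype=np.float64)
    gy, gx = np.gradient(img, axis=(0, 1))
    return np.sqrt(gx**2 + gy**2).max(axis=2)
```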