Color has an important role in object recognition and visual working memory (VWM). Decoding color VWM in the human brain helps to understand the mechanisms of visual cognition and to evaluate memory ability. Recently, several studies showed that color could be decoded from scalp electroencephalogram (EEG) signals during the encoding stage of VWM, which processes visible information with strong neural coding. Whether color can be decoded from other VWM processing stages, especially the maintaining stage, which processes invisible information, is still unknown. Here, we constructed an EEG color graph convolutional network model (ECo-GCN) to decode colors during different VWM stages. Based on graph convolutional networks, ECo-GCN considers the graph structure of EEG signals and may be more efficient in color decoding. We found that (1) decoding accuracies for colors during the encoding, early, and late maintaining stages were 81.58%, 79.36%, and 77.06%, respectively, exceeding that during the pre-stimuli stage (67.34%), and (2) decoding accuracy during the maintaining stage could predict participants' memory performance. The results suggest that EEG signals during the maintaining stage may be more sensitive than behavioral measures for predicting human VWM performance, and that ECo-GCN provides an effective approach to exploring human cognitive function.
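The abstract does not spell out the ECo-GCN architecture; the following is only a minimal sketch of the normalized graph-convolution building block such a model rests on, with hypothetical electrode counts, feature sizes, and a made-up adjacency matrix (in practice the graph would come from electrode layout or connectivity, and the weights from training).

```python
import numpy as np

def gcn_layer(features, adjacency, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    features:  (n_electrodes, n_in)  per-electrode EEG feature vectors
    adjacency: (n_electrodes, n_electrodes) electrode connectivity graph
    weights:   (n_in, n_out) learnable projection
    """
    a_hat = adjacency + np.eye(adjacency.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))    # symmetric degree normalization
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features   # mix neighboring electrodes
    return np.maximum(propagated @ weights, 0.0)              # linear map + ReLU

# Hypothetical toy example: 8 electrodes, 16 input features, 4 output features.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
adj = (rng.random((8, 8)) > 0.7).astype(float)
adj = np.maximum(adj, adj.T)                 # make the toy graph undirected
w = rng.normal(size=(16, 4))
print(gcn_layer(x, adj, w).shape)            # (8, 4)
```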
An object-based image retrieval method is addressed in this paper. For that purpose, a new image segmentation algorithm and a method for comparing segmented objects are proposed. For image segmentation, color and textural features are extracted from each pixel in the image, and these features are used as inputs to a VQ (vector quantization) clustering method, which yields objects that are homogeneous in color and texture. In this procedure, colors are quantized into a few dominant colors for simple representation and efficient retrieval. For retrieval, two comparison schemes are proposed: comparing one query object against the multiple objects of a database image, and comparing multiple query objects against the multiple objects of a database image. For fast retrieval, dominant object colors are key-indexed into the database.
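The paper's exact VQ design (codebook size, feature set, training rule) is not given in the abstract; the sketch below is a generic k-means-style stand-in for VQ clustering of per-pixel color/texture features, with all sizes and the random features purely illustrative.

```python
import numpy as np

def vq_segment(features, n_codewords=8, n_iters=20, seed=0):
    """Cluster per-pixel feature vectors with a simple k-means-style VQ codebook.

    features: (n_pixels, n_dims) array of per-pixel color/texture features.
    Returns (labels, codebook); pixels sharing a label form one homogeneous region.
    """
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), n_codewords, replace=False)]
    for _ in range(n_iters):
        # Assign each pixel to its nearest codeword (squared Euclidean distance).
        dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each codeword to the mean of its assigned pixels.
        for k in range(n_codewords):
            members = features[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return labels, codebook

# Hypothetical usage: 5-D features (RGB plus two texture responses) for a 64x64 image.
feats = np.random.default_rng(1).random((64 * 64, 5))
labels, codebook = vq_segment(feats)
segmentation = labels.reshape(64, 64)   # per-pixel region labels
```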
An approach to integrating global and local kernel-based automated analysis of vocal fold images, aiming to categorize laryngeal diseases, is presented in this paper. The problem is treated as an image analysis and recognition task. A committee of support vector machines is employed to categorize vocal fold images into healthy, diffuse, and nodular classes. The techniques employed for the global and local analysis of laryngeal images are analysis of image color distribution, Gabor filtering, co-occurrence matrices, analysis of color edges, image segmentation into homogeneous regions from the color, texture, and geometry viewpoint, analysis of the soft membership of the regions in the decision classes, and kernel principal component-based feature extraction. Bearing in mind the high similarity of the decision classes, the correct classification rate of over 94% obtained when testing the system on 785 vocal fold images is rather encouraging.
This paper explores the utility of texture and color for iris recognition systems. It contributes to improved system accuracy with a reduced feature vector size of just 1 × 3 and reductions in the false acceptance rate (FAR) and false rejection rate (FRR). It avoids the iris normalization process traditionally used in iris recognition systems. The proposed method is compared with existing methods. Experimental results indicate that the proposed method, using only color, achieves 99.9993 accuracy, 0.0160 FAR, and 0.0813 FRR. The computational time achieved is 947.7 ms.
Color images depend on the color of the capture illuminant and on object reflectance. As such, image colors are not stable features for object recognition; however, stability is necessary, since perceived colors (the colors we see) are illuminant independent and do correlate with object identity. Before the colors in images can be compared, they must first be preprocessed to remove the effect of illumination. Two types of preprocessing have been proposed: first, run a color constancy algorithm, or second, apply an invariant normalization. In color constancy preprocessing the illuminant color is estimated and then, at a second stage, the image colors are corrected to remove the color bias due to illumination. In color invariant normalization, image RGBs are redescribed, in an illuminant-independent way, relative to the context in which they are seen (e.g. RGBs might be divided by a local RGB average). In theory the color constancy approach is superior since it works in a scene-independent way: a color invariant normalization can be calculated post-color constancy, but the converse is not true. However, in practice color invariant normalization usually supports better indexing. In this paper we ask whether color constancy algorithms will ever deliver better indexing than color normalization. The main result of this paper is to demonstrate an equivalence between color constancy and color invariant computation.
The equivalence is derived empirically from color object recognition experiments. Colorful objects are imaged under several different colors of light. To remove the dependency due to illumination, these images are preprocessed using either a perfect color constancy algorithm or the comprehensive color image normalization. In the perfect color constancy algorithm the illuminant is measured rather than estimated. The import of this is that the perfect color constancy algorithm can determine the actual illuminant without error and so bounds the performance of all existing and future algorithms. After color constancy or color normalization processing, the color content is used as a cue for object recognition. Counter-intuitively, perfect color constancy does not support perfect recognition. In comparison, the color invariant normalization does deliver near-perfect recognition. That the color constancy approach fails implies that the scene's effective illuminant is different from the measured illuminant. This explanation has merit, since it is well known that color constancy is more difficult in the presence of physical processes such as fluorescence and mutual illumination. Thus, in a second experiment, image colors are corrected based on a scene-dependent "effective illuminant". Here, color constancy preprocessing facilitates near-perfect recognition. Of course, if the effective light is scene dependent, then optimal color constancy processing is also scene dependent and so is equally a color invariant normalization.
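As a rough illustration of the color invariant normalization idea (and not the exact comprehensive normalization procedure used in the paper, whose scaling constants and convergence test may differ), the sketch below alternates a per-pixel normalization with a per-channel normalization over a linear RGB image.

```python
import numpy as np

def comprehensive_normalize(image, n_iters=10, eps=1e-8):
    """Iterative color normalization in the spirit of comprehensive normalization:
    alternate a per-pixel step (removes illuminant intensity) with a per-channel
    step (removes illuminant color) until the image is roughly stable.

    image: (H, W, 3) float array of linear RGB values.
    """
    img = image.astype(float).copy()
    for _ in range(n_iters):
        # Per-pixel step: divide each RGB triple by its sum (R + G + B).
        pixel_sum = img.sum(axis=2, keepdims=True) + eps
        img = img / pixel_sum
        # Per-channel step: divide each channel by (a multiple of) its image-wide mean.
        channel_mean = img.mean(axis=(0, 1), keepdims=True) + eps
        img = img / (3.0 * channel_mean)
    return img

# Hypothetical usage on a random "image"; the simpler variant named in the abstract
# (dividing RGBs by a local RGB average) would replace the loop with one local division.
normalized = comprehensive_normalize(np.random.default_rng(2).random((32, 32, 3)))
```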
Paintings convey the composition and characteristics of artists; therefore, it is possible to feel each artist's intended style and emotion through their paintings. In general, the basic elements that constitute traditional paintings are color, texture, and composition (the formative elements of a painting are color and shape); however, color is the most crucial element for expressing the emotion of a painting. In particular, traditional colors carry the historicity of their era, so the colors shown in painting images are considered representative of the culture to which the painting belongs. This study constructed a color emotion system by analyzing colors and rearranging color emotion adjectives based on the color combination techniques and clustering algorithm proposed by Kobayashi as well as the I.R.I HUE & TONE 120 System. Based on this color emotion system, the study extracted and classified emotions from traditional Korean painting images, focusing on paintings of the late Joseon Dynasty, and confirmed the classified emotions of the images. Moreover, it was possible to verify the cultural traits of the era through the classified emotion images.
This paper presents a graph-based ordering scheme for color vectors. A complete graph is defined over a filter window and its structure is analyzed to construct an ordering of the color vectors. This graph-based ordering is built by finding a Hamiltonian path across the color vectors of the filter window with a two-step algorithm. The first step extracts, by decimating a minimum spanning tree, the extreme values of the color set. These extreme values are considered the infimum and the supremum of the set of color vectors. The second step builds the ordering by constructing a Hamiltonian path among the color vectors, starting from the infimum and ending at the supremum. The properties of the proposed graph-based ordering are detailed. Several experiments are conducted to assess its filtering abilities for morphological and median filtering.
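The sketch below is not the paper's two-step algorithm (MST decimation followed by an exact Hamiltonian path); it is a cheap stand-in under assumed simplifications: the two most distant vectors play the role of infimum and supremum, and a greedy nearest-neighbor walk approximates the Hamiltonian path between them.

```python
import numpy as np

def order_color_vectors(colors):
    """Roughly order the color vectors of a filter window along a path.

    colors: (n, 3) array of color vectors from one filter window.
    Returns the indices of the vectors in path order, from a pseudo-infimum
    to a pseudo-supremum.
    """
    dists = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    inf_idx, sup_idx = np.unravel_index(dists.argmax(), dists.shape)  # most distant pair
    if inf_idx == sup_idx:                      # degenerate single-vector window
        return [int(inf_idx)]
    order = [int(inf_idx)]
    remaining = set(range(len(colors))) - {int(inf_idx), int(sup_idx)}
    while remaining:
        last = order[-1]
        # Visit the nearest unvisited vector next (greedy path construction).
        nxt = min(remaining, key=lambda j: dists[last, j])
        order.append(nxt)
        remaining.remove(nxt)
    order.append(int(sup_idx))                  # end the path at the pseudo-supremum
    return order

# With an ordering in hand, a median filter can take the middle vector of the path,
# and morphological erosion/dilation can take its endpoints.
```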
In this paper, we present two steps in the process of automatic annotation of archeological images: feature extraction and feature selection. We focus our research on archeological images, which are widely studied today, and cover the most important steps in the automatic annotation of an image. Feature extraction techniques are applied to obtain the features used to classify and recognize the images, while feature selection reduces the number of uninteresting features. We review various feature extraction techniques for analyzing archaeological images; each feature corresponds to one or more feature descriptors in the archeological images. We focus on the shape descriptor for extracting archaeological objects in the images, using contour-based shape recognition of the monuments. The feature selection stage then serves to retain the most interesting features and improve classification accuracy. In the feature selection section, we present a comparative study of feature selection techniques and then propose how to apply feature selection methods to archaeological images. Finally, we evaluate the performance of the two steps already mentioned: feature extraction and feature selection.
Recently proposed color-based tracking systems are unable to properly adapt the ellipse that represents the object to be tracked. This typically leads to inaccurate descriptions of the object in later applications. This paper presents a Lagrangian-based method for deriving a regularizing component for the covariance matrix. Technically, we aim to reduce the residuals between the estimated probability distribution and the expected one. We argue that, by doing this, the shape of the ellipse can be properly adapted in the tracking stage. Experimental results show that the proposed method performs favorably in shape adaptation and object localization.
Nutrition is an essential component of agriculture worldwide to assure high and consistent crop yields. In rice crops, the leaves frequently present signs of nutritional deficiencies, and a deficiency in the rice plant can be diagnosed from the leaf color and form. Image classification is an effective and rapid method for analyzing such conditions. However, despite significant success in image classification, Ensemble Learning (EL) has rarely been applied to paddy nutrition analysis. Ensemble learning is a technique for deliberately constructing and combining numerous classifier models to tackle a specific computational problem. In this work, we investigate the precision of several deep learning algorithms for detecting nutritional deficits in rice leaves. Through soil and agricultural studies, around 2000 images of rice plant leaves were collected, encompassing complete nutrition and five categories of nutrient deficiency. The images were split into training, validation, and testing sets in a 4:2:2 ratio. An EL method is chosen for the diagnosis and classification of nutritional deficits. Here, the EL procedure is a hybrid classification model that integrates CapsNet (capsule network) and GCN (graph convolutional network) models. The effectiveness of the hybrid classifier was verified using color and lesion features and compared with standard machine learning techniques. This research shows that EL strategies can effectively detect nutritional deficits in paddy. Furthermore, the suggested hybrid classification model achieved better accuracy, sensitivity, and specificity rates of 97.13%, 97.22%, and 96.47%, respectively.
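The abstract does not describe how the CapsNet and GCN outputs are fused; purely as an illustration of the ensemble idea, the sketch below soft-votes over two hypothetical per-class probability matrices, which may differ from the paper's actual hybrid architecture.

```python
import numpy as np

def ensemble_predict(prob_capsnet, prob_gcn, weight=0.5):
    """Soft-voting ensemble of two classifiers' class-probability outputs.

    prob_capsnet, prob_gcn: (n_samples, n_classes) softmax outputs of two base
    models (hypothetical placeholders for the paper's CapsNet and GCN branches).
    weight: contribution of the first model; 0.5 is a plain average.
    """
    combined = weight * prob_capsnet + (1.0 - weight) * prob_gcn
    return combined.argmax(axis=1)   # predicted deficiency class per leaf image
```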
The use of color in computer vision has received growing attention. This chapter introduces the basic principles underlying the physics and perception of color and reviews the state-of-the-art in color vision algorithms. Parts of this chapter have been condensed from [58] while new material has been included which provides a critical review of recent work. In particular, research in the areas of color constancy and color segmentation is reviewed in detail.
The first section reviews physical models for color image formation as well as models for human color perception. Reflection models characterize the relationship between a surface, the illumination environment, and the resulting color image. Physically motivated linear models are used to approximate functions of wavelength using a small number of parameters. Reflection models and linear models are introduced in Section 1 and play an important role in several of the color constancy and color segmentation algorithms presented in Sections 2 and 3. For completeness, we also present a concise summary of the trichromatic theory which models human color perception. A discussion is given of color matching experiments and the CIE color representation system. These models are important for a wide range of applications including the consistent representation of color on different devices. Section 1 concludes with a description of the most widely used color spaces and their properties.
The second section considers progress on computational approaches to color constancy. Human vision exhibits color constancy as the ability to perceive stable surface colors for a fixed object under a wide range of illumination conditions and scene configurations. A similar ability is required if computer vision systems are to recognize objects in uncontrolled environments. We begin by reviewing the properties and limitations of the early retinex approach to color constancy. We describe in detail the families of linear model algorithms and highlight algorithms which followed. Section 2 concludes with a subsection on recent indexing methods which integrate color constancy with the higher level recognition process.
Section 3 addresses the use of color for image segmentation and stresses the role of image models. We start by presenting classical statistical approaches to segmentation which have been generalized to include color. The more recent emphasis on the use of physical models for segmentation has led to new classes of algorithms which enable the accurate segmentation of effects such as shadows, highlights, shading, and interreflection. Such effects are often a source of error for algorithms based on classical statistical models. Finally, we describe a color texture model which has been used successfully as the basis of an algorithm for segmenting images of natural outdoor scenes.
In this paper, we present a content-based image retrieval system. The content of images includes both low-level features, such as colors and textures, and high-level features, such as spatial constraints and the shapes of relevant regions. Based on object technology, the image features and behaviors are modeled and stored in a database. Images can be retrieved by example (show me images similar to this image) or by selecting properties from pickers such as a sketched shape, a color histogram, a spatial constraint interface, a list of keywords, or a combination of these. The integration of high- and low-level features in the object-oriented database is an important property of our work.
In this paper, the elements of clothing color design were discussed, analyzed, and decomposed into hue, value, and chroma, and a 96-color sample base was obtained. The knowledge related to clothing color design includes two parts: design knowledge, which covers the design rules and theories, and sensory knowledge, which describes the relations between the color samples and the meanings of adjective word pairs in a sensory image base. All this knowledge can be elicited and acquired from professionals through interviewing, card sorting, fuzzy clustering, and similar methods. According to the customer's character, the TPO principle (time, place, and occasion), and the beauty rule, improper colors were first deleted from the color sample base, and the remainder were clustered using the transitive closure of a fuzzy equivalence matrix. An application framework was developed to show the whole process of knowledge use.
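The clustering step named above can be illustrated, under assumptions, by computing the transitive closure of a fuzzy similarity matrix via repeated max-min composition and then taking a lambda-cut; the similarity matrix, cut level, and loop bound below are hypothetical, and the paper's exact procedure may differ.

```python
import numpy as np

def maxmin_compose(r):
    """Max-min composition (R o R) of a fuzzy relation matrix."""
    n = r.shape[0]
    out = np.zeros_like(r)
    for i in range(n):
        for j in range(n):
            out[i, j] = np.max(np.minimum(r[i, :], r[:, j]))
    return out

def transitive_closure(similarity, max_iters=20):
    """Iterate R <- max(R, R o R) until the fuzzy relation becomes transitive.

    similarity: (n, n) reflexive, symmetric fuzzy similarity matrix in [0, 1].
    """
    r = similarity.copy()
    for _ in range(max_iters):
        r2 = np.maximum(r, maxmin_compose(r))
        if np.allclose(r2, r):
            break
        r = r2
    return r

def lambda_cut_clusters(closure, lam):
    """Group items whose closure similarity is at least the cut level lam."""
    n = closure.shape[0]
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] == -1:
            members = np.where(closure[i] >= lam)[0]
            labels[members] = current
            current += 1
    return labels
```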
The use of color in computer vision has received growing attention. This chapter gives the state-of-the-art in this subfield, and tries to answer the questions: What is color? Which are the adequate representations? How is it computed? What can be done using it?
The first section introduces some basic tools and models that can be used to describe the color imaging process. We first summarize the classical photometric and colorimetric notions: light measurement, intensity equation, color signal, color perception, trichromatic theory. The growing interest in color during the last few years comes from two new classes of models of reflection, physical models and linear models, which lead to highlight algorithms as well as color constancy algorithms. We present these models in detail and discuss some of their limitations.
The second section deals with the problem of color constancy. The term “color constancy” refers to the fact that the colors perceived by humans in real scenes are relatively stable under large variations of illumination and of material composition of scenes. From a computational standpoint, achieving color constancy is an underdetermined problem: computing the spectral reflectance from the sensor measurements. We compare three classes of color constancy algorithms, based on lightness computation, linear models, and physical models, respectively. For each class, the principle is explained, and one or two significant algorithms are given. A comparative study serves to introduce the others.
The third section is concerned with the use of color in universal, i.e. mainly low-level, vision tasks. We emphasize the distinction between tasks that have been extensively studied in monochromatic images and for which the contribution of color is just a quantitative generalization, and tasks where color has a qualitative role. In the first case, additional image features are obtained, and have to be represented and used efficiently. In the latter case, it is hoped that color can help recover intrinsic physical properties of scenes. We study successively three important themes in computer vision: edges, segmentation, matching. For each of them, we present the two frameworks for the use of color.
The application of color in information graphics is important and useful. Compared with text, color not only brings rich visual enjoyment to readers but also conveys a large amount of information quickly and effectively. In this paper, we integrate into our analysis the pattern of information transmission found in nature, namely color union and color comparison. This pattern is used to illustrate the importance and effect of color in information graphics. Concrete examples with specific colors are used to show the efficiency of information transmission through color.