Color plays an important role in object recognition and visual working memory (VWM). Decoding color VWM in the human brain helps to elucidate the mechanisms of visual cognitive processing and to evaluate memory ability. Recently, several studies showed that color could be decoded from scalp electroencephalogram (EEG) signals during the encoding stage of VWM, which processes visible information with strong neural coding. Whether color can be decoded from other VWM processing stages, especially the maintaining stage, which processes invisible information, is still unknown. Here, we constructed an EEG color graph convolutional network model (ECo-GCN) to decode colors during different VWM stages. Based on graph convolutional networks, ECo-GCN considers the graph structure of EEG signals and may be more efficient in color decoding. We found that (1) decoding accuracies for colors during the encoding, early maintaining, and late maintaining stages were 81.58%, 79.36%, and 77.06%, respectively, exceeding that during the pre-stimulus stage (67.34%), and (2) the decoding accuracy during the maintaining stage could predict participants' memory performance. The results suggest that EEG signals during the maintaining stage may be more sensitive than behavioral measurements in predicting human VWM performance, and that ECo-GCN provides an effective approach to exploring human cognitive function.
The Fierz transformations (FTs) for SU(2) and SU(3) operators in eighth-order fermion interactions are derived. These terms appear in expansions in terms of fermion currents of the form I1 I2 I3 I4, as well as in a composite of such currents, where the Γ(i) are either the SU(2) Pauli matrices or the SU(3) Gell-Mann matrices. The calculation is carried out using the exchange projection operators.
Historically, although some of the first indications for the use of lasers were in dentistry, offering relief from the sound of the drill and from mechanical contact, the adoption of lasers in dentistry has nevertheless been somewhat slower. This also differs between continents (e.g., the USA approved these applications much later than Europe). This paper analyzes the potential and existing applications of lasers in dentistry across the wide range of existing laser types, including their interaction with dental tissues, surgical applications on living tissue, prosthetic applications, and therapeutic doses. Another notable application is the precise determination of the color of materials (teeth and prosthetics) and, more generally, of their composition, using classic as well as modern laser techniques (LIBS and complementary techniques, applied to tooth tissue and bone), and especially the first Q-switched systems related to pain reduction, since short pulses (ns, ps, and fs) favor the intervention rate. Special attention should be paid to modeling the interaction and to analysis with appropriate software support.
An object-based image retrieval method is presented in this paper. To that end, a new image segmentation algorithm and a method for comparing segmented objects are proposed. For segmentation, color and textural features are extracted from each pixel in the image and used as inputs to a VQ (vector quantization) clustering method, which yields objects that are homogeneous in terms of color and texture. In this procedure, colors are quantized into a few dominant colors for simple representation and efficient retrieval. For retrieval, two comparison schemes are proposed: comparing one query object against the multiple objects of a database image, and comparing multiple query objects against the multiple objects of a database image. For fast retrieval, dominant object colors are key-indexed in the database.
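The dominant-color quantization step described above can be sketched with a simple k-means clustering as a stand-in for VQ codebook training; the function names, cluster count, and synthetic data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dominant_colors(pixels, k=4, iters=20, seed=0):
    """Cluster Nx3 RGB pixels into k dominant colors (a VQ-style codebook)."""
    rng = np.random.default_rng(seed)
    codebook = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest code vector.
        d = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each code vector to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = pixels[labels == j].mean(axis=0)
    return codebook, labels

# Usage: quantize a small synthetic "image" of reddish and bluish pixels.
rng = np.random.default_rng(1)
img = np.vstack([rng.normal((200, 30, 30), 5, (50, 3)),
                 rng.normal((30, 30, 200), 5, (50, 3))])
codebook, labels = dominant_colors(img, k=2)
```

Each pixel is then represented by its codebook index, which is what makes simple key-indexing of dominant object colors possible.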
An approach to integrating global and local kernel-based automated analysis of vocal fold images, aiming to categorize laryngeal diseases, is presented in this paper. The problem is treated as an image analysis and recognition task. A committee of support vector machines is employed to categorize vocal fold images into healthy, diffuse, and nodular classes. Analysis of image color distribution, Gabor filtering, co-occurrence matrices, analysis of color edges, image segmentation into homogeneous regions from the color, texture, and geometry points of view, analysis of the soft membership of the regions in the decision classes, and kernel principal component-based feature extraction are the techniques employed for the global and local analysis of laryngeal images. Bearing in mind the high similarity of the decision classes, the correct classification rate of over 94% obtained when testing the system on 785 vocal fold images is rather encouraging.
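One of the texture cues listed above, the gray-level co-occurrence matrix, can be sketched as follows; the pixel offset, gray-level count, and the contrast statistic derived from it are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalized co-occurrence counts of gray levels at offset (dy, dx).
    img must contain integer levels in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Contrast statistic: expected squared gray-level difference."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Usage: a uniform patch has zero contrast; a checkerboard has high contrast.
flat = np.zeros((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 3  # alternating levels 0 and 3
```

Statistics such as contrast, energy, or homogeneity computed from the GLCM then serve as texture features for the classifier committee.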
This paper demonstrates the utility of texture and color for iris recognition systems. It improves system accuracy with a reduced feature vector size of just 1 × 3, and reduces the false acceptance rate (FAR) and false rejection rate (FRR). It avoids the iris normalization process traditionally used in iris recognition systems. The proposed method is compared with existing methods. Experimental results indicate that the proposed method, using color alone, achieves an accuracy of 99.9993%, an FAR of 0.0160, and an FRR of 0.0813. The computational time achieved is 947.7 ms.
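The abstract does not specify how the 1 × 3 color feature is computed; a minimal sketch, assuming it is the mean RGB over the iris region and that matching uses a Euclidean distance threshold (the helper names and the threshold value are hypothetical):

```python
import numpy as np

def color_feature(iris_pixels):
    """Collapse an Nx3 array of iris RGB pixels into a 1x3 feature vector
    (assumed here to be the mean color; the paper's exact feature may differ)."""
    return iris_pixels.mean(axis=0, keepdims=True)  # shape (1, 3)

def match(f1, f2, threshold=10.0):
    """Accept the pair if the feature distance is below the threshold;
    the threshold trades FAR off against FRR."""
    return float(np.linalg.norm(f1 - f2)) < threshold

# Usage with synthetic iris patches of nearly uniform color.
enrolled = color_feature(np.full((100, 3), 120.0))
probe_same = color_feature(np.full((80, 3), 121.0))
probe_diff = color_feature(np.full((80, 3), 200.0))
```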
Color images depend on the color of the capturing illuminant and on object reflectance. Image colors are therefore not stable features for object recognition; yet stability is necessary, since perceived colors (the colors we see) are illuminant-independent and do correlate with object identity. Before the colors in images can be compared, they must first be preprocessed to remove the effect of illumination. Two types of preprocessing have been proposed: first, run a color constancy algorithm, or second, apply an invariant normalization. In color constancy preprocessing, the illuminant color is estimated and then, in a second stage, the image colors are corrected to remove the color bias due to illumination. In color invariant normalization, image RGBs are redescribed, in an illuminant-independent way, relative to the context in which they are seen (e.g., RGBs might be divided by a local RGB average). In theory the color constancy approach is superior, since it works scene-independently: a color invariant normalization can be calculated post-color-constancy, but the converse is not true. In practice, however, color invariant normalization usually supports better indexing. In this paper we ask whether color constancy algorithms will ever deliver better indexing than color normalization. The main result of this paper is to demonstrate an equivalence between color constancy and color invariant computation.
The equivalence is derived empirically from color object recognition experiments. Colorful objects are imaged under several different colors of light. To remove the dependency on illumination, these images are preprocessed using either a perfect color constancy algorithm or comprehensive color image normalization. In the perfect color constancy algorithm the illuminant is measured rather than estimated. The import of this is that the perfect color constancy algorithm determines the actual illuminant without error and so bounds the performance of all existing and future algorithms. After color constancy or color normalization processing, the color content is used as a cue for object recognition. Counter-intuitively, perfect color constancy does not support perfect recognition, whereas the color invariant normalization delivers near-perfect recognition. That the color constancy approach fails implies that the effective scene illuminant differs from the measured illuminant. This explanation has merit, since it is well known that color constancy is more difficult in the presence of physical processes such as fluorescence and mutual illumination. Thus, in a second experiment, image colors are corrected based on a scene-dependent "effective illuminant". Here, color constancy preprocessing facilitates near-perfect recognition. Of course, if the effective light is scene-dependent, then optimal color constancy processing is also scene-dependent and so is, equally, a color invariant normalization.
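A comprehensive normalization of the kind discussed above can be sketched as an alternation of per-pixel scaling (removing lighting geometry) and per-channel scaling (removing illuminant color), iterated to a fixed point. This is a simplified reading; the iteration count and the synthetic data are illustrative assumptions.

```python
import numpy as np

def comprehensive_normalize(img, iters=100, eps=1e-12):
    """img: (N, 3) array of positive RGB values. Alternate pixel and channel
    normalization until (approximately) a fixed point is reached."""
    x = img.astype(float)
    for _ in range(iters):
        x = x / (x.sum(axis=1, keepdims=True) + eps)         # per-pixel scaling
        x = x / (3.0 * x.mean(axis=0, keepdims=True) + eps)  # per-channel scaling
    return x

# A scene, and the same scene under per-pixel shading changes and a
# different illuminant color (per-channel scaling).
rng = np.random.default_rng(0)
scene = rng.uniform(10, 255, (50, 3))
distorted = scene * rng.uniform(0.5, 2.0, (50, 1)) * np.array([0.4, 1.0, 1.7])
```

The point of the construction is that both distorted and undistorted versions of a scene converge to the same normalized image, so the representation is invariant to both lighting geometry and illuminant color.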
Paintings convey the composition and characteristics of their artists; it is therefore possible to sense each artist's intended style and emotion through their paintings. In general, the basic elements of traditional paintings are color, texture, and composition (the formative elements are color and shape); color, however, is the element most crucial to expressing a painting's emotion. In particular, traditional colors carry the historicity of their era, so the colors shown in painting images can be considered representative of the culture to which a painting belongs. This study constructed a color emotion system by analyzing colors and rearranging color emotion adjectives based on the color combination techniques and clustering algorithm proposed by Kobayashi, as well as the I.R.I HUE & TONE 120 system. Based on this color emotion system, the study extracted and classified emotions from traditional Korean painting images, focusing on painted images of the late Joseon Dynasty. Moreover, the classified emotion images made it possible to examine the cultural traits of the era.
This paper deals with the hadronization of a quark system. A phenomenological potential is introduced to describe the interaction between quark pairs; the potential depends on the color charges of the quarks and their relative distances. The quarks move according to classical equations of motion. Due to the color interaction, colored quarks separate into color-neutral clusters, which are taken to be the hadrons.
Analysis of eye movement under different visual stimuli has always been one of the major research areas in vision science. An important category of work concerns decoding eye movement in response to variations in the color of visual stimuli. In this research, for the first time, we employ fractal analysis to investigate variations in the complex structure of eye movement time series in response to variations in stimulus color. For this purpose, we presented two different images in three different colors (red, green, and blue) to subjects. Our analysis showed that eye movement has the greatest complexity for the green visual stimulus, while the lowest complexity was observed for the red stimulus. In addition, the results showed that, except for the red stimulus, a visual stimulus with greater complexity causes lower complexity in eye movements. The methodology employed in this research can be further applied to analyze the influence of other stimulus variations on human eye movement.
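The abstract does not name the specific fractal measure used; a common choice for quantifying the complexity of a time series is the Higuchi fractal dimension, sketched here under that assumption (a smooth series yields a dimension near 1, uncorrelated noise near 2).

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi's normalization
            lk.append(dist * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_len.append(np.log(np.mean(lk)))
    # The curve length scales as L(k) ~ k^(-D); the slope of log L(k)
    # against log(1/k) estimates the fractal dimension D.
    slope, _ = np.polyfit(log_inv_k, log_len, 1)
    return float(slope)
```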
This paper presents an intelligent method for retrieving images with Chinese captions from an image database. We combine color, shape, and spatial features to index images and measure their similarity. As a technical contribution, a Seed-Filling-like algorithm is proposed that extracts the shape and spatial-relationship features of images. Because it is difficult to determine how far apart objects are, we use qualitative spatial relations to analyze object similarities. The system also incorporates a visual interface and a set of tools that allow users to express queries conveniently by specifying or sketching images. In addition, a feedback learning mechanism enhances retrieval precision. Our experience shows that the system retrieves image information efficiently with the proposed approaches.
This paper presents a graph-based ordering scheme for color vectors. A complete graph is defined over a filter window, and its structure is analyzed to construct an ordering of the color vectors. This graph-based ordering is obtained by finding a Hamiltonian path across the color vectors of the filter window with a two-step algorithm. The first step extracts, by decimating a minimum spanning tree, the extreme values of the color set; these extreme values are considered the infimum and the supremum of the set of color vectors. The second step builds an ordering by constructing a Hamiltonian path among the color vectors, starting from the infimum and ending at the supremum. The properties of the proposed graph-based ordering are detailed. Several experiments are conducted to assess its filtering abilities for morphological and median filtering.
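A simplified sketch of the two-step ordering: here the extremes are taken as the most distant pair of color vectors (instead of MST decimation), and the Hamiltonian path is built greedily by nearest neighbors, so this only approximates the paper's algorithm.

```python
import numpy as np

def order_colors(colors):
    """colors: (N, 3) array. Return an index ordering along a heuristic
    Hamiltonian path from an 'infimum' color to a 'supremum' color."""
    n = len(colors)
    d = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=2)
    # Step 1 (simplified): extremes = the two most distant vectors.
    inf_i, sup_i = np.unravel_index(np.argmax(d), d.shape)
    # Step 2: greedy nearest-neighbor path from the infimum; the supremum
    # is held back so the path is guaranteed to end there.
    path, visited = [inf_i], {inf_i, sup_i}
    cur = inf_i
    while len(path) < n - 1:
        rest = [j for j in range(n) if j not in visited]
        cur = min(rest, key=lambda j: d[cur, j])
        path.append(cur)
        visited.add(cur)
    path.append(sup_i)
    return path

# Usage: shuffled gray levels should come out in monotone order, and the
# middle of the path gives a vector median for median filtering.
gray = np.array([[v, v, v] for v in [40, 0, 90, 10, 70, 20, 80, 30, 60, 50]],
                dtype=float)
path = order_colors(gray)
ordered = gray[path][:, 0]
```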
In this paper, we present two steps in the process of automatically annotating archeological images: feature extraction and feature selection. We focus on archeological images, which are widely studied today. Feature extraction techniques are applied to obtain the features used to classify and recognize the images, while feature selection reduces the number of uninformative features. We review various feature extraction techniques for analyzing archeological images; each feature corresponds to one or more feature descriptors. We focus on shape descriptors of the archeological objects in the images, using contour-based shape recognition of the monuments. The feature selection stage then serves to retain the most relevant features and thus improve classification accuracy. In the feature selection section, we present a comparative study of feature selection techniques and propose how to apply them to archeological images. Finally, we evaluate the performance of the two steps: feature extraction and feature selection.
Trichromacy is the representation of a light spectrum by three scalar coordinates. Such a representation is universally implemented by the human visual system and by RGB (red, green, blue) cameras. We propose here an informational model for trichromacy. Based on a statistical analysis of the dynamics of individual photons, the model demonstrates that trichromacy can be described as an information channel, for which the input–output mutual information can be computed to serve as a measure of performance. The capabilities and significance of the informational model are illustrated and motivated in various situations. In particular, the model enables an assessment of the influence of the spectral sensitivities of the three types of photodetectors realizing the trichromatic representation. It provides a criterion for optimizing adjustable parameters of the spectral sensitivities, such as their center wavelength, spectral width, or magnitude. The model shows, for instance, the usefulness of some overlap between smooth, graded spectral sensitivities, as observed in the human retina. Starting from hyperspectral images measured in the laboratory at high spectral resolution, the approach can also be used to devise low-cost trichromatic imaging systems optimized for observing specific spectral signatures. This is illustrated with an example from plant science and demonstrates a potential for application, especially in the life sciences. The approach establishes concrete connections between physics, biophysics, and information theory.
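A minimal sketch of the information-channel view, assuming the channel input is the (discretized) wavelength of a single absorbed photon and the output is the identity of the photodetector that absorbs it; the discretization and the toy sensitivity matrices are illustrative, not the paper's model.

```python
import numpy as np

def trichromatic_mi(sens, p_lambda):
    """Mutual information (bits) between the wavelength of an absorbed photon
    and the absorbing detector. sens: (3, W) spectral sensitivities;
    p_lambda: (W,) prior over wavelength bins."""
    p_y_given_x = sens / sens.sum(axis=0, keepdims=True)  # P(detector | wavelength)
    p_y = p_y_given_x @ p_lambda                          # marginal P(detector)
    mi = 0.0
    for y in range(sens.shape[0]):
        for x in range(len(p_lambda)):
            pxy = p_y_given_x[y, x] * p_lambda[x]
            if pxy > 0:
                mi += pxy * np.log2(p_y_given_x[y, x] / p_y[y])
    return mi

# Non-overlapping sensitivities identify the wavelength band perfectly
# (I = log2 3 bits), while fully overlapping ones convey no information.
disjoint = np.eye(3)
overlap = np.ones((3, 3))
uniform = np.full(3, 1 / 3)
```

Maximizing this mutual information over the sensitivity curves is one way to pose the optimization of center wavelength, width, or magnitude mentioned above.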
Recently proposed color-based tracking systems are unable to properly adapt the ellipse that represents the tracked object, which most likely leads to inaccurate descriptions of the object in later applications. This paper presents a Lagrangian-based method for deriving a regularizing component for the covariance matrix. Technically, we aim to reduce the residuals between the estimated probability distribution and the expected one; we argue that, by doing so, the shape of the ellipse can be properly adapted during tracking. Experimental results show that the proposed method performs favorably in shape adaptation and object localization.
Nutrition is an essential component of agriculture worldwide, needed to assure high and consistent crop yields. In rice crops, the leaves frequently present signs of nutritional deficiencies; a nutritional deficiency in the rice plant can be diagnosed from leaf color and form. Image classification is an effective and rapid method for analyzing such conditions. However, despite significant success in image classification, ensemble learning (EL) has remained largely unexplored in paddy nutrition analysis. Ensemble learning is a technique for deliberately constructing and combining numerous classifier models to tackle a specific computational problem. In this work, we investigate the accuracy of several deep learning algorithms in detecting nutritional deficits in rice leaves. Through soil and agricultural studies, around 2000 images of rice plant leaves were collected, covering complete nutrition and five categories of nutrient deficiency. The images were split into training, validation, and testing sets in a 4:2:2 ratio. An EL method is then used for the diagnosis and classification of nutritional deficits: the EL procedure is a hybrid classification model that integrates CapsNET (capsule network) and GCN (graph convolutional network) models. The effectiveness of the hybrid classifier was verified using color and lesion features and compared with standard machine learning techniques. This research shows that EL strategies can effectively detect nutritional deficits in paddy. Furthermore, the proposed hybrid classification model achieved a better accuracy rate, with accuracy, sensitivity, and specificity of 97.13%, 97.22%, and 96.47%, respectively.
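The 4:2:2 split of the roughly 2000 images can be sketched as follows; the shuffling seed and the helper name are assumptions for illustration.

```python
import numpy as np

def split_422(n_samples, seed=0):
    """Shuffle sample indices and split them 4:2:2 into train/validation/test."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    a = n_samples * 4 // 8   # first 50% for training
    b = n_samples * 6 // 8   # next 25% for validation
    return idx[:a], idx[a:b], idx[b:]

# Usage with the ~2000 images mentioned above.
train, val, test_idx = split_422(2000)
```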
The perception of an image in one's mind, filled with vibrant colors, comes so naturally to humans that the complexity of the human visual system is often overlooked. This paper follows the journey of light through the human eye and its interpretation in the brain. The wave properties of light are explored as light propagates through the eye, while the particle properties of light are examined when light is absorbed by the retina. The computations involved in the perception of color are discussed, as well as the birefringent properties of the eye.
This work focuses on the detection and prediction of melanoma via feature extraction: the features of an image containing melanoma regions are detected by combining color- and texture-based feature extraction with a deep convolutional feature representation learning strategy. Color information is extracted by separating images into red, green, and blue channel information. Combining texture-based feature extraction with color-based feature extraction, in addition to AlexNet feature learning, makes the system more robust and efficient for the segmentation and classification of images. The proposed method further convolves the extracted color- and texture-based features, leading to a fully convolutional neural network for image feature extraction. Melanoma is detected and segmented with watershed segmentation, and the segmented regions are subjected to the proposed feature extraction, which combines texture methods with color-based information obtained by analyzing the melanoma image regions. Feature extraction uses Weber-law descriptors in combination with the red, green, and blue channel information from representation learning. The proposed method yields a segmentation accuracy of 94.12% and a classification accuracy of 94.32% in comparison with various other classification techniques.
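The Weber-law component can be sketched via the differential excitation of the Weber local descriptor (WLD), i.e., the arctangent of the summed relative intensity differences in each pixel's 3 × 3 neighborhood; this is a generic WLD sketch, not the paper's exact descriptor.

```python
import numpy as np

def differential_excitation(img):
    """Weber-law descriptor component for a 2-D grayscale (or single-channel)
    image: arctan of the summed relative differences around each inner pixel."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y, x]
            neigh = img[y - 1:y + 2, x - 1:x + 2]
            diff = neigh.sum() - 9 * center           # sum of (x_i - x_c)
            out[y - 1, x - 1] = np.arctan(diff / (center + 1e-6))
    return out

# Usage: a uniform patch produces zero excitation everywhere.
flat_patch = np.full((5, 5), 100.0)
```

Applying this per channel and concatenating with the red, green, and blue channel statistics would give one plausible color-plus-texture feature vector of the kind described.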
The eminent Chinese artist LaoZhu has created a homogeneous set of abstract pictures that are referred to as the “third abstraction.” By definition, these pictures are meant to be representations of the artist’s personal involvement and, as such, to create an internal point of view in the observer on an implicit level of processing. To investigate whether the artist’s choice of a specific color is experienced in a specific way by the recipient, we assessed both explicit and implicit (i.e., neuro-cognitive) correlates in naive viewers of LaoZhu’s pieces. The behavioral results reveal a preference for the original red paintings over color-changed counterparts in green or black. Paradoxically, and inconsistent with predictions, we found higher levels of neural activation in several brain regions (predominantly in the frontal and parietal cortices) for the color-changed compared with the original red conditions. These observations add empirical support to the complementarity of early visual pathways and higher-order cognition, as well as of explicit and implicit information processing, during aesthetic appreciation. We discuss our findings in light of the processing effort and top-down control demanded by the color-changed paintings. With regard to the third abstraction as defined by LaoZhu, in particular the distinction between an external and an internal point of view when viewing abstract art, our results contribute to an understanding of “abstraction and empathy” as a fundamental part of aesthetic appreciation.
Colors are critical for understanding the emotional aspect of the human artistic mind, such as that found in painting a landscape, still life, or portrait. First, we report how single colors are memorized in the brain; second, how pairs of colors harmonize in the dissociated brain under the influence of the emotional brain; and third, how colored paintings are appreciated as beautiful or ugly in dissociated brain areas led by the intrinsic reward system of the human brain. The orbitofrontal cortex is probably one of the vital brain areas underlying a value-based reward system that makes a unique contribution to emotional neuroaesthetics.