Nutrition is an essential component of agriculture worldwide, ensuring high and consistent crop yields. In rice crops, the leaves frequently present signs of nutritional deficiency, so a deficiency in the rice plant can be diagnosed from leaf color and shape. Image classification is an effective and rapid method for analyzing such conditions. However, despite significant success in image classification, Ensemble Learning (EL) has remained largely unexplored in paddy nutrition analysis. Ensemble learning is a technique that deliberately constructs and combines multiple classifier models to tackle a specific computational problem. In this work, we investigate the precision of several deep learning algorithms in detecting nutritional deficits in rice leaves. Through soil and agricultural studies, around 2000 images of rice plant leaves were collected, covering complete nutrition and five classes of nutrient deficiency. The images were split in the ratio 4:2:2 among the training, validation, and testing phases. An EL method was then chosen for the diagnosis and classification of nutritional deficits: the EL procedure is a hybrid classification model that integrates a CapsNet (Capsule Network) and a GCN (Graph Convolutional Network). The effectiveness of the hybrid classifier was verified on color and lesion features and compared with standard machine learning techniques. This research shows that EL strategies can effectively detect nutritional deficits in paddy. Furthermore, the suggested hybrid classification model achieved better accuracy, sensitivity, and specificity rates of 97.13%, 97.22%, and 96.47%, respectively.
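As a minimal illustration of the abstract's 4:2:2 partition of roughly 2000 leaf images into training, validation, and testing sets, the following sketch splits a shuffled file list in that ratio. The file names and the helper function are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of a 4:2:2 train/validation/test split (illustrative, not the
# authors' code). File names below are placeholders.
import random

def split_4_2_2(items, seed=0):
    """Shuffle items and split them 4:2:2 into train, validation, and test."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = n * 4 // 8  # 4 parts out of 8
    n_val = n * 2 // 8    # 2 parts out of 8
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

images = [f"leaf_{i:04d}.jpg" for i in range(2000)]
train, val, test = split_4_2_2(images)
print(len(train), len(val), len(test))  # 1000 500 500
```

A fixed seed keeps the partition reproducible across runs, which matters when comparing classifiers on the same held-out test set.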
The use of color in computer vision has received growing attention. This chapter introduces the basic principles underlying the physics and perception of color and reviews the state-of-the-art in color vision algorithms. Parts of this chapter have been condensed from [58] while new material has been included which provides a critical review of recent work. In particular, research in the areas of color constancy and color segmentation is reviewed in detail.
The first section reviews physical models for color image formation as well as models for human color perception. Reflection models characterize the relationship between a surface, the illumination environment, and the resulting color image. Physically motivated linear models are used to approximate functions of wavelength using a small number of parameters. Reflection models and linear models are introduced in Section 1 and play an important role in several of the color constancy and color segmentation algorithms presented in Sections 2 and 3. For completeness, we also present a concise summary of the trichromatic theory which models human color perception. A discussion is given of color matching experiments and the CIE color representation system. These models are important for a wide range of applications including the consistent representation of color on different devices. Section 1 concludes with a description of the most widely used color spaces and their properties.
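The linear-model idea mentioned above, approximating a function of wavelength with a small number of parameters, can be sketched as a least-squares fit against a few fixed basis functions. The two-term basis (constant plus linear-in-wavelength) and the sample reflectance below are illustrative assumptions, not a specific model from the chapter.

```python
# Sketch of a linear model of reflectance: fit values ≈ c0·1 + c1·λ by
# least squares (2x2 normal equations). Basis and data are illustrative.

def fit_two_basis(wavelengths, values):
    """Return (c0, c1) minimizing sum((c0 + c1*w - v)^2)."""
    n = len(wavelengths)
    sx = sum(wavelengths)
    sxx = sum(w * w for w in wavelengths)
    sy = sum(values)
    sxy = sum(w * v for w, v in zip(wavelengths, values))
    det = n * sxx - sx * sx
    c0 = (sy * sxx - sx * sxy) / det
    c1 = (n * sxy - sx * sy) / det
    return c0, c1

# A reflectance that is exactly linear in wavelength is recovered exactly.
lams = [400, 450, 500, 550, 600, 650, 700]  # nm
refl = [0.001 * lam - 0.2 for lam in lams]
c0, c1 = fit_two_basis(lams, refl)
print(round(c0, 6), round(c1, 6))  # -0.2 0.001
```

Real linear models use empirically derived basis functions (and typically three or more of them), but the parameter-estimation step has this same least-squares shape.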
The second section considers progress on computational approaches to color constancy. Human vision exhibits color constancy: the ability to perceive stable surface colors for a given object under a wide range of illumination conditions and scene configurations. A similar ability is required if computer vision systems are to recognize objects in uncontrolled environments. We begin by reviewing the properties and limitations of the early retinex approach to color constancy, then describe in detail the families of linear-model and highlight-based algorithms that followed. Section 2 concludes with a subsection on recent indexing methods which integrate color constancy with the higher-level recognition process.
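One of the simplest corrections in this spirit, offered here only as an illustration of the per-channel rescaling common to many color constancy algorithms (it is the classic gray-world assumption, not an algorithm detailed in this section), scales each channel so the corrected image averages to gray:

```python
# Gray-world sketch: assume the scene's average reflectance is achromatic,
# so any channel imbalance in the mean is attributed to the illuminant.

def gray_world(pixels):
    """pixels: list of (r, g, b) floats with nonzero channel means."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m for m in means]  # diagonal (per-channel) correction
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# Illustrative scene under a reddish illuminant: red uniformly boosted.
scene = [(0.8, 0.4, 0.4), (0.4, 0.2, 0.2), (0.6, 0.3, 0.3)]
corrected = gray_world(scene)
means = [sum(p[c] for p in corrected) / len(corrected) for c in range(3)]
print([round(m, 3) for m in means])  # [0.4, 0.4, 0.4]
```

The gray-world assumption fails on scenes dominated by one color, which is precisely the kind of limitation the more sophisticated algorithms surveyed here are designed to address.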
Section 3 addresses the use of color for image segmentation and stresses the role of image models. We start by presenting classical statistical approaches to segmentation which have been generalized to include color. The more recent emphasis on the use of physical models for segmentation has led to new classes of algorithms which enable the accurate segmentation of effects such as shadows, highlights, shading, and interreflection. Such effects are often a source of error for algorithms based on classical statistical models. Finally, we describe a color texture model which has been used successfully as the basis of an algorithm for segmenting images of natural outdoor scenes.
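As a toy instance of the classical statistical approach generalized to color, the sketch below clusters pixel colors with k-means. The tiny synthetic "image" and k = 2 are illustrative assumptions; the physics-based segmenters discussed above add reflection models and spatial coherence on top of this kind of baseline.

```python
# k-means clustering of pixel colors: the simplest statistical color
# segmentation. Data and parameters are illustrative.
import random

def kmeans_colors(pixels, k=2, iters=20, seed=1):
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)
    for _ in range(iters):
        # assign each pixel to its nearest center (squared RGB distance)
        labels = [min(range(k),
                      key=lambda j: sum((p[c] - centers[j][c]) ** 2
                                        for c in range(3)))
                  for p in pixels]
        # recompute each center as the mean of its assigned pixels
        for j in range(k):
            members = [p for p, l in zip(pixels, labels) if l == j]
            if members:
                centers[j] = tuple(sum(p[c] for p in members) / len(members)
                                   for c in range(3))
    return labels, centers

# Two color populations: dark green foliage vs. yellowish background.
pixels = [(0.10, 0.50, 0.10), (0.12, 0.48, 0.10),
          (0.90, 0.85, 0.20), (0.88, 0.80, 0.22)]
labels, centers = kmeans_colors(pixels)
print(labels)  # the first two pixels share one label, the last two the other
```

Because this model treats every pixel independently, a shadow or highlight splits one surface into several clusters, which is exactly the failure mode the physics-based algorithms in this section avoid.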
The use of color in computer vision has received growing attention. This chapter surveys the state of the art in this subfield and tries to answer the questions: What is color? What are the adequate representations? How is it computed? What can be done with it?
The first section introduces some basic tools and models that can be used to describe the color imaging process. We first summarize the classical photometric and colorimetric notions: light measurement, the intensity equation, the color signal, color perception, and trichromatic theory. The growing interest in color over the last few years stems from two new classes of reflection models, physical models and linear models, which have led to highlight algorithms as well as color constancy algorithms. We present these models in detail and discuss some of their limitations.
The second section deals with the problem of color constancy. The term “color constancy” refers to the fact that the colors humans perceive in real scenes remain relatively stable under large variations in illumination and in the material composition of the scene. From a computational standpoint, achieving color constancy is an underdetermined problem: recovering spectral reflectance from the sensor measurements. We compare three classes of color constancy algorithms, based respectively on lightness computation, linear models, and physical models. For each class, the principle is explained and one or two significant algorithms are described in detail; a comparative study introduces the others.
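A minimal sketch related to the lightness-computation family is the white-patch (max-RGB) correction, which assumes the brightest response in each channel comes from a white surface. The data are illustrative assumptions, and this is a textbook baseline rather than one of the specific algorithms compared in this section.

```python
# White-patch (max-RGB) sketch: normalize each channel by its maximum,
# so a white surface maps back to (1, 1, 1). Illustrative data only.

def white_patch(pixels):
    """Scale each channel so its maximum response becomes 1.0."""
    maxes = [max(p[c] for p in pixels) for c in range(3)]
    return [tuple(p[c] / maxes[c] for c in range(3)) for p in pixels]

# Under a bluish illuminant, a white card reads (0.5, 0.6, 1.0).
scene = [(0.5, 0.6, 1.0), (0.25, 0.3, 0.5), (0.1, 0.12, 0.2)]
corrected = white_patch(scene)
print(corrected[0])  # (1.0, 1.0, 1.0)
```

Like the underdetermined problem described above, this correction only recovers reflectance up to the validity of its assumption: if no white surface is present, the brightest pixel biases the estimate.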
The third section is concerned with the use of color in universal, i.e. mainly low-level, vision tasks. We emphasize the distinction between tasks that have been extensively studied in monochromatic images, for which the contribution of color is just a quantitative generalization, and tasks in which color plays a qualitative role. In the former case, additional image features are obtained and must be represented and used efficiently. In the latter case, it is hoped that color can help recover intrinsic physical properties of scenes. We successively study three important themes in computer vision: edges, segmentation, and matching. For each of them, we present both frameworks for the use of color.
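The "quantitative generalization" case can be sketched for edge detection: sum per-channel squared differences to obtain a color edge strength, the simplest extension of a monochrome gradient to three channels. The one-row "image" below is an illustrative assumption chosen to show an isoluminant edge that a grayscale gradient would miss.

```python
# Color edge strength as the root of summed per-channel squared
# differences along a row of pixels. Illustrative sketch only.

def color_edge_strength(row):
    """row: list of (r, g, b). Returns edge strength between neighbors."""
    strengths = []
    for i in range(1, len(row)):
        d2 = sum((row[i][c] - row[i - 1][c]) ** 2 for c in range(3))
        strengths.append(d2 ** 0.5)
    return strengths

# An isoluminant edge: total intensity is constant, only chromaticity
# changes, so a grayscale detector sees nothing here.
row = [(0.6, 0.2, 0.2), (0.6, 0.2, 0.2), (0.2, 0.6, 0.2), (0.2, 0.6, 0.2)]
print([round(s, 3) for s in color_edge_strength(row)])  # [0.0, 0.566, 0.0]
```

This is exactly the sense in which color adds features quantitatively: the operator is unchanged in form, but responds to chromatic boundaries invisible in intensity alone.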