To provide an effective approach to product modeling design, a product modeling design method based on the fusion of texture and shape features with computer technology is proposed. Starting from the product modeling design drawing, the image texture gray-level co-occurrence matrix is constructed and the texture primitives are extracted, yielding the energy, inertia, entropy, and evenness statistics that describe the image texture characteristics. The OHTA color model is employed to segment the product shape from the background of the design drawing, while the Fourier descriptor is used to obtain the shape features. Given the texture and shape features required for product modeling design, the needed image is retrieved from the image database by computing the similarity between these features and the feature vectors of the database images. Using the retrieved image as input, the framework of the product modeling design virtual environment is first established, and the product modeling design is then implemented within this environment using Rhino 3D software. Experiments show that the texture and shape features extracted by this method are more accurate and can effectively retrieve the images needed for product modeling design from the image database; the product modeling design realized on this basis achieves a remarkable application effect.
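The four GLCM statistics named above can be sketched as follows (a minimal illustration, not the paper's implementation: the pixel offset, the number of gray levels, and the exact "evenness" formula, here taken as homogeneity, are assumptions):

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized."""
    img = (image.astype(float) / image.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_stats(p):
    """Energy, inertia (contrast), entropy, and evenness (homogeneity)."""
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)
    inertia = np.sum((i - j) ** 2 * p)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    evenness = np.sum(p / (1 + (i - j) ** 2))
    return energy, inertia, entropy, evenness
```

In practice several offsets (and angles) are accumulated; a single horizontal offset is used here only to keep the sketch short.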
Content-Based Image Retrieval (CBIR) is a broad research field in the current digital world. This paper focuses on content-based image retrieval using visual properties that carry high-level semantic information. The discrepancy between low-level and high-level features is known as the semantic gap, which is the biggest problem in CBIR. Visual characteristics are extracted from low-level features such as color, texture, and shape, and these low-level features raise the performance level of CBIR. The paper mainly focuses on an image retrieval system that combines three color spaces (TriCLR: RGB, YCbCr, and L∗a∗b∗) with a histogram of LBP texture features (HistLBP); the hybrid is referred to as TriCLR and HistLBP. The study also discusses this hybrid method in light of low-level features. Finally, the hybrid approach, using the TriCLR and HistLBP algorithm, provides a new solution to the CBIR system that outperforms existing methods.
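The HistLBP descriptor mentioned above can be sketched with the basic 3x3 LBP operator (a minimal illustration under standard LBP conventions, not the paper's exact variant):

```python
import numpy as np

def lbp_histogram(image):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center pixel,
    pack the results into an 8-bit code, and return a normalized 256-bin
    histogram (the HistLBP texture descriptor)."""
    img = image.astype(int)
    c = img[1:-1, 1:-1]
    # neighbor offsets, clockwise from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (nb >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The hybrid descriptor would then concatenate this histogram with the color features from the three color spaces.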
The usage of biomedical imaging in the diagnosis of dementia is increasingly widespread. A number of works explore the possibilities of computational techniques and algorithms in what is called computer-aided diagnosis. Our work presents an automatic parametrization of the brain structure by means of a path generation algorithm based on hidden Markov models (HMMs). The path is traced using intensity and spatial orientation information at each node, adapting to the structure of the brain. Each path is itself a useful way to characterize the distribution of tissue inside the magnetic resonance imaging (MRI) image, for example by extracting the intensity levels at each node or generating statistical information on the tissue distribution. Additionally, further processing consisting of a modification of the grey level co-occurrence matrix (GLCM) can be used to characterize the textural changes that occur along the path, yielding more meaningful values that can be associated with Alzheimer’s disease (AD), as well as providing a significant feature reduction. This methodology achieves moderate performance, up to 80.3% accuracy using a single path in differential diagnosis of Alzheimer-affected subjects versus controls from the Alzheimer’s disease neuroimaging initiative (ADNI).
In generating HDR images from LDR image sequences captured in the same scene with different exposures, the fitted camera response curve can suffer from blurred boundaries, and the accuracy of the fitted curve is difficult to determine and verify. To address this problem, a new method for fitting camera response curves is proposed. The optimal response curve is fitted by adding LDR images step by step while considering both pixel values and texture characteristics. To validate the fitting effect, we compare photographed images with real images at different time intervals on the basis of the HDR images and response curves. Experiments on RGB and grayscale images against current mainstream algorithms show that the accuracy of our proposed algorithm can reach 96% while remaining robust.
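The merge step that follows response-curve fitting can be sketched in the Debevec-style form below. This is not the paper's fitting algorithm: it assumes the response curve is already known (and linear), and the hat weighting and function names are illustrative only.

```python
import numpy as np

def fuse_ldr(stack, exposures):
    """Merge an LDR exposure stack (shape: n_images x H x W, values 0-255)
    into a radiance map, assuming a known linear camera response.
    Each pixel's log radiance is a hat-weighted average of
    log(pixel) - log(exposure time), so well-exposed samples dominate."""
    stack = np.asarray(stack, dtype=float)
    w = 1.0 - np.abs(stack / 255.0 - 0.5) * 2.0   # hat weight in [0, 1]
    w = np.clip(w, 1e-6, None)                    # avoid all-zero weights
    log_e = np.log(np.clip(stack, 1, None)) - np.log(exposures)[:, None, None]
    return np.exp((w * log_e).sum(axis=0) / w.sum(axis=0))
```

With a nonlinear camera, `np.log(...)` on the pixel values would be replaced by the fitted inverse response curve, which is exactly what the proposed method estimates.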
At present, artificial-intelligence-based techniques are among the most prominent ways to classify images and can conveniently be leveraged in real-world scenarios. This technology can be extremely beneficial to lepidopterists, assisting them in classifying the diverse species of Rhopalocera, commonly called butterflies. In this article, image classification is performed on a dataset of various butterfly species, using the feature extraction of a Convolutional Neural Network (CNN) along with additional features calculated independently to train the model. The classification models deployed for this purpose include K-Nearest Neighbors (KNN), Random Forest, and Support Vector Machine (SVM). However, each of these methods tends to focus on one specific class of features. Therefore, an ensemble of multiple classes of features is used for image classification. This paper discusses the results achieved by classifying on the basis of two classes of features, i.e., structure and texture. The amalgamation of the two classes of features forms a combined dataset, which is then used to train a Growing Convolutional Neural Network (GCNN), resulting in higher classification accuracy. The experiment yielded promising outcomes, with TP rate, FP rate, precision, recall, and F-measure values of 0.9690, 0.0034, 0.9889, 0.9692, and 0.9686 respectively. Furthermore, the proposed methodology achieved an accuracy of 96.98%.
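The idea of combining structure and texture features before classification can be sketched with a concatenated feature vector and a minimal KNN vote (a hypothetical stand-in for one of the classifiers above; the feature values and function names are illustrative):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Minimal k-nearest-neighbour majority vote on combined feature
    vectors (structure features concatenated with texture features)."""
    d = np.linalg.norm(train_X - query, axis=1)   # Euclidean distances
    votes = train_y[np.argsort(d)[:k]]            # labels of k nearest
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

# Concatenation is the whole "amalgamation" step: each image's structure
# and texture descriptors become one longer vector.
structure = np.array([[0.0], [0.1], [5.0], [5.1]])
texture = np.array([[0.0], [0.2], [4.0], [4.2]])
X = np.hstack([structure, texture])
y = np.array([0, 0, 1, 1])
```

The GCNN in the paper learns from the same combined representation rather than voting on raw distances.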
Identifying the script in a document page image is the first step for an OCR system that processes multi-script documents. In a multilingual/multi-script world, document processing systems that rely on human involvement to select the appropriate OCR package are clearly undesirable and inefficient. Developing robust and efficient methods for automatic script identification is therefore of major importance for automatic document processing in a multilingual/multi-script environment. The basic objective is to devise intuitive methods with straightforward implementations that do not compromise efficiency. The aim of this work is to evaluate state-of-the-art feature extraction and classification techniques for automatic script identification in printed and handwritten documents and to propose the best combination of them.
Context: With the change and advancement of technology, internet service usage is increasing day by day. Smartphones have become a necessity for everyone and are used for all basic daily activities such as calling, SMS, banking, gaming, entertainment, and education. Consequently, malware authors are developing new variants of malware or malicious applications, especially for monetary benefit.
Objective: The objective of this paper is to develop a technique for detecting malware or malicious applications on Android devices that works for all types of packed or encrypted malicious applications, which usually evade decompiling tools.
Method: In the proposed approach, a visualization method is used for malware detection. In the first phase, application files are converted into images; in the second phase, texture features of the images are extracted using the Grey Level Co-occurrence Matrix (GLCM). In the last phase, machine learning classification algorithms are used to classify applications as malicious or benign.
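The first phase, converting an application file into an image, is commonly done by treating each byte as one grayscale pixel. A minimal sketch of that conversion (the fixed row width and zero padding are assumptions, not the paper's stated parameters):

```python
import numpy as np

def bytes_to_image(data, width=256):
    """Visualize a binary as a grayscale image: each byte becomes one
    pixel, laid out in rows of fixed width, padding the tail with zeros.
    The GLCM texture features are then computed on this image."""
    arr = np.frombuffer(data, dtype=np.uint8)
    rows = -(-len(arr) // width)                  # ceiling division
    img = np.zeros(rows * width, dtype=np.uint8)
    img[:len(arr)] = arr
    return img.reshape(rows, width)
```

Because this operates on the raw bytes, it needs no decompilation, which is why the approach still works on packed or encrypted applications.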
Results: The proposed approach is run on different datasets collected from various repositories. Different efficiency parameters are calculated, and the proposed approach is compared with existing approaches.
Conclusion: We have proposed a static technique for efficient malware detection. The proposed technique performs better than existing techniques.
During postharvest handling in horticulture, fruit grading is significant because it determines consumer satisfaction and preference when the fruit reaches the market. Manual fruit grading is expensive, and inaccurate classification may occur due to human error; there is thus a need for automatic fruit grading using a non-destructive process. This research proposes an efficient fruit grading technique using partial least squares-discriminant analysis (PLS-DA) based on texture features. Features such as the local binary pattern (LBP), local directional pattern (LDP), local optimal oriented pattern (LOOP), local gradient pattern (LGP), and local ternary pattern (LTP) are extracted for fruit classification. Feature extraction based on these texture features is applied after pre-processing the multi-spectral input image. From the extracted features, fruits such as apple, banana, pomegranate, and mango are classified. Then, parameters such as firmness, soluble solids concentration (SSC), and titratable acidity (TAC) are evaluated for fruit quality grading using the PLS-DA technique. The proposed method is evaluated in terms of accuracy, error, R2, residual predictive deviation (RPD), sensitivity, and specificity, obtaining values of 95.67%, 4.33, 92.97%, 0.05, 95.66%, and 95.17% respectively.
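Of the texture operators listed, the local ternary pattern can be sketched as follows (a minimal 3x3 illustration under the standard LTP convention of splitting the ternary code into "upper" and "lower" binary codes; the threshold value is an assumption):

```python
import numpy as np

def ltp_codes(image, t=5):
    """Local Ternary Pattern sketch: neighbors more than t above the
    center map to +1, more than t below to -1, else 0; the ternary
    pattern is split into the conventional upper (+1) and lower (-1)
    binary codes."""
    img = image.astype(int)
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper += (nb > c + t).astype(int) << bit
        lower += (nb < c - t).astype(int) << bit
    return upper, lower
```

Histograms of the upper and lower codes (like the HistLBP idea) would then feed the PLS-DA classifier alongside the other texture descriptors.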
Surgical excision is an effective treatment for oral squamous cell carcinoma (OSCC), but exact intraoperative differentiation of OSCC from normal tissue is the first premise. As a noninvasive imaging technique, optical coherence tomography (OCT) has nearly the same resolution as histopathological examination, and its images contain rich information to assist surgeons in making clinical decisions. In this paper, we extracted several kinds of texture features from OCT images obtained by a home-made swept-source OCT system and studied the identification of OSCC based on different combinations of texture features and machine learning classifiers. Different combinations were demonstrated to have different accuracies; the combination of the gray level co-occurrence matrix (GLCM), Laws’ texture measures (LM), and center-symmetric auto-correlation (CSAC) texture features with an SVM classifier had the best overall identification performance, with an accuracy of 94.1%. This proves that it is feasible to distinguish OSCC based on texture features in OCT images, with great potential for helping surgeons make rapid and accurate decisions in oral clinical practice.
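Laws' texture measures, one of the feature families above, can be sketched by building 2D masks from the classic 1D kernels and taking the mean absolute filter response as a texture energy (a minimal illustration using three of the five standard kernels; the paper's exact mask set and energy normalization are not specified here):

```python
import numpy as np

def laws_energies(image):
    """Laws' texture measures sketch: 2D masks are outer products of the
    1D L5 (level), E5 (edge), and S5 (spot) kernels; each mask filters
    the image and the mean absolute response is its texture energy."""
    L5 = np.array([1, 4, 6, 4, 1], float)
    E5 = np.array([-1, -2, 0, 2, 1], float)
    S5 = np.array([-1, 0, 2, 0, -1], float)
    kernels = {'L5': L5, 'E5': E5, 'S5': S5}
    img = image.astype(float)
    h, w = img.shape
    energies = {}
    for na, a in kernels.items():
        for nb, b in kernels.items():
            mask = np.outer(a, b)
            # 'valid' correlation with the 5x5 mask
            resp = np.array([[np.sum(img[y:y + 5, x:x + 5] * mask)
                              for x in range(w - 4)] for y in range(h - 4)])
            energies[na + nb] = np.abs(resp).mean()
    return energies
```

On a flat region only the L5L5 energy is nonzero, since E5 and S5 sum to zero; the other energies respond to edges and spots, which is what makes the measures useful for tissue texture.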
Content-based image retrieval systems allow the user to interactively search image databases for images similar to a specified query image. Similarity between images is assessed by computing similarity between feature vectors; the features are represented in vector form and are often combined. This paper explores a novel wavelet approach to image retrieval based on a combination of color, texture, and wavelet moment features. Color moments are used as color features and increase the precision of the retrieval process. Texture features are described by the mean, variance, and energy of the wavelet decomposition coefficients in selected subbands. For describing shapes, we propose a method that builds the feature vector from a set of wavelet moment invariants.
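The color-moment part of such a feature vector can be sketched as the standard first three moments per channel (a minimal illustration; the paper's exact moment set is not specified here):

```python
import numpy as np

def color_moments(image):
    """Color-moment descriptor: per-channel mean, standard deviation, and
    skewness (signed cube root of the third central moment), giving a
    9-dimensional color feature vector for an RGB image."""
    feats = []
    for ch in range(image.shape[2]):
        x = image[..., ch].astype(float).ravel()
        mu = x.mean()
        sd = x.std()
        m3 = ((x - mu) ** 3).mean()
        skew = np.sign(m3) * abs(m3) ** (1 / 3)
        feats.extend([mu, sd, skew])
    return np.array(feats)
```

Concatenating this vector with the wavelet texture statistics and wavelet moment invariants yields the combined descriptor against which query similarity is computed.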