The assessment of the skin surface is of great importance in the dermocosmetic field for evaluating the response of individuals to medical or cosmetic treatments. In vivo quantitative measurement of changes in skin topographic structures, made possible by noninvasive devices, provides a valuable tool. However, the high cost of the systems commonly employed limits, in practice, the widespread use of these devices in routine work. In this work we summarize the research activity carried out to develop a compact, low-cost system for skin surface assessment based on capacitive image analysis. The accuracy of the capacitive measurements has been assessed by implementing an image fusion algorithm that enables a comparison between capacitive images and those obtained with high-cost profilometry, the most accurate method in the field. Very encouraging results have been achieved in the measurement of wrinkle width. On the other hand, the experiments expose the native design limitations of the capacitive device, originally conceived for fingerprints, when measuring wrinkle depth; these limitations point toward a specific redesign of the capacitive device.
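As a concrete illustration of the kind of width measurement involved, the following is a minimal sketch (not the system's actual fusion algorithm) that estimates the width of a single wrinkle from a one-dimensional grey-level profile taken across it; it assumes the capacitive image has already been co-registered with the reference and assumes a hypothetical 50 µm pixel pitch.

```python
import numpy as np

def wrinkle_width(profile, pixel_pitch_um):
    """Estimate the width of a single wrinkle from a 1-D grey-level profile
    taken perpendicular to it (darker pixels = deeper furrow).

    The width is taken at half depth between the surrounding baseline and
    the minimum of the profile (a generic FWHM-style estimate)."""
    profile = np.asarray(profile, dtype=float)
    baseline = np.median(profile)            # surrounding skin level
    trough = profile.min()                   # centre of the furrow
    half_level = (baseline + trough) / 2.0
    below = profile < half_level             # samples inside the furrow
    if not below.any():
        return 0.0
    idx = np.flatnonzero(below)
    return (idx[-1] - idx[0] + 1) * pixel_pitch_um

# Example: a synthetic furrow sampled at an assumed 50 µm pixel pitch.
x = np.linspace(-1.0, 1.0, 41)
profile = 200 - 120 * np.exp(-(x / 0.25) ** 2)
print(f"estimated width: {wrinkle_width(profile, 50.0):.0f} µm")
```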
This paper describes an efficient biometric writer identification system that can be used in a resource-constrained embedded environment. Writer identification (personal identification from general handwriting) is a relatively new area of handwriting research compared with handwriting recognition or signature verification. This work explores only small-scale handwriting samples, especially single handwritten words. A database of such samples has been collected dynamically using a digital writing tablet and a force-sensitive pen. The dynamic approach adopted here is largely new, and only a few papers exist on the subject. Mainly dynamic features of handwriting connected with the writing process itself are considered, although some static features are also used. It is also shown how simple parameters can be used in embedded writer identification devices and whether they can perform adequately in this type of application. A new feature selection algorithm based on likeness coefficients is proposed. The classifiers used are a minimum-distance classifier, a Bayes classifier, and finally their serial combination. The efficiency of this approach and its high performance together justify employing it in embedded biometric identification devices.
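For illustration, the sketch below shows one plausible form of such a serial combination: a minimum-distance classifier shortlists the nearest writers and a diagonal-Gaussian Bayes classifier decides among them. The toy data, the shortlist size and the diagonal-covariance assumption are illustrative choices, not details taken from the paper.

```python
import numpy as np

class MinimumDistanceClassifier:
    """Assigns a sample to the writer whose feature mean is nearest."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def distances(self, x):
        return np.linalg.norm(self.means_ - x, axis=1)

class DiagonalGaussianBayes:
    """Bayes classifier with per-class diagonal Gaussian densities and equal priors."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes_])
        return self
    def log_likelihood(self, x):
        return -0.5 * np.sum(np.log(2 * np.pi * self.vars_)
                             + (x - self.means_) ** 2 / self.vars_, axis=1)

def serial_identify(x, mdc, bayes, shortlist=3):
    """Stage 1: the minimum-distance classifier keeps the `shortlist` nearest
    writers.  Stage 2: the Bayes classifier decides among them."""
    candidates = np.argsort(mdc.distances(x))[:shortlist]
    scores = bayes.log_likelihood(x)
    best = candidates[np.argmax(scores[candidates])]
    return bayes.classes_[best]

# Toy data: 4 writers, 20 word samples each, 6 dynamic features per word.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=w, scale=0.8, size=(20, 6)) for w in range(4)])
y = np.repeat(np.arange(4), 20)
mdc, bayes = MinimumDistanceClassifier().fit(X, y), DiagonalGaussianBayes().fit(X, y)
print(serial_identify(X[45], mdc, bayes))   # query sample taken from writer 2
```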
Fractal image compression is one of the most promising techniques for image compression, owing to advantages such as resolution independence and fast decompression. It exploits the self-similarity present in natural scenes to remove redundancy and obtain high compression rates with less quality degradation than traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, caused by the need to search for regions of high similarity within the image. Several approaches have been developed to reduce the computational cost of locating similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding. Robust features provide more discriminative and representative information about regions of the image; when regions are better represented, the search for similar parts of the image can be restricted to the most likely matching candidates, which leads to a reduction in computational time. Our experimental results show that the use of robust feature descriptors reduces the encoding time while maintaining high compression rates and reconstruction quality.
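The sketch below illustrates the descriptor-guided search idea with a deliberately simple block descriptor (mean, standard deviation and gradient energy) standing in for the robust descriptors used in the paper; the spatial contraction of domain blocks that full fractal encoding requires is omitted for brevity.

```python
import numpy as np

def block_feature(block):
    """A lightweight descriptor for an image block: mean, standard deviation
    and horizontal/vertical gradient energy (a simple stand-in for robust
    feature descriptors)."""
    gy, gx = np.gradient(block.astype(float))
    return np.array([block.mean(), block.std(), np.abs(gx).mean(), np.abs(gy).mean()])

def best_domain_for_range(range_block, domain_blocks, features, k=8):
    """Shortlist the k domain blocks whose descriptors are closest to the
    range block's descriptor, then run the full least-squares contrast/
    brightness fit only on that shortlist instead of on every domain block."""
    f_r = block_feature(range_block)
    shortlist = np.argsort(np.linalg.norm(features - f_r, axis=1))[:k]
    best, best_err = None, np.inf
    r = range_block.astype(float).ravel()
    for idx in shortlist:
        d = domain_blocks[idx].astype(float).ravel()
        A = np.vstack([d, np.ones_like(d)]).T        # fit r ≈ s*d + o
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        err = np.sum((s * d + o - r) ** 2)
        if err < best_err:
            best, best_err = (idx, s, o), err
    return best, best_err

# Toy image; 4x4 domain blocks on an 8-pixel grid, descriptors precomputed once.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))
domains = [img[i:i+4, j:j+4] for i in range(0, 60, 8) for j in range(0, 60, 8)]
feats = np.array([block_feature(b) for b in domains])
match, err = best_domain_for_range(img[10:14, 10:14], domains, feats)
print("best domain (index, contrast, brightness):", match, "error:", err)
```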
Genetics is the clinical study of congenital mutation; the principal benefit of analysing genetic mutations in humans is the exploration, analysis, interpretation and description of the inherited, genetically transmitted effects of several diseases such as cancer, diabetes and heart disease. Cancer is among the most troublesome of these disorders, as the proportion of cancer sufferers is growing massively. Distinguishing the mutations that contribute to tumour growth from neutral mutations is difficult, since most cancerous tumours carry many genetic mutations. Genetic mutations are therefore organized and categorized to characterize the cancer through medical observation and clinical studies. At present, genetic mutations are annotated either manually or with existing basic algorithms, and the evaluation and classification of each individual mutation is essentially based on evidence drawn from the medical literature. Consequently, classifying genetic mutations on the basis of clinical evidence remains a challenging task. Several feature extraction techniques are applied: one-hot encoding is used to derive features from genes and their variations, and TF-IDF is used to extract features from the clinical text. To increase classification accuracy, machine learning algorithms such as support vector machines, logistic regression and naive Bayes are evaluated, and a stacking model classifier is developed to improve the accuracy further. The proposed stacking model classifier obtained log losses of 0.8436 and 0.8572 on the cross-validation and test data sets, respectively. The experiments show that the proposed stacking model classifier outperforms the existing algorithms in terms of multi-class log loss, the measure used to gauge performance; a lower log loss indicates a more efficient model, and here it has been reduced to below 1.
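A minimal sketch of such a pipeline, using scikit-learn and entirely synthetic stand-in data, might look as follows; the base learners, the toy corpus and the fold count are illustrative assumptions, not the configuration reported in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for the real corpus: gene/variation identifiers plus a short
# snippet of clinical text, labelled with one of three mutation classes.
rng = np.random.default_rng(0)
phrases = {0: "benign silent polymorphism", 1: "activating oncogenic driver",
           2: "loss of function truncating"}
rows = [{"gene": f"G{rng.integers(5)}", "variation": f"V{rng.integers(8)}",
         "text": phrases[c] + " variant reported in the literature", "label": c}
        for c in (0, 1, 2) for _ in range(12)]
df = pd.DataFrame(rows)

features = ColumnTransformer([
    ("onehot", OneHotEncoder(handle_unknown="ignore"), ["gene", "variation"]),
    ("tfidf", TfidfVectorizer(), "text"),            # TF-IDF on the clinical text
])
stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)), ("nb", MultinomialNB())],
    final_estimator=LogisticRegression(max_iter=1000), cv=3)
model = Pipeline([("features", features), ("stack", stack)])

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="label"), df["label"],
                                          test_size=0.33, stratify=df["label"],
                                          random_state=0)
model.fit(X_tr, y_tr)
print("multi-class log loss on the toy test split:",
      log_loss(y_te, model.predict_proba(X_te)))
```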
To address the shortcomings of the traditional Feature-Oriented Analysis Approach when modelling under a service-oriented architecture (SOA), and to give SOA system development greater reusability and flexibility, this paper improves the Feature-Oriented Analysis Approach. It introduces the concept of a service feature and improves the refinement and interaction description of feature models. On this basis, it proposes a method of domain analysis for SOA. In addition, since web services are an available technology for implementing SOA, it presents a method to transform the feature model into the interface model and composite model of web services. Finally, the method is verified through its application to an ERP system project in publishing, as an example showing that it is feasible and can improve software development efficiency.
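As a rough illustration of the feature-model-to-interface-model step (not the paper's actual transformation rules), the sketch below represents a tiny slice of a hypothetical publishing-ERP feature model and flattens a selected configuration into per-service interface stubs; all names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A node in a feature model: the service operations a feature contributes
    plus its mandatory and optional sub-features (illustrative structure only)."""
    name: str
    operations: list = field(default_factory=list)
    mandatory: list = field(default_factory=list)   # child Feature objects
    optional: list = field(default_factory=list)

def to_interface_model(feature, selected):
    """Flatten a feature configuration into service interface stubs: every
    mandatory feature and every selected optional feature becomes a service
    exposing its operations."""
    services = {feature.name: list(feature.operations)}
    for child in feature.mandatory + [f for f in feature.optional if f.name in selected]:
        services.update(to_interface_model(child, selected))
    return services

# A toy slice of a publishing-ERP feature model.
ordering = Feature("Ordering", ["placeOrder", "cancelOrder"])
royalty = Feature("RoyaltyCalculation", ["computeRoyalty"])
erp = Feature("PublishingERP", mandatory=[ordering], optional=[royalty])

print(to_interface_model(erp, selected={"RoyaltyCalculation"}))
```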
In this paper, a new technique is presented that reduces the dimensionality of large data sets without disturbing their topology while maintaining high classification accuracy. To this end, a genetic algorithm is used with the Sammon error as the fitness function. The proposed technique is tested on four real and one synthetic data set. A high correlation coefficient between the proximity matrix of the original data set and that of the data set with a reduced number of features ensures that the topology of the data is preserved even with fewer features. A comparative study of the clustering results obtained with the reduced and original data sets confirms that the proposed technique gives good classification accuracy even with a reduced number of features.
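A compact sketch of this idea is given below, with a toy data set and illustrative GA parameters rather than the paper's experimental setup; chromosomes are fixed-size feature subsets, and the Sammon stress between full-space and subspace distances serves as the fitness.

```python
import numpy as np

def pairwise_dist(X):
    """Condensed vector of pairwise Euclidean distances."""
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(-1))
    return D[np.triu_indices(len(X), k=1)]

def ga_select(X, n_keep, pop=30, gens=40, seed=0):
    """Minimal GA over fixed-size feature subsets, with the Sammon stress
    between full-space and subspace distances as the fitness (lower is better)."""
    rng = np.random.default_rng(seed)
    d_full = pairwise_dist(X)
    n = X.shape[1]

    def fitness(subset):
        d_sub = pairwise_dist(X[:, subset])
        return np.sum((d_full - d_sub) ** 2 / (d_full + 1e-12)) / d_full.sum()

    P = [rng.choice(n, size=n_keep, replace=False) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)                           # elitist ordering, best first
        children = []
        while len(children) < pop // 2:
            a, b = P[rng.integers(pop // 2)], P[rng.integers(pop // 2)]
            child = rng.choice(np.union1d(a, b), size=n_keep, replace=False)
            if rng.random() < 0.2:                    # mutation: swap one feature out
                child[rng.integers(n_keep)] = rng.choice(np.setdiff1d(np.arange(n), child))
            children.append(child)
        P = P[: pop - len(children)] + children       # keep the best, add offspring
    return min(P, key=fitness)

# Toy data: 2 informative (high-variance) features plus 8 noise features.
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(size=(60, 2)) * 5.0, rng.normal(size=(60, 8)) * 0.1])
subset = ga_select(X, n_keep=2)
print("selected features:", np.sort(subset))
print("proximity correlation (full vs. reduced):",
      np.corrcoef(pairwise_dist(X), pairwise_dist(X[:, subset]))[0, 1])
```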
Most methods of fuzzy rule-based system identification either ignore feature analysis or perform it in a separate phase. In this chapter we propose a novel neuro-fuzzy system that performs feature analysis and system identification simultaneously, in an integrated manner. It is a five-layered feed-forward network realizing a fuzzy rule-based system. The second layer of the net is the most important one: along with fuzzifying the input, it also learns a modulator function for each input feature, which enables online selection of important features by the network. The system is designed so that learning maintains the non-negativity of the certainty factors of the rules. The proposed method is tested on both synthetic and real data sets, and its performance is found to be quite satisfactory.
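The sketch below illustrates the modulator idea in isolation, assuming a Gaussian membership function, a common modulator form m_i = exp(-lambda_i^2), and rules that simply pair up fuzzy sets by index; the chapter's actual layer structure and learning rules are not reproduced.

```python
import numpy as np

def gaussian_membership(x, centers, sigma=1.0):
    """Layer-2 fuzzification: membership of each feature in each fuzzy set."""
    return np.exp(-((x[:, None] - centers) ** 2) / (2 * sigma ** 2))

def modulated_firing(x, centers, lam):
    """Rule firing strengths with a learned modulator per feature.

    m_i = exp(-lam_i**2) lies in (0, 1]; raising the membership to the power
    m_i pushes it toward 1 when lam_i is large, so an unimportant feature
    stops influencing the product t-norm and is effectively deselected."""
    mu = gaussian_membership(x, centers)          # shape (n_features, n_sets)
    m = np.exp(-lam ** 2)[:, None]                # modulator value per feature
    mu_mod = mu ** m
    return mu_mod.prod(axis=0)                    # product t-norm over features

x = np.array([0.2, 1.5, -0.7])                    # one input pattern, 3 features
centers = np.array([-1.0, 0.0, 1.0])              # 3 fuzzy sets per feature
lam = np.array([0.1, 0.1, 3.0])                   # third feature switched off
print(modulated_firing(x, centers, lam))
```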
Radiative transfer algorithms combined with empirical formulae have been the most popular approach to analysing ocean primary productivity from remotely sensed images of the Earth. These methods rely entirely on the limited amounts of ground truth data available and on assumptions about how sensor, Earth-surface and atmospheric properties influence the radiation captured in different ranges of the electromagnetic spectrum. As these assumptions are restrictive, multi-spectral and fusion techniques based on unsupervised neural networks can contribute to improving ocean colour studies and enable the analysis of complex water types. This chapter presents the application of a hierarchy of self-organizing feature maps to the clustering and differentiation of oceanic waters. The practical studies are performed on imagery captured over the Pacific Ocean by the Ocean Colour and Temperature Scanner on board the Japanese satellite ADEOS.
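The following is a minimal two-level sketch of the hierarchical self-organizing-map idea applied to simulated five-band reflectance spectra; the map sizes, decay schedules and synthetic water types are illustrative assumptions rather than the configuration used in the chapter.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, seed=0):
    """Minimal online SOM: winner search plus a Gaussian neighbourhood update
    with linearly decaying learning rate and neighbourhood radius."""
    rng = np.random.default_rng(seed)
    h, w = grid
    codebook = data[rng.choice(len(data), h * w, replace=False)].astype(float)
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = 0.5 * (1.0 - frac)                        # decaying learning rate
            sigma = max(grid) / 2.0 * (1.0 - frac) + 0.5   # decaying neighbourhood
            bmu = np.argmin(((codebook - x) ** 2).sum(1))  # best-matching unit
            nh = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))[:, None]
            codebook += lr * nh * (x - codebook)
            step += 1
    return codebook

# Simulated 5-band ocean-colour reflectance spectra from two water types.
rng = np.random.default_rng(1)
clear = rng.normal([0.02, 0.03, 0.05, 0.01, 0.005], 0.004, size=(300, 5))
turbid = rng.normal([0.05, 0.08, 0.12, 0.09, 0.060], 0.008, size=(300, 5))
pixels = np.vstack([clear, turbid])

# Level 1: a large map quantises the spectra; level 2: a small map trained on
# the level-1 prototypes groups them into a few candidate water classes.
level1 = train_som(pixels, grid=(8, 8))
level2 = train_som(level1, grid=(2, 2))
labels = np.argmin(((pixels[:, None, :] - level2[None]) ** 2).sum(-1), axis=1)
print("pixels per water class:", np.bincount(labels, minlength=4))
```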
A feature analysis method was developed for the recognition of hand-generated gestures (or markings). Gesture recognition differs from handwriting recognition in that gestures are often generated in different proportions, rotations, and sometimes as mirror images. The features are based on direction changes and are applied successfully to these gestural variations. This recognition system is part of a keyboardless direct-manipulation interface to a spreadsheet application…
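A small sketch of a direction-change feature, offered as an illustration of the general idea rather than the paper's exact feature set, is shown below; using relative direction changes makes the histogram tolerant of rotation, one of the gestural variations mentioned above.

```python
import numpy as np

def direction_change_features(points, n_dirs=8):
    """Quantise successive pen movements into n_dirs directions and build a
    histogram of the transitions between consecutive directions (a simple
    direction-change feature; rotation only shifts absolute directions, so
    the relative changes are largely unaffected)."""
    points = np.asarray(points, dtype=float)
    deltas = np.diff(points, axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])            # movement angle per step
    dirs = np.round(angles / (2 * np.pi / n_dirs)).astype(int) % n_dirs
    turns = (dirs[1:] - dirs[:-1]) % n_dirs                    # direction change per step
    hist = np.bincount(turns, minlength=n_dirs)
    return hist / hist.sum()

# An "L"-shaped gesture: straight down, then straight right.
stroke = [(0, 0), (0, -1), (0, -2), (0, -3), (1, -3), (2, -3), (3, -3)]
print(direction_change_features(stroke))
```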