The microstructure and wear behavior of a Ni–Cr–MoS2-based composite were examined in relation to compacting pressure and sintering temperature. This research examines the tribological performance of the Ni–Cr–MoS2 composite against En32 material in relation to the nodule count, particle count, and average area, depending on the compacting pressure and sintering temperature. The green pellets were cylindrical, 12.5 mm in diameter and 30 mm in length. They were pressed at room temperature (28°C) with axial pressures of 220 MPa, 275 MPa, and 330 MPa, then sintered in two batches at sintering temperatures of 900°C and 1000°C. Using image analysis, the nodule count, nodule size, particle count, particle size, and lubricant flakes of the developed composite were investigated. The typical nodule size ranges from 46 μm to 60 μm, and the average nodule percentage in the Ni–Cr–MoS2-based composite is between 67 and 79 percent. Particle counts ranged from 29 to 516, depending on the sintering temperature and compacting pressure. Lubricant flake lengths, up to 1.1472 μm, were nearly the same for the different compacting pressures and sintering temperatures. The coefficient of friction and wear rate of the developed Ni–Cr–MoS2 composite varied with compacting pressure and sintering temperature. Nodule size, nodule count, particle count, and particle size also influence the friction and wear of the developed composites. The minimum wear rate was observed for the composite sintered at 1000°C, and the maximum wear rate of 7.55 × 10⁻⁸ mm³/N·m occurred at the lowest compacting pressure and 900°C sintering temperature.
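As a rough illustration of the image-analysis step (not the authors' actual pipeline), the sketch below counts particles and estimates their sizes with Otsu thresholding and connected-component labeling; the input file name, the pixel calibration, and the assumption that the bright phase corresponds to nodules are all hypothetical.

```python
# Hypothetical sketch: count nodules/particles in a micrograph and measure their areas
# using global thresholding and connected-component labeling (scikit-image).
import numpy as np
from skimage import io, filters, measure

PIXEL_SIZE_UM = 0.5          # assumed calibration (micrometres per pixel)

image = io.imread("micrograph.png", as_gray=True)   # hypothetical input image
threshold = filters.threshold_otsu(image)
mask = image > threshold                            # bright phase assumed to be nodules

labels = measure.label(mask)
regions = measure.regionprops(labels)

areas_um2 = [r.area * PIXEL_SIZE_UM ** 2 for r in regions]
equiv_diameters_um = [r.equivalent_diameter * PIXEL_SIZE_UM for r in regions]

print("particle count:", len(regions))
print("mean equivalent diameter (um):", np.mean(equiv_diameters_um))
print("area fraction (%):", 100.0 * mask.sum() / mask.size)
```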
Random context picture grammars (rcpgs) are a method of syntactic picture generation. The productions of such a grammar are context-free, but their application is regulated—permitted or forbidden—by context randomly distributed in the developing picture. In previous work we studied three natural subclasses of rcpgs, namely context-free picture grammars (cfpgs), random permitting context picture grammars (rPcpgs) and random forbidding context picture grammars (rFcpgs). We now introduce the notion of table-driven context-free picture grammars (Tcfpgs), and compare them to the above subclasses. We show that Tcfpgs are more powerful than cfpgs, and that they can generate a picture set (more commonly known as a gallery) that rPcpgs cannot. We present two conditions necessary for a gallery to be generated by a Tcfpg, and use these results to find two galleries that cannot be made by any Tcfpg. The second of these galleries can be generated by an rFcpg. Since it is easily shown that any gallery generated by a Tcfpg can be generated by an rFcpg, we can conclude that Tcfpgs are strictly weaker than rFcpgs.
To use crystallography for the determination of the three-dimensional structures of proteins, protein crystals need to be grown. Automated imaging systems are increasingly being used to monitor these crystallization experiments. These present problems of accessibility to the data, repeatability of any image analysis performed, and the amount of storage required. Various image formats and techniques can be combined to provide effective solutions to high-volume processing problems such as these; however, the lack of widespread support for the most effective algorithms, such as JPEG 2000, which yielded a 64% improvement in file size over the bitmap, currently inhibits the immediate uptake of this approach.
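To make the storage argument concrete, the following minimal sketch (not the paper's system) converts a bitmap drop image to JPEG 2000 with Pillow and compares file sizes; it assumes Pillow was built with OpenJPEG support, and the file names and target compression ratio are made up.

```python
# Illustrative sketch: convert a bitmap crystallization image to JPEG 2000 and compare
# file sizes (requires Pillow built with OpenJPEG support; filenames are hypothetical).
import os
from PIL import Image

src = "drop_image.bmp"                 # hypothetical bitmap from the imaging system
img = Image.open(src)

img.save("drop_image.jp2", format="JPEG2000",
         quality_mode="rates", quality_layers=[20])   # ~20:1 target compression ratio

bmp_size = os.path.getsize(src)
jp2_size = os.path.getsize("drop_image.jp2")
print(f"bitmap: {bmp_size} bytes, JPEG 2000: {jp2_size} bytes "
      f"({100 * (1 - jp2_size / bmp_size):.1f}% smaller)")
```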
A novel depth-from-motion vision model based on leaky integrate-and-fire (I&F) neurons incorporates the implications of recent neurophysiological findings into an algorithm for object discovery and depth analysis. Pulse-coupled I&F neurons capture the edges in an optical flow field, and the associated time of travel of those edges is encoded in the neuron parameters, mainly the membrane-potential time constant and the synaptic weights. Correlations between spikes and their timing thus code depth in the visual field. Neurons have multiple output synapses connecting to neighbouring neurons with an initial Gaussian weight distribution. A temporally asymmetric learning rule is used to adapt the synaptic weights online, during which competitive behaviour emerges between the different input synapses of a neuron. It is shown that this competition mechanism can further improve the model's performance. After training, the weights of synapses sourced from a neuron no longer display a Gaussian distribution, having adapted to encode features of the scenes to which they have been exposed.
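For readers unfamiliar with the neuron model, a minimal leaky integrate-and-fire update is sketched below; the parameter values and the constant input current are illustrative assumptions and do not reproduce the model described above.

```python
# Minimal leaky integrate-and-fire neuron sketch (illustrative only; the parameter
# values and input current are assumptions, not those of the model described above).
import numpy as np

dt, T = 1e-4, 0.1                 # time step and simulation length (s)
tau_m = 20e-3                     # membrane time constant (s)
R_m = 1e7                         # membrane resistance (ohm)
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3
I_in = 2.0e-9                     # constant input current (A)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # leaky integration of the membrane potential
    v += dt * (-(v - v_rest) + R_m * I_in) / tau_m
    if v >= v_thresh:             # threshold crossing -> emit a spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.4f} s" if spike_times
      else "no spikes")
```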
The skin is the largest (and the most exposed) organ of the body, both in terms of surface area and weight. Its care is of great importance for both aesthetic and health reasons. Often, the skin's appearance gives us information about its health status as well as hints at biological age. Therefore, skin surface characterization is of great significance for dermatologists as well as for cosmetic scientists in order to evaluate the effectiveness of medical or cosmetic treatments. So far, no in vivo measurements of skin topography could be performed routinely to evaluate skin aging. This work describes how a portable capacitive device, normally used for fingerprint acquisition, can be utilized to obtain measures of skin aging routinely. The capacitive images give a high-resolution (50 μm) representation of skin topography, in terms of wrinkles and cells. In this work, we have addressed the latter: through image segmentation techniques, cells have been localized and identified, and a feature related to their area distribution has been generated. Accurate experiments performed in vivo show how the feature we conceived is linearly related to skin aging. Moreover, since this finding has been achieved using a low-cost portable device, it could boost research in this field as well as open the door to an application based on an embedded system.
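A possible way to realise the cell-segmentation step, sketched here purely for illustration with standard tools rather than the authors' algorithm, is a marker-based watershed followed by per-region area statistics; the input file and parameter values are assumptions.

```python
# Hypothetical sketch of the cell-segmentation idea: split a skin micro-relief image
# into cell-like regions with a marker-based watershed, then summarise the area
# distribution (scikit-image; the input file and smoothing scale are assumptions).
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, feature, measure, segmentation

img = io.imread("capacitive_patch.png", as_gray=True)   # hypothetical capacitive image
mask = img > filters.threshold_otsu(img)                # plateaus between wrinkles

distance = ndi.distance_transform_edt(mask)
peaks = feature.peak_local_max(distance, min_distance=5, labels=mask.astype(int))
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

cells = segmentation.watershed(-distance, markers, mask=mask)
areas = np.array([r.area for r in measure.regionprops(cells)])

print("cells found:", len(areas))
print("mean / std of cell area (px):", areas.mean(), areas.std())
```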
Quantitative evaluation of changes in skin topographic structures is of great importance in the dermocosmetic field to assess subjects' response to medical or cosmetic treatments. Although many devices and methods are known to measure these changes, they are not suitable for a routine approach and most of them are invasive. Moreover, it has always been difficult to give a measure of skin health status, as well as of the human ageing process, by simply analyzing the skin surface appearance. This work describes how a portable capacitive device can be utilized to achieve measurements of skin ageing in vivo and routinely. The capacitive images give a high-resolution representation of the skin micro-relief, both in terms of skin surface tissue and wrinkles. In a previous work we dealt with the former; here we address the latter. The algorithm we have developed allows us to extract two original features from wrinkles: the first is based on photometric properties, while the second is obtained through multiresolution analysis based on the wavelet transform. Accurate experiments performed on 87 subjects show how the features we conceived are related to skin ageing.
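The wavelet-based feature could be approximated, for illustration only, by the per-level detail-subband energy of a 2-D discrete wavelet transform; the wavelet family, decomposition depth, and input file below are assumptions, not the paper's choices.

```python
# Illustrative sketch of a wavelet-based wrinkle descriptor: decompose the skin image
# with a 2-D discrete wavelet transform and use the energy of the detail sub-bands as
# a roughness-like feature (PyWavelets; wavelet choice and level count are assumptions).
import numpy as np
import pywt
from skimage import io

img = io.imread("capacitive_patch.png", as_gray=True).astype(float)  # hypothetical input

coeffs = pywt.wavedec2(img, wavelet="db2", level=3)
# coeffs[0] is the approximation; coeffs[1:] are (cH, cV, cD) detail triples per level
detail_energy = [sum(float(np.sum(d ** 2)) for d in level) for level in coeffs[1:]]

feature = np.array(detail_energy) / img.size   # normalised per-level detail energy
print("per-level detail energy:", feature)
```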
The assessment of the skin surface is of great importance in the dermocosmetic field to evaluate the response of individuals to medical or cosmetic treatments. In vivo quantitative measurements of changes in skin topographic structures provide a valuable tool, thanks to noninvasive devices. However, the high cost of the systems commonly employed limits, in practice, the widespread use of these devices for a routine-based approach. In this work we summarize the research activity carried out to develop a compact, low-cost system for skin surface assessment based on capacitive image analysis. The accuracy of the capacitive measurements has been assessed by implementing an image fusion algorithm that enables a comparison between capacitive images and those obtained using high-cost profilometry, the most accurate method in the field. In particular, very encouraging results have been achieved in the measurement of wrinkle width. On the other hand, experiments on measuring wrinkle depth reveal the native design limitations of the capacitive device, which was primarily conceived to work with fingerprints; these limitations point toward a specific redesign of the device.
Natural-fibre-reinforced thermoplastic composites find a wide array of applications in the automobile, building, and construction industries. These composites are mostly produced by injection moulding or extrusion through properly designed dies. During these production processes, the shear forces exerted by the screw or ram lead to the degradation of the natural fibres. A screwless extruder that minimises fibre degradation and employs a reliable, low-technology process has already been developed. However, the fibre degradation caused by the screwless extruder has not been compared with that of conventional extruders. This study therefore focuses on the influence of extrusion processes on the degradation of natural fibres in thermoplastic composites. Sisal fibres of 10 mm length were extruded with polypropylene, to furnish extrudates with a fibre mass fraction of 25%, using conventional single-screw and screwless extruders. Polypropylene in the extrudates was dissolved in xylene in a Soxhlet process, and the extracted fibres were analysed for length variations. While fibre degradation in the form of fibre length variation is similar in both cases, it can be minimised in screwless extrusion by increasing the gap between the front face of the cone and the orifice plate.
The mechanically polished surface of a Zr55Al10Ni5Cu30 metallic glass was indented with a rigid ball 0.5 mm in diameter, and the corresponding load–depth curve was recorded automatically. Although a stress–strain relationship beneath the indenter can be derived from the raw indentation curve, the current analysis developed for crystalline solids can yield erroneous properties because it does not consider the significant material pile-up in amorphous metallic glasses. Thus, we propose a novel indent image processing technique for characterizing the contact and flow properties of metallic glasses: the contact area is measured by differentiating a three-dimensional indent morphology digitized by a surface profiler, and a surface-stretching strain is newly defined in order to estimate the flow properties. Finally, the estimated work-hardening index was about 0.05, comparable with the typical value measured in uniaxial compression for the Zr-based metallic glass.
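One simple way to approximate the contact-area measurement from a digitized indent, shown here only as a sketch of the idea rather than the authors' procedure, is to locate the pile-up crest along radial height profiles; the file name, grid spacing, and centring assumption are hypothetical.

```python
# Rough sketch of the contact-area idea: on a profiler height map centred on the indent,
# locate the pile-up crest (the height maximum along each radial profile) and take the
# mean crest radius as the contact radius (NumPy; grid size and spacing are assumptions).
import numpy as np

z = np.load("indent_heightmap.npy")        # hypothetical height map (m), indent at centre
dx = 1.0e-6                                # assumed lateral sampling (m per pixel)
cy, cx = np.array(z.shape) // 2

radii = []
for theta in np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False):
    # sample heights along one radial line out to the map edge
    r = np.arange(1, min(z.shape) // 2)
    ys = (cy + r * np.sin(theta)).astype(int)
    xs = (cx + r * np.cos(theta)).astype(int)
    profile = z[ys, xs]
    radii.append(r[np.argmax(profile)] * dx)   # pile-up crest = local height maximum

a_contact = float(np.mean(radii))              # contact radius including pile-up
print("projected contact area (m^2):", np.pi * a_contact ** 2)
```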
BaTiO3 (BTO) is considered the most commonly used ceramic material in multilayer ceramic capacitors due to its desirable dielectric properties. Considering that the miniaturization of electronic devices represents an expanding field of research, BTO has been modified to increase its dielectric constant and DC bias characteristic/sensitivity. This research presents the effect of N2 and air atmospheres on the morphological and dielectric properties of BTO nanoparticles modified with an organometallic salt at sintering temperatures of 1200°C, 1250°C, 1300°C, and 1350°C. Measured dielectric constants were up to 35,000, with very high values achieved in both atmospheres. Field emission scanning electron microscopy (FESEM) was used for morphological characterization, revealing a porous structure in all the samples. Software image analysis of the FESEM images showed a connection between particle and pore size distribution, as well as porosity. Based on the data from the image analysis, the prediction of dielectric properties in relation to morphology indicated that the yttrium-based organometallic salt reduced oxygen vacancy generation in the N2 atmosphere. DC bias sensitivity measurements showed that samples with a higher dielectric constant had more pronounced sensitivity to voltage change, but most of the samples were stable up to 100 V, making our modified BTO a promising candidate for capacitors.
Examinations of the left ventricle (LV), which is the systemic ventricle and as such of paramount importance for the function of the heart, are commonly employed in cardiology. Numerous models have been developed that allow for LV parametric representation. Thanks to this, the left ventricle is well suited to all types of modeling endeavors, and the correctness of the results can be verified relatively easily.
The authors present a new method for the automatic detection and evaluation of the left ventricle, as seen in echocardiographic images in the four-chamber projection. The method is based on computerized image analysis and, in particular, on mathematical morphology [4, 6, 8, 9, 11, 12]. Investigations and preliminary verification of the method have been carried out on complete cycles recorded on video in the course of examinations. Only complete cycles allow us to follow the dynamics of cardiac function.
As a result of long-term collaboration with cardiologists, an algorithm has been developed that allows for an automatic LV detection. A precise delineation of its borders allows for an objective description of changes in geometric parameters in the course of the entire cycle and for a quantitative analysis of the left ventricular function.
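A much-simplified illustration of the morphological approach (not the algorithm developed by the authors) is to isolate the dark LV blood pool in a single frame with thresholding, opening/closing, and largest-component selection; the frame file and structuring-element size are assumptions.

```python
# Simplified illustration of the morphological idea: isolate the dark LV cavity in one
# echocardiographic frame with thresholding, opening/closing and largest-component
# selection (scikit-image; the frame file and structuring-element size are assumptions).
import numpy as np
from skimage import io, filters, morphology, measure

frame = io.imread("four_chamber_frame.png", as_gray=True)   # hypothetical video frame
cavity = frame < filters.threshold_otsu(frame)              # blood pool appears dark

selem = morphology.disk(5)
cavity = morphology.opening(cavity, selem)    # remove speckle and thin bridges
cavity = morphology.closing(cavity, selem)    # fill small gaps in the cavity

labels = measure.label(cavity)
regions = measure.regionprops(labels)
lv = max(regions, key=lambda r: r.area)       # crude choice: largest dark region

print("LV cavity area (px):", lv.area, " perimeter (px):", lv.perimeter)
```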
In this paper, a color image segmentation algorithm and an approach to large-format image segmentation are presented, both focused on breaking down images into semantic objects for object-based multimedia applications. The proposed color image segmentation algorithm performs the segmentation in the combined intensity–texture–position feature space in order to produce connected regions that correspond to the real-life objects shown in the image. A preprocessing stage of conditional image filtering and a modified K-Means-with-connectivity-constraint pixel classification algorithm are used to allow for seamless integration of the different pixel features. Unsupervised operation of the segmentation algorithm is enabled by means of an initial clustering procedure. The large-format image segmentation scheme employs the aforementioned segmentation algorithm, providing an elegant framework for the fast segmentation of relatively large images. In this framework, the segmentation algorithm is applied to reduced versions of the original images in order to speed up the segmentation, resulting in a coarse-grained segmentation mask. The final fine-grained segmentation mask is produced by partial reclassification of the pixels of the original image to the already formed regions, using a Bayes classifier. As shown by experimental evaluation, this novel scheme provides fast segmentation with high perceptual segmentation quality.
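The coarse-to-fine idea can be sketched with off-the-shelf tools: segment a downscaled copy with K-Means on colour features and then reclassify the full-resolution pixels with a Gaussian naive Bayes step. This toy version omits the texture/position features, the connectivity constraint, and the conditional filtering of the actual algorithm; the number of clusters, the scale factor, and the file name are assumptions.

```python
# Hedged sketch of the large-format scheme: segment a downscaled copy with K-Means on
# colour features, then reclassify the full-resolution pixels to the formed regions with
# a Gaussian naive-Bayes step (scikit-image/scikit-learn; K and the scale are assumptions).
import numpy as np
from skimage import io, transform, color
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

rgb = io.imread("large_image.png")[..., :3]                  # hypothetical input
lab = color.rgb2lab(rgb)

small = transform.rescale(lab, 0.25, channel_axis=-1, anti_aliasing=True)
coarse_labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(
    small.reshape(-1, 3))                                    # coarse-grained mask

# Bayes reclassification: learn per-region colour statistics from the coarse result,
# then assign every full-resolution pixel to one of the already formed regions.
nb = GaussianNB().fit(small.reshape(-1, 3), coarse_labels)
fine_mask = nb.predict(lab.reshape(-1, 3)).reshape(lab.shape[:2])

print("regions:", len(np.unique(fine_mask)), " mask shape:", fine_mask.shape)
```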
This paper proposes a new methodology to extract biometric features of plant leaf structures. Combining computer vision techniques and plant taxonomy protocols, these methods are capable of identifying plant species. The biometric measurements are concentrated on internal leaf forms, specifically on the venation system. The methodology was validated with real cases of plant taxonomy, using eleven passion fruit species of the genus Passiflora. The features extracted from the leaves were fed to a neural network to perform the classification of species. The results proved very accurate, correctly differentiating among species with a 97% success rate. The computer vision methods developed can be used to assist taxonomists in performing biometric measurements on plant leaf structures.
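The classification stage alone might look like the following sketch, which trains a small neural network on pre-extracted vein descriptors; the feature and label files are hypothetical placeholders, and the network size is an assumption.

```python
# Schematic of the classification step only: a small neural network trained on
# pre-extracted vein descriptors (the feature table and labels are hypothetical;
# the actual vein measurements of the paper are not reproduced here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

features = np.load("leaf_vein_features.npy")   # hypothetical (n_leaves, n_features) array
species = np.load("leaf_species.npy")          # hypothetical integer species labels

x_tr, x_te, y_tr, y_te = train_test_split(features, species, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(x_tr, y_tr)
print("held-out accuracy:", clf.score(x_te, y_te))
```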
Digital palaeography is an emerging research area which aims to introduce digital image processing techniques into palaeographic analysis for the purpose of providing objective quantitative measurements. This paper explores the use of a fully automated handwriting feature extraction, visualization, and analysis system for digital palaeography which bridges the gap between traditional and digital palaeography in terms of the deployment of feature extraction techniques and handwriting metrics. We propose the application of a set of features more closely related to conventional palaeographic assessment metrics than those commonly adopted in automatic writer identification. These features are empirically tested on two datasets in order to assess their effectiveness for automatic writer identification and to aid the attribution of individual handwriting characteristics in historical manuscripts. Finally, we introduce tools to support visualization of the extracted features in a comparative way, showing how they can best be exploited in the implementation of a content-based image retrieval (CBIR) system for digital archiving.
Image labeling is an important and challenging task in the area of graphics and visual computing, where datasets with high-quality labeling are critically needed. In this paper, based on the commonly accepted observation that the same semantic object in images with different resolutions may have different representations, we propose a novel multi-scale cascaded hierarchical model (MCHM) to enhance general image labeling methods. Our proposed approach first creates multi-resolution images from the original one to form an image pyramid and labels the image at each scale individually. Next, it constructs a cascaded hierarchical model and a feedback loop between the image pyramid and the labeling methods. The original image labeling result is used to adjust the labeling parameters of the scaled images. Labeling results from the scaled images are then fed back to enhance the original image labeling result. Together, these naturally form a global optimization problem under a scale-space condition. We further propose an iterative algorithm to run the model. The global convergence of the algorithm is proven through iterative approximation with latent optimization constraints. We have conducted extensive experiments with five widely used labeling methods on five popular image datasets. Experimental results indicate that MCHM considerably improves the labeling accuracy of state-of-the-art image labeling approaches.
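A toy sketch of the multi-scale fusion idea (not the MCHM algorithm, which additionally adapts labeling parameters through the feedback loop) is given below; the stand-in labeller is a fixed intensity quantisation chosen only so that label ids are comparable across scales, and the input file and scales are assumptions.

```python
# Toy sketch of the multi-scale idea: label an image at several scales with some base
# labeller, upsample each result to full resolution and fuse them by per-pixel majority
# vote.  The base labeller below is a trivial stand-in (fixed intensity quantisation).
import numpy as np
from skimage import io, transform

def base_labeller(gray):
    # stand-in for a real labelling method; returns integer labels 0..4
    return np.digitize(gray, bins=np.linspace(0.2, 0.8, 4))

gray = io.imread("scene.png", as_gray=True)                  # hypothetical input
h, w = gray.shape

votes = []
for scale in (1.0, 0.5, 0.25):                               # simple image pyramid
    scaled = transform.rescale(gray, scale, anti_aliasing=True)
    labels = base_labeller(scaled)
    votes.append(transform.resize(labels, (h, w), order=0,    # nearest-neighbour upsample
                                  preserve_range=True).astype(int))

stacked = np.stack(votes)                                     # (n_scales, h, w)
fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stacked)
print("fused label map:", fused.shape, "labels present:", np.unique(fused))
```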
In the literature, references to EM estimation of product mixtures are not very frequent. The simplifying assumption of product components, e.g., diagonal covariance matrices in the case of Gaussian mixtures, is usually considered only a compromise made because of computational constraints or limited datasets. We have found that product mixtures are rarely used intentionally as a preferable approximating tool. Probably, most practitioners do not "trust" product components because of their formal similarity to "naive Bayes models." Another reason could be an unrecognized numerical instability of the EM algorithm in multidimensional spaces. In this paper we recall that the product mixture model does not imply the assumption of independence of variables; it is not even restrictive if the number of components is large enough. In addition, product components increase the numerical stability of the standard EM algorithm, simplify the EM iterations, and have other important advantages. We discuss and explain the implementation details of the EM algorithm and summarize our experience in estimating product mixtures. Finally, we illustrate the wide applicability of product mixtures in pattern recognition and in other fields.
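In the Gaussian case, a product mixture is simply a mixture with diagonal covariance matrices, which standard EM implementations support directly. The short example below, with purely synthetic and deliberately correlated data, illustrates the point that product components do not imply independence of the variables when enough components are used.

```python
# Quick illustration of a "product mixture" in the Gaussian case: each component has a
# diagonal covariance, i.e. it is a product of univariate densities.  scikit-learn's EM
# implementation supports this directly; the synthetic data here are purely illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# two dependent (correlated) variables, so the overall density is NOT a product density
x = rng.normal(size=(2000, 1))
data = np.hstack([x, 0.8 * x + 0.6 * rng.normal(size=(2000, 1))])

# with enough product components the mixture can still model the dependence
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(data)
print("average log-likelihood:", gmm.score(data))
```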
The paper is devoted to Descriptive Image Analysis (DA), a leading line of the modern mathematical theory of image analysis. DA is a logically organized set of descriptive methods, mathematical objects, models, and representations aimed at analyzing and evaluating information represented in the form of images, as well as at automating the extraction from images of the knowledge and data needed for intelligent decision-making.
The basic idea of DA consists of embedding all processes of image analysis (processing, recognition, understanding) into an image formalization space and reducing them to (1) the construction of models/representations/formalized descriptions of images and (2) the construction of models/representations/formalized descriptions of transformations over the models and representations of images.
We briefly discuss the basic ideas, methodological principles, mathematical methods, objects, and components of DA and the basic results determining the current state of the art in the field. Image algebras (IA) are considered in the context of a unified language for describing mathematical objects and operations used in image analysis (the standard IA by Ritter and the descriptive IA by Gurevich).
Full-color three-dimensional (3D) printing is a powerful process for manufacturing intelligent, customized, colorful objects with improved surface quality; however, poor surface color optimization methods are the main factor impeding its commercialization. This paper therefore explored the correlation between microstructure and color reproduction, and proposed an assessment and prediction method for color optimization based on microscopic image analysis. The experimental models, 24-color plates and 4-color cubes, were printed with a ProJet 860 3D printer, impregnated according to preset parameters, and finally measured with a spectrophotometer and observed with both a digital microscope and a scanning electron microscope. The results revealed that the samples exhibited higher saturation and smaller chromatic aberration (ΔE) after postprocessing. Moreover, the brightness of the same color surface increased with increasing soaked surface roughness. Further, reduction in surface roughness, impregnation into surface pores, and enhancement of coating transparency effectively improved the accuracy of color reproduction, which was verified by the measured values. Finally, the chromatic aberration caused by positioning errors on different faces of the samples was optimized, and the ΔE value for a black cube was reduced from 8.12 to 0.82, which is imperceptible to the human eye.
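For reference, the chromatic-aberration values quoted above are colour differences in CIELAB space; the paper does not state which ΔE formula was used, so the worked example below assumes the simple CIE76 definition, and the Lab values are made up.

```python
# Worked example of the chromatic-aberration metric: CIE76 colour difference between a
# target and a measured CIELAB value (the Lab numbers below are made up; CIE76 is an
# assumption, since the Delta-E formula used in the paper is not stated).
import math

def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

target   = (35.0, 10.0, -40.0)     # hypothetical target Lab value for one face
measured = (36.2,  9.1, -39.3)     # hypothetical spectrophotometer reading

print("Delta E (CIE76):", round(delta_e_cie76(target, measured), 2))
```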
Finding a parallel architecture adapted to a given class of algorithms is a central problem for computer architects. This paper presents a methodology for doing so and illustrates it using image analysis. First, we identify a set of common basic operations, expressed as data movements, that can be used to solve most image analysis problems. These movements are then translated to fit the natural communication patterns of a given architecture. The data movements considered (global operations on connected pixel sets) can express a large class of algorithms. Their implementation on exemplary massively parallel architectures (arrays, hypercubes, pyramids) is discussed.
A thinning method for binary images is proposed which converts digital binary images into line patterns. The proposed method suppresses shape distortion as well as false feature points, thereby producing more natural line patterns than existing methods. In addition, this method guarantees that the produced line patterns are one pixel in width everywhere. In this method, an input binary image is transformed into a graph in which 1-pixels correspond to nodes and neighboring nodes are connected by edges. Next, nodes unnecessary for preserving the topology of the input image and the edges connecting them are deleted symmetrically. Then, edges that do not contribute to the preservation of the topology of the input image are deleted. The advantages of this graph-based thinning method are confirmed by applying it to ideal line patterns and geographical maps.
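For comparison with conventional approaches only (this is not the graph-based method proposed above), a standard raster thinning of a binary image can be obtained with scikit-image's skeletonize; the input file is hypothetical.

```python
# For comparison only: a standard raster thinning of a binary image with scikit-image's
# skeletonize (this is NOT the graph-based method described above; input file assumed).
from skimage import io, morphology

binary = io.imread("map.png", as_gray=True) > 0.5     # hypothetical binary map image
skeleton = morphology.skeletonize(binary)             # one-pixel-wide line pattern

print("foreground pixels:", int(binary.sum()), "-> skeleton pixels:", int(skeleton.sum()))
```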