A method for analyzing the structure of the white background in document images is described, along with applications to the problem of isolating blocks of machine-printed text. The approach is based on computational-geometry algorithms for offline enumeration of maximal white rectangles and online rectangle unification. These support a fast, simple, and general heuristic for geometric layout segmentation, in which white space is covered greedily by rectangles until all text blocks are isolated. Design of the heuristic can be substantially automated by an analysis of the empirical statistical distribution of properties of covering rectangles: for example, the stopping rule can be chosen by Rosenblatt's perceptron training algorithm. Experimental trials show good behavior on the large and useful class of textual Manhattan layouts. On complex layouts from English-language technical journals of many publishers, the method finds good segmentations in a uniform and nearly parameter-free manner. On a variety of non-Latin texts, some with vertical text lines, the method finds good segmentations without prior knowledge of page and text-line orientation.
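A minimal sketch of the greedy covering idea is given below, under stated assumptions: text components are axis-aligned boxes (x0, y0, x1, y1), the largest empty rectangle is found by a standard branch-and-bound search, and the fixed `min_area` threshold is a hypothetical stand-in for the perceptron-trained stopping rule.

```python
import heapq
from itertools import count

def area(r):
    x0, y0, x1, y1 = r
    return max(0, x1 - x0) * max(0, y1 - y0)

def overlaps(r, s):
    return not (r[2] <= s[0] or s[2] <= r[0] or r[3] <= s[1] or s[3] <= r[1])

def largest_white_rect(bound, obstacles):
    """Branch and bound: pop the largest candidate; if it contains an
    obstacle, split around that obstacle, otherwise it is maximal white."""
    tie = count()
    heap = [(-area(bound), next(tie), bound, obstacles)]
    while heap:
        _, _, r, obs = heapq.heappop(heap)
        inside = [o for o in obs if overlaps(r, o)]
        if not inside:
            return r
        px0, py0, px1, py1 = inside[0]        # pivot obstacle
        x0, y0, x1, y1 = r
        for sub in ((x0, y0, px0, y1), (px1, y0, x1, y1),
                    (x0, y0, x1, py0), (x0, py1, x1, y1)):
            if area(sub) > 0:
                heapq.heappush(heap, (-area(sub), next(tie), sub, inside))
    return None

def whitespace_cover(page, boxes, min_area=5000):
    """Greedy cover: keep extracting maximal white rectangles until the
    stopping rule fires; each extracted rectangle becomes an obstacle."""
    cover, obstacles = [], list(boxes)
    while True:
        r = largest_white_rect(page, obstacles)
        if r is None or area(r) < min_area:   # hypothetical stopping rule
            break
        cover.append(r)
        obstacles.append(r)
    return cover
```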
Singular Spectrum Analysis (SSA) is a nonparametric method that allows one to solve problems such as decomposing a time series into a sum of interpretable components, extracting periodic components, and removing noise. In this paper, the algorithm and theory of the SSA method are extended to analyse two-dimensional arrays (e.g. images). The 2D-SSA algorithm based on the SVD of a Hankel-block-Hankel matrix is introduced. Another formulation of the algorithm by means of the Kronecker-product SVD is presented. Basic SSA notions such as separability are considered. Results on the ranks of Hankel-block-Hankel matrices generated by exponential, sine-wave and polynomial 2D-arrays are obtained. An example of 2D-SSA application is presented.
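The following is a minimal sketch of the Hankel-block-Hankel construction and its SVD; the window size (10, 10), the test signal, and keeping four leading components for reconstruction are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def trajectory_matrix(X, Lx, Ly):
    """Hankel-block-Hankel trajectory matrix: every Lx-by-Ly moving
    window of X, vectorized, becomes one column."""
    Nx, Ny = X.shape
    Kx, Ky = Nx - Lx + 1, Ny - Ly + 1
    cols = [X[i:i + Lx, j:j + Ly].ravel()
            for j in range(Ky) for i in range(Kx)]
    return np.array(cols).T                      # shape (Lx*Ly, Kx*Ky)

def reconstruct(M, shape, Lx, Ly):
    """Inverse step (hankelization): fold columns back into windows and
    average the overlapping contributions."""
    Nx, Ny = shape
    Kx, Ky = Nx - Lx + 1, Ny - Ly + 1
    acc, cnt = np.zeros(shape), np.zeros(shape)
    k = 0
    for j in range(Ky):
        for i in range(Kx):
            acc[i:i + Lx, j:j + Ly] += M[:, k].reshape(Lx, Ly)
            cnt[i:i + Lx, j:j + Ly] += 1
            k += 1
    return acc / cnt

# Decompose a noisy separable 2D signal and keep the leading components.
rng = np.random.default_rng(0)
X = np.fromfunction(lambda i, j: np.sin(0.3 * i) * np.sin(0.2 * j), (40, 40))
T = trajectory_matrix(X + 0.1 * rng.standard_normal(X.shape), 10, 10)
U, s, Vt = np.linalg.svd(T, full_matrices=False)
X_hat = reconstruct((U[:, :4] * s[:4]) @ Vt[:4], X.shape, 10, 10)  # denoised
```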
The skin is the human body's largest organ, and it protects human beings from the outside environment. Detecting skin disease at an early stage is a big challenge because different skin diseases can look very similar. Even skilled dermatologists find it challenging to assess skin lesions because of the lack of contrast between adjoining tissues. Therefore, there is a need for an automated system that can detect skin lesions timely and precisely. Recently, Deep Learning (DL) has attained outstanding success in the diagnosis of various diseases. Thus, in this paper, a transfer learning-based model has been proposed with the help of the pre-trained Xception model. The Xception model was modified by adding layers: one pooling layer, two dense layers and one dropout layer. The original Fully Connected (FC) layer was replaced by a new FC layer with seven skin-disease classes. The proposed model has been evaluated on the HAM10000 dataset, which has large class imbalances. Data augmentation techniques were applied to overcome the imbalance in the dataset. The results showed that the model attained an accuracy of 96.40% for classifying skin diseases. The proposed model works best on Benign Keratosis, with precision, sensitivity and F1 score of 99%, 97% and 98%, respectively. This method can provide patients and doctors with a good notion of whether or not medical assistance is required, thus avoiding undue stress and false alarms.
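A sketch of this kind of Xception-based transfer model is shown below, matching the layer counts quoted in the abstract (one pooling layer, two dense layers, one dropout layer, and a new 7-class FC head); the input size, layer widths, dropout rate, and optimizer are assumptions, not values from the paper.

```python
import tensorflow as tf

# Frozen ImageNet-pretrained Xception backbone (transfer learning).
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),        # one pooling layer
    tf.keras.layers.Dense(256, activation="relu"),   # dense layer 1 (width assumed)
    tf.keras.layers.Dropout(0.5),                    # one dropout layer (rate assumed)
    tf.keras.layers.Dense(128, activation="relu"),   # dense layer 2 (width assumed)
    tf.keras.layers.Dense(7, activation="softmax"),  # new FC head: 7 skin-disease classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])
```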
Full-color three-dimensional (3D) printing technology is a powerful process to manufacture intelligent customized colorful objects with improved surface qualities; however, poor surface color optimization methods are the main factor impeding its commercialization. As such, this paper explored the correlation between microstructure and color reproduction, and an assessment and prediction method of color optimization based on microscopic image analysis was proposed. The experimental models, 24-color plates and 4-color cubes, were printed by a ProJet 860 3D printer, then impregnated according to preset parameters, and finally measured by a spectrophotometer and observed using both a digital microscope and a scanning electron microscope. The results revealed that the samples manifested higher saturation and smaller chromatic aberration (ΔE) after postprocessing. Moreover, the brightness of the same color surface increased with increasing soaked surface roughness. Further, reduction in surface roughness, impregnation into surface pores, and enhancement of coating transparency effectively improved the accuracy of color reproduction, which could be verified by the measured values. Finally, the chromatic aberration caused by positioning errors on different faces of the samples was optimized, and the value of ΔE for a black cube was reduced from 8.12 to 0.82, which is undetectable to human eyes.
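For reference, a minimal sketch of the chromatic-aberration measure ΔE follows; the abstract does not state which ΔE formula was used, so the simple CIE76 Euclidean distance in CIELAB space is assumed here. Values around 1 or below are commonly taken as imperceptible, consistent with the reported drop from 8.12 to 0.82.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in (L*, a*, b*) space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# e.g. reference color vs. printed-and-impregnated sample, both in (L*, a*, b*)
print(delta_e_cie76((20.0, 0.5, -0.3), (20.4, 0.1, 0.2)))
```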
The ILF (Image Landscapes' Fractal Dimension) method and the DF2d method, obtained by a 2D generalization of Higuchi's algorithm, were applied to a set of 120 digital histological images of Anal Intraepithelial Neoplasia (AIN). The main goal of this research was to examine the accuracy, i.e. the sensitivity and specificity, of these methods and to compare their applicability in the quantitative characterization and differentiation of clinical cases of AIN. Histological examination by an experienced pathologist revealed three grades of AIN tumors in the 120 histological slices: 36 of AIN1, 56 of AIN2 and 28 of AIN3. Statistical tests showed significant differences between the fractal dimension values calculated with the ILF and DF2d methods for the three datasets (AIN1, AIN2 and AIN3) at the significance level of 0.05. The ILF and DF2d methods have an advantage when it comes to speed, accuracy, simplicity and time necessary for analysis. Both methods can be successfully applied for differentiation between AIN stages, giving practically the same results, and they can easily be adapted to other histological specimens.
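A minimal sketch of the 1D Higuchi fractal-dimension algorithm, the base on which the 2D generalization (DF2d) builds, is shown below; the exact 2D extension used in the study is not reproduced here, and k_max = 8 is an illustrative choice.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi's estimate: slope of log curve length vs. log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, k_max + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)       # subsampled curve x[m], x[m+k], ...
            n = len(idx) - 1
            if n < 1:
                continue
            Lk.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / (n * k) / k)
        L.append(np.mean(Lk))
    k = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k), np.log(L), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(np.cumsum(rng.standard_normal(1000))))  # ~1.5 for a Brownian path
```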
This paper studies the interactions between morphological changes and wave and current fields around the Tenryu River mouth during a severe storm. Using six installed cameras, the authors successfully captured the collapse of a sand bar around the Tenryu River mouth when typhoon T0704 hit the Pacific coast of Japan in July 2007. The obtained images were analyzed with several image processing techniques and, coupled with the other hydrodynamic data, showed clear evidence of the interactive features of bathymetry changes and the surrounding wave and current fields. Finally, a numerical model based on depth-integrated non-linear shallow water equations and energy balance equations was applied to the observed conditions, and it was found that topography change during the storm was one of the most essential factors determining the characteristics of the surrounding wave and current fields.
The paper is devoted to Descriptive Image Analysis (DA), a leading line of the modern mathematical theory of image analysis. DA is a logically organized set of descriptive methods, mathematical objects, and models and representations aimed at analyzing and evaluating the information represented in the form of images, and at automating the extraction from images of the knowledge and data needed for intelligent decision-making.
The basic idea of DA consists of embedding all processes of image analysis (processing, recognition, understanding) into an image formalization space and reducing them to (1) the construction of models/representations/formalized descriptions of images; (2) the construction of models/representations/formalized descriptions of transformations over the models and representations of images.
We briefly discuss the basic ideas, methodological principles, mathematical methods, objects, and components of DA and the basic results determining the current state of the art in the field. Image algebras (IA) are considered in the context of a unified language for describing mathematical objects and operations used in image analysis (the standard IA by Ritter and the descriptive IA by Gurevich).
Telomere length is an important indicator of proliferative cell history and potential. Decreasing telomere length in the cells of the immune system can indicate immune aging in immune-mediated and chronic inflammatory diseases. Quantitative fluorescent in situ hybridization (Q-FISH) of a labeled (C3TA2)3 peptide nucleic acid probe onto fixed metaphase cells followed by digital image microscopy allows the evaluation of telomere length in the arms of individual chromosomes. Computer-assisted analysis of microscopic images can provide quantitative information on the number of telomeric repeats in individual telomeres. We developed new software, MeTeLen, to estimate telomere length. The MeTeLen software contains new options that can be used to solve some Q-FISH and microscopy problems, including correction of irregular light effects and elimination of background fluorescence. The identification and description of chromosomes and chromosome regions are essential to the Q-FISH technique. To improve the quality of cytogenetic analysis after Q-FISH, we optimized the temperature and time of DNA denaturation to get better DAPI-banding of metaphase chromosomes. MeTeLen was tested by comparing telomere length estimations for sister chromatids, background fluorescence estimations, and corrections of nonuniform light effects. The application of the developed software to the analysis of telomere length in patients with rheumatoid arthritis was demonstrated.
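The abstract does not detail MeTeLen's correction procedures, so the sketch below shows a generic version of the two steps it names, correction of irregular light effects and background-fluorescence removal, using a standard wide-Gaussian flat-field estimate; the sigma and percentile values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(img, sigma=50.0):
    """Estimate the slowly varying illumination field with a wide Gaussian,
    divide it out, then subtract a residual background level."""
    img = img.astype(float)
    illum = gaussian_filter(img, sigma)                  # smooth light field
    flat = img / np.maximum(illum, 1e-6) * illum.mean()  # flat-field correction
    background = np.percentile(flat, 5)                  # assumed background level
    return np.clip(flat - background, 0, None)
```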
Polymer/metal composites (PMC) comprising polyvinylidene fluoride/nanocrystalline nickel with varying volume fractions of nickel (f_con) prepared under cold press show an insulator-to-metal transition (IMT) at the percolation threshold (f_c = f_con = 0.27). The two kinds of generalized Jonscher universal dielectric response (UDR) laws hold on both sides of the IMT, while for the percolative sample neither of the two laws holds. Neither the concept of dipolar relaxation nor anomalous low-frequency dispersion stands valid for f_c = 0.27; instead, a completely different, neutral and competing electrical behavior is observed over the entire range of frequencies. A third kind of Jonscher-like UDR emerges at f_c, and the relaxation law has been formulated as follows: the ratio of the imaginary and real parts of the dielectric constant remains constant over the entire range of frequency, from dc to any higher frequency. The value of the constant depends on the PMC, the dielectric constant of the polymer, the differences in conductivity and the fractions of the components of the PMC, and also on their connectivity arising from the difference in their process conditions. The emerged unique dielectric relaxation consists of multiple relaxations arising from the combination of the other relaxations (due to the two different types of species) present in the sample with f_con = 0.27. This novel material may be suitable for certain specific applications in electrical and electronics engineering.
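For orientation, the constant-ratio law can be written in the standard Jonscher power-law notation as sketched below; the identification of the constant with cot(nπ/2) is the textbook UDR result, and it assumes the susceptibility dominates the high-frequency permittivity, which is our assumption, not a claim taken from the paper.

```latex
% Constant loss-to-storage ratio under a Jonscher-type power-law response
% (sketch; assumes chi' >> eps_infinity so that eps''/eps' ~ chi''/chi').
\[
  \chi'(\omega) \propto \omega^{\,n-1}, \qquad
  \chi''(\omega) \propto \omega^{\,n-1}, \qquad 0 < n < 1,
\]
\[
  \frac{\varepsilon''(\omega)}{\varepsilon'(\omega)}
  \approx \frac{\chi''(\omega)}{\chi'(\omega)}
  = \cot\!\left(\frac{n\pi}{2}\right) = \text{const.}
  \qquad \text{(independent of } \omega\text{)}
\]
```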
The purpose of this chapter is to provide a perspective on the current techniques in the imaging analysis of the three-dimensional architecture of trabecular bone and their relevance to the diagnosis of osteoporosis. The emphasis lies on the analysis of images obtained by high-resolution X-ray-based CT and MRI techniques. The description of these acquisition techniques is followed by a presentation of the most common image processing methods. Different approaches (morphological, topological, fractal, etc.) used to derive the main architectural features of trabecular bone are illustrated and discussed.
In this paper, a color image segmentation algorithm and an approach to large-format image segmentation are presented, both focused on breaking images down into semantic objects for object-based multimedia applications. The proposed color image segmentation algorithm performs the segmentation in the combined intensity–texture–position feature space in order to produce connected regions that correspond to the real-life objects shown in the image. A preprocessing stage of conditional image filtering and a modified K-Means-with-connectivity-constraint pixel classification algorithm are used to allow for seamless integration of the different pixel features. Unsupervised operation of the segmentation algorithm is enabled by means of an initial clustering procedure. The large-format image segmentation scheme employs the aforementioned segmentation algorithm, providing an elegant framework for the fast segmentation of relatively large images. In this framework, the segmentation algorithm is applied to reduced versions of the original images in order to speed up the segmentation, resulting in a coarse-grained segmentation mask. The final fine-grained segmentation mask is produced by partial reclassification of the pixels of the original image to the already formed regions, using a Bayes classifier. As shown by experimental evaluation, this novel scheme provides fast segmentation with high perceptual segmentation quality.
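As a simplified illustration of clustering in a combined feature space, the sketch below runs plain K-Means on color-plus-position features; it is a stand-in, not the paper's algorithm, since the texture features, conditional filtering, and the connectivity constraint are all omitted, and the position weight is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment(img, k=4, pos_weight=0.5):
    """Cluster pixels on (color, x, y) features; positions are scaled to
    the color range so that pos_weight balances the two feature groups."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        img.reshape(h * w, -1).astype(float),            # intensity / color
        pos_weight * xx.ravel()[:, None] * 255.0 / w,    # position features
        pos_weight * yy.ravel()[:, None] * 255.0 / h,
    ])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(h, w)
```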
The detection of knee osteoarthritis (OA) is a subjective task, and even two highly experienced and well-trained readers might not always agree on a specific case. This problem is noticeable in OA population studies, in which different scoring projects provide significantly different scores for the same knee X-rays. Here we propose a method for the quantitative assessment and comparison of knee X-ray scoring projects in OA population studies. The method works by applying an image analysis method that automatically detects OA in knee X-ray images, and comparing the consistency of the scores when using each of the scoring projects as "gold standard." The method was applied to compare the Osteoarthritis Initiative (OAI) clinic-reading-derived Kellgren and Lawrence (K&L) scores to central reading, and showed that when using the derived K&L scores the automatic image analysis method was able to accurately differentiate between healthy joints and moderate OA joints in ~70% of the cases. When the OAI central reading scores were used as the gold standard, the detection accuracy rose to ~77%. These results show that the OAI central reading scores are more consistent with the X-rays, indicating that central reading better reflects the radiographic features associated with OA than the OAI K&L scores derived from clinic readings.
Image processing and analysis in a fuzzy set-theoretic framework are addressed. The various uncertainties involved in these problems and the relevance of fuzzy set theory in handling them are explained. Different image ambiguity measures based on fuzzy entropy and the fuzzy geometry of image subsets are mentioned. The flexibility in choosing membership functions is discussed. Illustrations of commonly used fuzzy image processing operations such as enhancement, edge detection, segmentation, skeleton extraction and feature extraction are then provided, along with their significance and characteristics. Their applications to some real-life problems, e.g., motion frame analysis, remotely sensed image analysis and modeling face images, are finally described. An extensive bibliography is also provided.
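A minimal sketch of one such ambiguity measure, the logarithmic fuzzy entropy of an image, is given below; the simple linear gray-level membership function stands in for the usual S-type function, which is an assumption for illustration.

```python
import numpy as np

def fuzzy_entropy(img):
    """Logarithmic fuzzy entropy: 0 for a crisp (binary) image, 1 when
    every pixel is maximally ambiguous (membership 0.5)."""
    span = float(np.ptp(img)) or 1.0                 # avoid division by zero
    mu = (img.astype(float) - img.min()) / span      # assumed linear membership
    mu = np.clip(mu, 1e-12, 1 - 1e-12)
    h = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
    return float(h.mean() / np.log(2))               # normalized to [0, 1]
```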
Watermarking is now considered an efficient means of assuring copyright protection and data owner identification. Watermark embedding techniques depend on the representation domain of the image (spatial, frequency, or multiresolution). Every domain has its specific advantages and limitations. Moreover, each technique in a chosen domain is found to be robust to specific sets of attack types. So we need more robust domains that overcome these limitations and respect all the watermarking criteria (capacity, invisibility and robustness). In this paper, a new watermarking method is presented using a new domain for image representation and watermark embedding: the mathematical Hessenberg transformation. This domain is found to be robust against a wide range of STIRMARK attacks such as JPEG compression, convolution filtering and noise adding. The robustness of the new technique in preserving and extracting the embedded watermark is demonstrated under various attack types, and the technique compares favorably with other methods in use. In addition, the proposed method is blind: the host image is not needed in the watermark detection process.
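The abstract does not give the embedding rule, so the sketch below shows one plausible blind scheme in the Hessenberg domain: quantization-index modulation (QIM) of a diagonal Hessenberg coefficient of an image block. The choice of coefficient and the quantization step are assumptions, not the paper's method.

```python
import numpy as np
from scipy.linalg import hessenberg

def embed_bit(block, bit, step=8.0):
    """Quantize one Hessenberg coefficient so its cell parity equals the bit."""
    H, Q = hessenberg(block.astype(float), calc_q=True)  # block = Q @ H @ Q.T
    q = np.floor(H[1, 1] / step)
    H[1, 1] = (q - q % 2 + bit + 0.5) * step             # QIM: parity encodes the bit
    return Q @ H @ Q.T

def extract_bit(block, step=8.0):
    """Blind extraction: recompute the Hessenberg form, read the parity."""
    H = hessenberg(np.asarray(block, dtype=float))
    return int(np.floor(H[1, 1] / step) % 2)

rng = np.random.default_rng(1)
blk = rng.uniform(0.0, 255.0, (8, 8))                    # one image block
print(extract_bit(embed_bit(blk, 1)), extract_bit(embed_bit(blk, 0)))  # -> 1 0
```

The diagonal entries of the Hessenberg form are stable under recomputation (they are invariant to the sign ambiguities of the decomposition), which is what makes the blind parity read-out above well defined.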
This work presents a preliminary investigation on the use of a Particle Swarm Optimization (PSO) algorithm variant for pattern matching in image analysis. Providing each particle with its own target and having the particles organized in the classical Von Neumann topology is shown to be a feasible way to obtain a swarm able to locate a pattern on a digital image.
Some preliminary tests on synthetic images show the effectiveness of the modified swarm algorithm, highlighting its insensitivity to basic transforms like mirroring, scaling and perspective deformations of the pattern.
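A minimal sketch of PSO-based template localization in this spirit follows: particles move over candidate (row, col) positions, fitness is the sum of squared differences with the template, and the social term uses the best neighbour in a wrapped Von Neumann grid. Swarm size, coefficients, and iteration count are illustrative assumptions, and the per-particle-target variant of the paper is not reproduced.

```python
import numpy as np

def pso_match(image, patt, side=6, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    image, patt = image.astype(float), patt.astype(float)
    ph, pw = patt.shape
    hi = np.array([image.shape[0] - ph, image.shape[1] - pw], float)
    n = side * side                              # particles on a side x side grid
    pos, vel = rng.uniform(0, 1, (n, 2)) * hi, np.zeros((n, 2))

    def fit(p):                                  # SSD between template and patch
        y, x = int(p[0]), int(p[1])
        return ((image[y:y + ph, x:x + pw] - patt) ** 2).sum()

    pbest, pcost = pos.copy(), np.array([fit(p) for p in pos])
    idx = np.arange(n).reshape(side, side)       # Von Neumann topology (wrapped)
    nbrs = [np.array([i, idx[(r - 1) % side, c], idx[(r + 1) % side, c],
                      idx[r, (c - 1) % side], idx[r, (c + 1) % side]])
            for r in range(side) for c in range(side) for i in [idx[r, c]]]
    for _ in range(iters):
        lbest = np.array([pbest[nb[np.argmin(pcost[nb])]] for nb in nbrs])
        r1, r2 = rng.uniform(size=(2, n, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (lbest - pos)
        pos = np.clip(pos + vel, 0, hi)
        cost = np.array([fit(p) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
    return pbest[np.argmin(pcost)].astype(int)   # best (row, col) match
```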
The assessment of the skin surface is of great importance in the dermocosmetic field to evaluate the response of individuals to medical or cosmetic treatments. In vivo quantitative measurements of changes in skin topographic structures provide a valuable tool, thanks to noninvasive devices. However, the high cost of the systems commonly employed limits, in practice, the widespread use of these devices for a routine-based approach. In this work we summarize the research activity carried out to develop a compact low-cost system for skin surface assessment based on capacitive image analysis. The accuracy of the capacitive measurements has been assessed by implementing an image fusion algorithm to enable a comparison between capacitive images and those obtained using high-cost profilometry, the most accurate method in the field. In particular, very encouraging results have been achieved in the measurement of the wrinkles' width. On the other hand, the experiments expose the native design limitations of the capacitive device, primarily conceived to work with fingerprints, in measuring the wrinkles' depth; these limitations point toward a specific re-design of the capacitive device.
Natural fibre reinforced thermoplastic composites find a wide array of applications in the automobile, building and construction industries. These composites are mostly produced by injection moulding or extrusion through properly designed dies. During these production processes, the shear forces exerted by the screw or ram lead to the degradation of the natural fibres. A screwless extruder that minimises fibre degradation and employs a reliable and low-technology process has already been developed. However, the fibre degradation caused by the screwless extruder has not been compared with that of conventional extruders. So, this study focuses on the influence of extrusion processes on the degradation of natural fibres in thermoplastic composites. Sisal fibres of 10 mm length were extruded with polypropylene, to furnish extrudates with a fibre mass fraction of 25%, using conventional single screw and screwless extruders. Polypropylene in the extrudates was dissolved in xylene in a Soxhlet process, and the extracted fibres were analysed for length variations. While fibre degradation in the form of fibre length variation is similar in both cases, it can be minimised in screwless extrusion by extending the gap between the front face of the cone and the orifice plate.
BaTiO3 (BTO) is considered the most commonly used ceramic material in multilayer ceramic capacitors due to its desirable dielectric properties. Considering that the miniaturization of electronic devices represents an expanding field of research, BTO was modified to increase the dielectric constant and the DC bias characteristic/sensitivity. This research presents the effect of N2 and air atmospheres on the morphological and dielectric properties of BTO nanoparticles modified with an organometallic salt at sintering temperatures of 1200°C, 1250°C, 1300°C, and 1350°C. Measured dielectric constants were up to 35,000, with very high values achieved in both atmospheres. Field emission scanning electron microscopy (FESEM) was used for morphological characterization, revealing a porous structure in all the samples. Software image analysis of the FESEM images showed a connection between particle and pore size distributions, as well as porosity. Based on the data from the image analysis, the prediction of dielectric properties in relation to morphology indicated that the yttrium-based organometallic salt reduced oxygen vacancy generation in the N2 atmosphere. DC bias sensitivity measurements showed that samples with a higher dielectric constant had more pronounced sensitivity to voltage change, but most of the samples were stable up to 100 V, making our modified BTO a promising candidate for capacitors.
Image labeling is an important and challenging task in the area of graphics and visual computing, where datasets with high-quality labeling are critically needed. In this paper, based on the commonly accepted observation that the same semantic object in images with different resolutions may have different representations, we propose a novel multi-scale cascaded hierarchical model (MCHM) to enhance general image labeling methods. Our proposed approach first creates multi-resolution images from the original one to form an image pyramid and labels each image in the pyramid individually. Next, it constructs a cascaded hierarchical model and a feedback circle between the image pyramid and the labeling methods. The original image labeling result is used to adjust the labeling parameters of the scaled images. Labeling results from the scaled images are then fed back to enhance the original image labeling result. These steps naturally form a global optimization problem under a scale-space condition, and we propose an iterative algorithm to run the model. The global convergence of the algorithm is proven through iterative approximation with latent optimization constraints. We have conducted extensive experiments with five widely used labeling methods on five popular image datasets. Experimental results indicate that MCHM substantially improves the labeling accuracy of state-of-the-art image labeling approaches.
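The sketch below illustrates the overall pyramid-plus-feedback loop in simplified form; the `base_labeler` interface (returning per-pixel class scores), the averaging fusion, the scales, and the toy two-class labeler are all assumptions for illustration, and the paper's parameter-adjustment feedback is approximated here by passing the fused result back to the labeler as a prior.

```python
import numpy as np
from scipy.ndimage import zoom

def mchm(image, base_labeler, scales=(1.0, 0.5, 0.25), rounds=3):
    prior = None
    for _ in range(rounds):                       # feedback circle
        fused = None
        for s in scales:                          # label each pyramid level
            im_s = zoom(image, (s, s) + (1,) * (image.ndim - 2), order=1)
            scores = base_labeler(im_s, prior)    # per-pixel class scores (h, w, C)
            up = zoom(scores, (image.shape[0] / scores.shape[0],
                               image.shape[1] / scores.shape[1], 1), order=1)
            fused = up if fused is None else fused + up
        prior = fused / len(scales)               # fed back on the next round
    return prior.argmax(axis=-1)                  # final label map

def toy_labeler(img, prior):
    """Hypothetical stand-in labeler: two classes, dark vs. bright.
    (A real labeler would also use the fed-back prior.)"""
    g = img if img.ndim == 2 else img.mean(axis=-1)
    scores = np.stack([g.max() - g, g], axis=-1)
    return scores / scores.sum(axis=-1, keepdims=True)

img = np.zeros((64, 64)); img[20:40, 20:44] = 1.0
labels = mchm(img, toy_labeler)
```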
In the literature, references to EM estimation of product mixtures are not very frequent. The simplifying assumption of product components, e.g. diagonal covariance matrices in the case of Gaussian mixtures, is usually considered only a compromise forced by computational constraints or a limited dataset. We have found that product mixtures are rarely used intentionally as a preferable approximating tool. Probably, most practitioners do not "trust" the product components because of their formal similarity to "naive Bayes models." Another reason could be an unrecognized numerical instability of the EM algorithm in multidimensional spaces. In this paper we recall that the product mixture model does not imply the assumption of independence of variables; it is not even restrictive if the number of components is large enough. In addition, the product components increase the numerical stability of the standard EM algorithm, simplify the EM iterations and have other important advantages. We discuss and explain the implementation details of the EM algorithm and summarize our experience in estimating product mixtures. Finally we illustrate the wide applicability of product mixtures in pattern recognition and in other fields.
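For concreteness, a minimal from-scratch sketch of EM for a Gaussian product mixture (diagonal covariances) follows; it makes the simplifications the abstract alludes to explicit: the log-likelihood factors over dimensions, and the M-step has closed-form per-dimension updates with no matrix inversion. Initialization, iteration count, and regularization constants are assumptions.

```python
import numpy as np

def em_diag_gmm(X, k=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(k, 1.0 / k)                      # mixing weights
    mu = X[rng.choice(n, k, replace=False)]      # means: random data points
    var = np.tile(X.var(axis=0), (k, 1)) + 1e-6  # per-dimension variances
    for _ in range(iters):
        # E-step in the log domain: product components factor over
        # dimensions, so the log-density is a sum of 1D Gaussian terms.
        logp = (np.log(w)
                - 0.5 * (((X[:, None, :] - mu) ** 2 / var)
                         + np.log(2 * np.pi * var)).sum(axis=2))
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(logp); r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form per-dimension updates, no matrix inversion.
        nk = r.sum(axis=0) + 1e-12
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X**2) / nk[:, None] - mu**2 + 1e-6
    return w, mu, var

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(4, 0.5, (200, 5))])
w, mu, var = em_diag_gmm(X, k=2)
```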