This letter presents the modelling of a morphological thinning algorithm suggested by Jang and Chin [1] on the four models of shared-memory SIMD computers. Time and cost complexity analyses for the models are given. The performance of this algorithm on SIMD computers is compared with that of a recently proposed conventional thinning algorithm [2].
This paper is aimed at 3D object understanding from 2D images, including articulated objects in an active vision environment, using interactive and Internet virtual reality techniques. Generally speaking, an articulated object can be divided into two portions: a main rigid portion and an articulated portion. It is more complicated than a "rigid" object in that the relative positions, shapes, or angles between the main portion and the articulated portion have essentially infinite variations, in addition to the infinite variations of each individual rigid portion due to orientations, rotations, and topological transformations. A new method generalized from linear combination is employed to investigate such problems. It uses very few learning samples, and can describe, understand, and recognize 3D articulated objects while the object's status is being changed in an active vision environment.
An improved scheme for integrity protection of binary images representing text documents based on the topologies of these images is proposed. The image skeleton and the inverse skeleton are found through thinning and the skeleton signature is combined with watermark information. The result is encrypted asymmetrically and hidden in embeddable locations of the image. A series of attack experiments are conducted to demonstrate that the approach is capable of detecting tampering. Even single malicious pixel modifications are detected. The approach has a lower computation cost than previous methods.
Perceiving the content displayed on the screen of a computer display using computer vision is a challenging topic when the target shifts from the physical world to the digital world. The screen area of a given computer display image must first be segmented and corrected before the content displayed on the screen can be perceived. An automatic approach is proposed for the segmentation and deformation correction of the screen area in a computer display image. Owing to inherent characteristics of ordinary computer displays, the segmentation can be performed by contour tracing. After contouring the screen area, its four corner locations can be readily identified. By mapping the detected corners to the closest regular screen rectangle, the deformed screen image can be restored with an affine transformation. As a computer vision application that "looks at" a screen image, the segmented screen region can be recovered within a short time. The experiments demonstrate that about 70% of cases are handled within 33 processed frames, and the remainder within 51 processed frames, confirming the feasibility of the proposed approach.
Reducing the branching effect and increasing boundary noise immunity are of great importance when thinning patterns. This paper presents an approach based on the medial axis transform (MAT) to obtain a connected, one-pixel-wide skeleton with few redundant branches. Although the skeleton obtained by the MAT is isotropic with few redundant branches, its skeleton points are usually disconnected. To retain the merits of the MAT while avoiding its disadvantages, the proposed approach consists of distance-map generation, grouping, ridge-path linking, and refining, yielding a connected one-pixel-wide thin line. The ridge-path linking strategy guarantees that the skeletons are connected, whereas the refining process can be readily performed by a conventional thinning process to obtain the one-pixel-wide thinned pattern. Performance, evaluated in terms of the branching effect, signal-to-noise ratio (SNR), and measurement of skeleton deviation (MSD), confirms the feasibility of the proposed MAT-based thinning for line patterns.
Segmented machine-printed Chinese characters generally suffer from small distortions and small rotations due to noise and segmentation errors. These phenomena prevent many conventional methods, especially those based on directional codes, from reaching very high recognition rates, say above 99%. In this paper, regression analysis is proposed as a means of overcoming these problems. First, thinning is applied to each segmented character, which is enclosed in a proper square box and filtered for noise reduction beforehand. Second, the square thinned character image is divided into 9×9 meshes (blocks), instead of the conventional 8×8, on account of the characteristics of Chinese characters and for global feature extraction. Third, line regression is applied to all black points in each block to obtain either the value of the slope angle, or a dispersion code derived from the sample correlation coefficient after proper transformation. Thus, each block is coded as one of three cases: 'blank', a slope-angle value, or 'dispersion'. The peripheral black points are used for preclassification. Proper scores for matching two characters are designed so that learning and recognition are quite efficient. The objective of this optical character recognition system is to achieve a very small misrecognition rate and a tolerable rejection rate. Experiments with three fonts, each consisting of 5401 characters, were carried out. The overall rejection rate is 1.25% and the overall misrecognition rate is 0.33%, which are acceptable for most users.
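The per-block coding step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the correlation threshold `r_threshold` and the use of the total-least-squares angle are assumptions introduced here.

```python
import math

def block_code(points, r_threshold=0.5):
    """Code a mesh block by regressing a line through its black pixels.

    points: list of (x, y) coordinates of black pixels in the block.
    Returns 'blank', a slope angle in degrees, or 'dispersion'.
    r_threshold is an illustrative cutoff, not taken from the paper.
    """
    n = len(points)
    if n == 0:
        return "blank"
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    if sxx == 0 or syy == 0:              # perfectly vertical / horizontal stroke
        return 90.0 if sxx == 0 else 0.0
    r = sxy / math.sqrt(sxx * syy)        # sample correlation coefficient
    if abs(r) < r_threshold:              # weakly linear block: scattered points
        return "dispersion"
    # total-least-squares orientation, symmetric in x and y
    return 0.5 * math.degrees(math.atan2(2 * sxy, sxx - syy))
```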
A thinning method for binary images is proposed which converts digital binary images into line patterns. The proposed method suppresses shape distortion as well as false feature points, thereby producing more natural line patterns than existing methods. In addition, this method guarantees that the produced line patterns are one pixel in width everywhere. In this method, an input binary image is transformed into a graph in which 1-pixels correspond to nodes and neighboring nodes are connected by edges. Next, nodes unnecessary for preserving the topology of the input image and the edges connecting them are deleted symmetrically. Then, edges that do not contribute to the preservation of the topology of the input image are deleted. The advantages of this graph-based thinning method are confirmed by applying it to ideal line patterns and geographical maps.
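The graph construction this method starts from can be sketched as below; only the image-to-graph mapping is shown, as a hedged illustration. The paper's symmetric deletion of topology-irrelevant nodes and edges is not reproduced here.

```python
def image_to_graph(img):
    """Map a binary image to a graph: 1-pixels become nodes, and
    8-neighbouring 1-pixels are joined by edges.

    img: 2D list of 0/1.  Returns (nodes, edges) where nodes is a set of
    (y, x) tuples and edges a set of frozensets of node pairs.
    """
    h, w = len(img), len(img[0])
    nodes = {(y, x) for y in range(h) for x in range(w) if img[y][x] == 1}
    edges = set()
    for (y, x) in nodes:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy, dx) != (0, 0) and (y + dy, x + dx) in nodes:
                    edges.add(frozenset({(y, x), (y + dy, x + dx)}))
    return nodes, edges
```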
For an image consisting of wire-like patterns, skeletonization (thinning) is often necessary as the first step towards feature extraction. A serious problem, however, is that line intersections (X-crossings) become elongated when a thinning algorithm is applied to the image; that is, X-crossings are usually difficult to preserve through thinning. In this paper, we present a non-iterative line thinning method that preserves the X-crossings of the lines in the image. The skeleton is formed by the mid-points of the run-length encoding of the patterns. Line intersection areas are identified via a histogram analysis of the run lengths, and intersections are detected at locations where sequences of runs merge or split.
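The core run-length mid-point idea can be sketched as follows, for horizontal runs only; the paper's histogram analysis of run lengths and its merge/split detection at X-crossings are not reproduced in this minimal sketch.

```python
def runlength_midpoint_skeleton(img):
    """Form a skeleton from the mid-points of horizontal foreground runs.

    img: 2D list of 0/1, 1 = foreground.
    Returns a same-sized image whose 1s are the run mid-points.
    """
    h, w = len(img), len(img[0])
    skel = [[0] * w for _ in range(h)]
    for y in range(h):
        x = 0
        while x < w:
            if img[y][x] == 1:
                start = x
                while x < w and img[y][x] == 1:
                    x += 1                      # scan to the end of the run
                skel[y][(start + x - 1) // 2] = 1  # keep the run's mid-point
            else:
                x += 1
    return skel
```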
In this paper, we propose the Cross Section Sequence Graph which describes line images in a simple and well structured form. It is composed of regular regions called cross section sequences and singular regions. A cross section sequence is a sequence of cross sections, each of which is constructed as a pair of boundary points almost perpendicular to the direction of the line. The sequence corresponds to a straight or curved line segment. The remaining regions are extracted as singular regions, each of which corresponds to an end point region, corner, branch, cross, and so on. The cross section sequence graph is useful for many kinds of feature extraction, especially for skeletonization since a singular region can be analyzed from adjacent regular regions. Experimental results show that the skeleton extracted from the cross section sequence graph is better than that of a pixel-wise skeletonization (thinning) in terms of both processing speed and the quality of the skeleton.
A 4-subiteration parallel thinning algorithm, based on 3×3 operations, is proposed. It is shown that by taking into account bidirectional compression in each subiteration, pixels belonging to a pair of successive contours, a 4-contour and an 8-contour, are removed from the pattern in every iteration. Therefore, contour pixel removal proceeds towards the inner part of the pattern according to the octagonal metric. This provides a resulting medial line which is centered in the pattern in a quasi-Euclidean sense and is less sensitive to pattern rotation. The performance of the algorithm is discussed and compared with that of some well-known parallel algorithms.
One of the most widely used methods for preprocessing binary images is thinning. The popularity of this method rests on the fact that considerable data reduction is achieved while "essential" properties of the original image are retained. Moreover, topological features, which cannot be verified by a genuinely parallel method (as shown by Minsky and Papert [22]), are more easily treated in thinned images. For these reasons, many articles on this topic have been published in the literature. Most of them are concerned with modifications of existing methods in order to obtain "nicer" results from the thinning process. Many results of numerical experiments are also available in different publications.
The aim of this paper is to show that the quite natural requirement of invariance of the results obtained by thinning leads nearly automatically to a method proposed by the authors in different publications. Moreover, this method is genuinely parallel and well-defined so that it is possible to investigate it theoretically.
The practical feasibility of the method is also discussed.
This paper describes a reconstructable thinning process which is based on one-pass parallel thinning and the morphological skeleton transformation. It reduces a binary digital pattern into a unit-width connected skeleton enabling perfect reconstruction of the original pattern. The process uses thinning templates to iteratively remove boundary pixels and structuring templates of the morphological skeleton transformation to retain critical feature pixels for reconstruction. The thinning templates together with the extracted feature pixels ensure skeletal connectivity, unit width, and reconstructability. These essential properties are guaranteed regardless of the chosen structuring templates used in the morphological skeleton transformation. The thinning process is analyzed and results are presented. A number of implementation issues, such as the choice of structuring templates, the computational model, noise filtering, and computational efficiency, are also addressed.
As a result of its central role in the preprocessing of image patterns, or because of its intrinsic appeal, the design of skeletonization algorithms has been a very active research area. However, few attempts have been made to evaluate the performance of different skeletonization algorithms.
This paper presents the results of experiments to evaluate the performance of 20 skeletonization algorithms previously published in the literature. These algorithms have been implemented on the SUN 3/60 workstation in C and tested with a large variety of character patterns. A systematic comparison of these algorithms has been made based on the following criteria: reconstructibility, computation speed, similarity to the reference skeleton, quality of the skeleton, connectivity after skeletonization, and the degree of parallelism.
A metric defines the distance between any two points. The “natural” metrics of the digital world do not approximate the Euclidean metric of the continuous world well. Skeletonization (sometimes named topology preserving shrinking or homotopic thinning) is one example in which this leads to unacceptable results. In the present work we propose and demonstrate skeletonization using path-based metrics which are a better approximation of the Euclidean metric. Moreover, we achieve a good performance on sequential processors by processing each pixel only once in the calculations of binary (Hilditch) and grey-value (upper) skeletons.
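A standard example of a path-based metric that approximates the Euclidean metric better than the city-block or chessboard metrics is the 3-4 chamfer distance, sketched below with the classical two-pass sweep (each pixel is visited once per pass). This illustrates path-based metrics in general and is not the paper's specific skeletonization procedure.

```python
def chamfer_3_4(img):
    """Two-pass 3-4 chamfer distance transform.

    img: 2D list of 0/1, 1 = object.  Returns, for each pixel, the
    path-based distance to the nearest background pixel, with cost 3 for
    orthogonal steps and 4 for diagonal steps (distance / 3 approximates
    the Euclidean distance).
    """
    h, w = len(img), len(img[0])
    INF = 4 * (h + w)  # exceeds any possible chamfer distance in the image
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    fwd = [(-1, -1, 4), (-1, 0, 3), (-1, 1, 4), (0, -1, 3)]
    bwd = [(1, 1, 4), (1, 0, 3), (1, -1, 4), (0, 1, 3)]
    for y in range(h):                       # forward sweep: top-left mask
        for x in range(w):
            for dy, dx, c in fwd:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + c)
    for y in range(h - 1, -1, -1):           # backward sweep: bottom-right mask
        for x in range(w - 1, -1, -1):
            for dy, dx, c in bwd:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + c)
    return d
```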
The stroke analysis method is an effective approach to handwritten Chinese character recognition, but it is very difficult to extract the strokes accurately. In this paper, a robust stroke extraction method is proposed. First, smoothing and thinning are applied to smooth the shape and to obtain the skeleton of the observed character. Then the end points, internal points, and fork points are detected by calculating their crossing numbers, while the corner points are determined by a knowledge-based iterative method. Virtual end points are introduced to separate a stroke into a number of line segments without losing the connection relations among them. By representing each line segment as a vertex and the connection relation between two segments as an edge, the observed character can be represented by an attributed graph. Finally, a stroke extraction procedure is proposed to extract the strokes from the global structures of the character. After each stroke of a character is extracted, the cross points can also be determined. Experimental results have shown that the proposed method is more effective than the other methods [3, 5–6].
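The crossing number used to classify skeleton points is a standard quantity and can be sketched as below: it counts 0-to-1 transitions while circling a pixel's 8-neighbourhood, so CN = 1 marks an end point, CN = 2 an internal point, and CN ≥ 3 a fork point. The paper's knowledge-based corner detection is not sketched here.

```python
def crossing_number(img, y, x):
    """Crossing number of the skeleton pixel at (y, x): the count of
    0 -> 1 transitions when traversing its 8 neighbours in order.
    Assumes (y, x) is not on the image border."""
    # neighbours in clockwise order starting from the east neighbour
    offs = [(0, 1), (1, 1), (1, 0), (1, -1),
            (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    vals = [img[y + dy][x + dx] for dy, dx in offs]
    return sum(1 for i in range(8) if vals[i] == 0 and vals[(i + 1) % 8] == 1)
```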
We propose a new sequential width-independent thinning algorithm for binary images. The algorithm uses a 4-neighbor distance transformation, and is designed to preserve the topological properties and shape of the object to be thinned. Comparison of the run time with a standard algorithm is given.
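The 4-neighbor (city-block) distance transform the algorithm builds on can be computed with the classical two-pass sweep, sketched below as a hedged illustration; the paper's full thinning procedure on top of this transform is not reproduced.

```python
def distance_transform_4(img):
    """Two-pass 4-neighbour (city-block) distance transform.

    img: 2D list of 0/1, 1 = object.  Returns, for each pixel, the
    city-block distance to the nearest background (0) pixel.
    """
    h, w = len(img), len(img[0])
    INF = h + w  # larger than any possible city-block distance here
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # forward pass: propagate distances from the top and left neighbours
    for y in range(h):
        for x in range(w):
            if d[y][x]:
                if y > 0:
                    d[y][x] = min(d[y][x], d[y - 1][x] + 1)
                if x > 0:
                    d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # backward pass: propagate from the bottom and right neighbours
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

Ridge pixels of this distance map (local maxima) lie on the medial axis, which is why such transforms underpin width-independent thinning.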
In this paper a new thinning algorithm is presented. It is simple yet robust, and retains the advantages of other key skeletonization algorithms. The new algorithm uses only a very small set of pixel-deletion criteria, and is faster and easier to implement. Its advantages and limitations are discussed and compared with those of other algorithms. Several examples are illustrated.
For reliable recognition of handwritten Korean characters, stroke extraction is one of the most important tasks. Classical approaches are based on the thinning operation and are limited by the problem of distorting the original pattern shape. In this paper, we propose a shape decomposition algorithm that decomposes a handwritten Korean character pattern into a set of near-convex parts. The algorithm is designed based on the observation that Korean character patterns can be decomposed using two shape features called the T-junction and B-junction. To robustly handle complex situations in which three or more strokes meet at a joint, we detect joint parts from the shape-decomposed pattern. We classify the parts into several types depending on their interrelationships. A line segment is extracted from each part such that it best represents the part and keeps connectivity with neighboring parts. We develop five criteria for the quality evaluation of our method and the thinning-based methods. The evaluation shows that our method is superior to the thinning methods, and our results are very promising for subsequent processes such as recognition.
In this article, the new concept of diffusion fields based on partial differential equations is applied to character image processing. Specific diffusion fields are developed according to character image structures and features, depending on the scope of application. This allows a straightforward one-dimensional numerical scheme to be applied to image enhancement, erosion, dilation, and thinning. The strength of this approach is the flexibility brought by the diffusion field, which can be defined to take into account the specific difficulties of grayscale character images with a minimum of prior information. The algorithm is thus shown to be robust to singularity points, the creation of spurious branches, variations in stroke thickness and intensity, multimodality, noise, and image background patterns. The resulting enhanced images are noise-free, with sharp edges and the local typical intensity levels preserved. Thinned characters are connected skeletons located on the ridge of the initial character; again, the typical intensities of the character and background are preserved.
In this paper, we propose a novel skew estimation technique for binary document images based on the Boundary Growing Method (BGM), thinning, and moments. BGM helps extract the text line blocks from the document. Thinning [1] is performed to fit the best line for each extracted text line block. Skew is then computed for each thinned line using second-order moments. Several experiments have been conducted on various types of documents, such as documents in south Indian languages, English documents, journals, text with pictures, noisy images, and documents with different fonts and resolutions, to demonstrate the robustness of the proposed method. The experimental results show that the proposed method outperforms existing methods in terms of both mean and standard deviation.
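The moment-based skew computation can be sketched with the standard second-order central-moment orientation formula; this is a generic illustration that assumes the BGM block extraction and thinning have already been performed upstream, as in the paper.

```python
import math

def skew_angle(points):
    """Estimate the skew of a thinned text line from its second-order
    central moments.

    points: list of (x, y) coordinates of the thinned line's pixels.
    Returns the orientation angle in degrees (standard moment-based
    orientation: 0.5 * atan2(2*mu11, mu20 - mu02)).
    """
    n = len(points)
    mx = sum(x for x, _ in points) / n          # centroid x
    my = sum(y for _, y in points) / n          # centroid y
    mu20 = sum((x - mx) ** 2 for x, _ in points)
    mu02 = sum((y - my) ** 2 for _, y in points)
    mu11 = sum((x - mx) * (y - my) for x, y in points)
    return 0.5 * math.degrees(math.atan2(2 * mu11, mu20 - mu02))
```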