Orthogonal Fourier–Mellin (OFM) moments have better feature representation capabilities and are more robust to image noise than the conventional Zernike and pseudo-Zernike moments. However, OFM moments have not been widely used as feature descriptors because they do not possess scale invariance. This paper discusses the drawbacks of the existing methods for extracting OFM moments and proposes improved OFM moments. A theoretical proof that the improved OFM moments are invariant to both rotation and scale is given. The performance of the improved OFM moments is examined experimentally on trademark images, and their invariance is shown to be greatly improved over that of current methods.
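As background for readers unfamiliar with these descriptors, the sketch below approximates baseline OFM moments Φ_nm for a grayscale image mapped onto the unit disk, using radial polynomials Q_n(r) with the coefficients commonly attributed to Sheng and Shen's original definition. The normalization convention and the paper's improved, scale-invariant variant are assumptions not taken from the abstract; treat this as an illustrative baseline, not the authors' method.

```python
import numpy as np
from math import factorial

def radial_poly(n, r):
    """Radial polynomial Q_n(r) of the orthogonal Fourier-Mellin moments
    (coefficients as in the commonly cited definition; verify against the paper)."""
    q = np.zeros_like(r)
    for s in range(n + 1):
        a = ((-1) ** (n + s)) * factorial(n + s + 1) / (
            factorial(n - s) * factorial(s) * factorial(s + 1))
        q += a * r ** s
    return q

def ofm_moment(img, n, m):
    """Approximate the OFM moment Phi_{nm} of a grayscale image by mapping
    pixel coordinates onto the unit disk and summing over interior pixels."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    xn = 2 * x / (w - 1) - 1          # map columns into [-1, 1]
    yn = 2 * y / (h - 1) - 1          # map rows into [-1, 1]
    r = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    inside = r <= 1.0                  # keep only pixels on the unit disk
    kernel = radial_poly(n, r) * np.exp(-1j * m * theta)
    dA = (2.0 / (w - 1)) * (2.0 / (h - 1))   # Cartesian area of one pixel
    # The polar element r dr dtheta equals dx dy, so a plain Cartesian sum
    # approximates (1/2pi) * integral of f(r,t) Q_n(r) e^{-jmt} r dr dt.
    return (img[inside] * kernel[inside]).sum() * dA / (2 * np.pi)
```

The magnitude |Φ_nm| is rotation invariant; the scale invariance that the paper contributes is precisely what this baseline lacks.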
This paper introduces a family of rectangularity measures. The measures depend on two parameters, which make them flexible, i.e., adaptable to a concrete application. Several rectangularity measures exist in the literature, designed to evaluate numerically how much a given shape differs from a perfect rectangle. None of these measures distinguishes rectangles whose edge ratios differ; i.e., they assume that all rectangles (including squares) have the same shape. Such a property can be a disadvantage in applications. In this paper, we consider differently elongated rectangles to have different shapes, and propose a family of new rectangularity measures that assign different values to rectangles whose edge ratios differ. The new rectangularity measures are invariant under translation, rotation, and scaling. They range over the interval ]0, 1] and attain the value 1 only for perfect rectangles with a desired edge ratio.
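To make the limitation concrete, a common baseline rectangularity measure is the ratio of a shape's area to the area of its minimum-area bounding rectangle; it equals 1 for every perfect rectangle regardless of elongation, which is exactly the behavior the paper sets out to change. The sketch below implements that baseline with OpenCV, plus a purely hypothetical two-parameter variant (rho and beta are our illustrative parameters, not the paper's construction) that attains 1 only for rectangles with a desired edge ratio.

```python
import cv2
import numpy as np

def rectangularity(contour):
    """Baseline: shape area over the area of its minimum-area bounding
    rectangle. Translation-, rotation-, and scale-invariant; equals 1 for
    perfect rectangles of ANY edge ratio (the limitation discussed above)."""
    shape_area = cv2.contourArea(contour)
    (cx, cy), (wd, ht), angle = cv2.minAreaRect(contour)
    rect_area = wd * ht
    return shape_area / rect_area if rect_area > 0 else 0.0

def rectangularity_with_ratio(contour, rho=2.0, beta=1.0):
    """Hypothetical edge-ratio-sensitive variant (NOT the paper's measure):
    penalize deviation of the bounding rectangle's edge ratio from rho,
    with beta controlling the strength of the penalty."""
    (cx, cy), (wd, ht), angle = cv2.minAreaRect(contour)
    if min(wd, ht) == 0:
        return 0.0
    ratio = max(wd, ht) / min(wd, ht)
    penalty = (min(ratio, rho) / max(ratio, rho)) ** beta  # in ]0, 1]
    return rectangularity(contour) * penalty
```

Since the penalty factor lies in ]0, 1] and equals 1 only when the edge ratio matches rho, the product reaches 1 only for perfect rectangles with the desired edge ratio, mirroring the property the abstract claims for the proposed family.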
In this paper, we propose a simple but efficient method for recognizing two-dimensional shapes regardless of their translation, rotation, and scale. In our scheme, we use all of the boundary points to calculate the first principal component, which serves as the first shape feature. Next, by dividing the boundary points into groups according to their projections onto the first principal component, each shape is partitioned into several blocks. These blocks are processed separately to produce the remaining shape features. In shape matching, we compare two shapes by computing the difference between their two feature sets to decide whether the shapes are similar.
Unlike most other shape recognition schemes, our method uses a fixed amount of storage to represent a shape. The time complexity of the matching algorithm is O(n), where n is the number of blocks. The matching algorithm therefore requires little computation time and is invariant to translation, rotation, and scaling of shapes.
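The abstract leaves the per-block features unspecified, so the sketch below assumes one plausible instantiation (the mean centroid distance of each block's points) purely to illustrate the pipeline: centering for translation invariance, PCA projection, blocking, a fixed-size feature vector, and an O(n) match. The sign ambiguity of the principal component, which can reverse block order, is ignored here; the authors' exact construction may differ.

```python
import numpy as np

def shape_features(boundary, n_blocks=8):
    """Sketch of the block-feature idea (details assumed, not the authors'
    exact method): project boundary points onto their first principal
    component, split the projections into n_blocks equal-width groups, and
    summarize each block with one statistic. The feature size is fixed
    regardless of how many boundary points the shape has."""
    pts = np.asarray(boundary, dtype=float)
    pts -= pts.mean(axis=0)                      # translation invariance
    # first principal component via SVD of the centered points
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    proj = pts @ vt[0]                           # scalar projections onto the PC
    scale = np.ptp(proj)                         # PC extent, for scale normalization
    proj_n = (proj - proj.min()) / scale
    dist = np.linalg.norm(pts, axis=1) / scale   # normalized centroid distances
    bins = np.minimum((proj_n * n_blocks).astype(int), n_blocks - 1)
    feats = np.array([dist[bins == b].mean() if np.any(bins == b) else 0.0
                      for b in range(n_blocks)])
    return feats

def shape_distance(f1, f2):
    """Compare two shapes by the difference between their feature vectors;
    O(n) in the number of blocks n."""
    return np.abs(f1 - f2).sum()
```

Centroid distances are unchanged by rotation and the projections are divided by the PC extent, so the features are invariant to translation, rotation, and scale up to the sign caveat noted above, consistent with the properties the abstract claims.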