Similarity-based retrieval from shape databases typically employs a pairwise shape matcher and one or more indexing techniques. In this paper, we focus specifically on the design of a pairwise matcher for retrieval of 2-D shape contours. In the past, the matchers used for the one-to-many problem of shape retrieval were often designed for the problem of matching an isolated pair of shapes. This approach fails to exploit two characteristics of the one-to-many matching problem that distinguish it from the one-to-one matching problem. First, the output of shape retrieval systems tends to be dominated by matches to relatively similar shapes. In this paper, we demonstrate that by not expending computational resources on unneeded accuracy of matching, both the speed and the accuracy of retrieval can be increased. Second, the shape database is a large statistical sample of the population of shapes. We introduce a probabilistic model for exploiting that statistical knowledge to further increase retrieval accuracy. The model has several benefits: (1) It does not require class labels on the database shapes, thus supporting unlabeled retrieval. (2) It does not require feature independence. (3) It is parameter-free. (4) It has a fast runtime implementation. The probabilistic model is general and thus potentially applicable to other one-to-many matching problems.
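As a rough, hedged illustration only (not the model proposed above), one generic way to exploit the database as a statistical sample, without class labels, feature-independence assumptions, or tunable parameters, is to recalibrate each raw pairwise distance against the empirical distribution of distances observed across the database; all names and choices in the Python sketch below are illustrative.

```python
import numpy as np

def empirical_match_probability(query_distance, sample_distances):
    """Score a query-to-database distance by the fraction of distances in a
    database sample that are at least as large (an empirical tail probability).
    Illustrative only: it needs no class labels, no feature-independence
    assumption, and no tuned parameters, but it is not necessarily the
    probabilistic model of the paper."""
    sample = np.sort(np.asarray(sample_distances))
    # Rank of the query distance within the sample (smaller distance = better match).
    rank = np.searchsorted(sample, query_distance, side="right")
    return 1.0 - rank / sample.size

# Hypothetical usage: distances[i] is the pairwise matcher output between the
# query and database shape i; rank shapes by how unusually well they match.
rng = np.random.default_rng(0)
distances = rng.gamma(shape=2.0, scale=1.0, size=1000)
ranking = sorted(enumerate(empirical_match_probability(d, distances) for d in distances),
                 key=lambda t: t[1], reverse=True)
```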
In this paper, we present a novel method that uses two-level similarity measures for shape-based image retrieval. We first identify the dominant points of a given shape, and then calculate their geometric moments and the distances between consecutive dominant points. A spectrum representing the normalized geometric moments versus the normalized distances is generated, and its area and curve length are computed. We use these two values as similarity features for indexing in coarse-grained shape retrieval. Furthermore, we use the cross-sectional area and curve-length distribution for indexing in fine-grained shape retrieval. Experimental results show that the proposed method is simple and efficient and can reach an accuracy rate of 95%.
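The coarse-grained stage could be sketched roughly as follows; since the abstract does not fix the dominant-point detector, the moment order, or the normalization, those choices in the Python below are assumptions, and the fine-grained stage is omitted.

```python
import numpy as np

def dominant_points(contour, keep_every=10):
    """Crude stand-in for a dominant-point detector: subsample the closed
    contour. A real detector would select perceptually significant
    (e.g. high-curvature) boundary points."""
    return np.asarray(contour)[::keep_every]

def coarse_shape_features(contour):
    """Build a curve of normalized geometric moments versus normalized
    distances over consecutive dominant points, then summarize it by its
    area and curve length (the two coarse-grained similarity features)."""
    pts = dominant_points(contour)
    nxt = np.roll(pts, -1, axis=0)                  # next dominant point (wraps around)
    seg_len = np.linalg.norm(nxt - pts, axis=1)     # distance between consecutive points
    mids = (pts + nxt) / 2.0
    moments = np.sum((pts - mids) ** 2 + (nxt - mids) ** 2, axis=1)  # assumed 2nd-order moment
    x = np.cumsum(seg_len) / seg_len.sum()          # normalized distance axis
    y = moments / (moments.max() + 1e-12)           # normalized moment axis
    area = np.trapz(y, x)                           # area under the spectrum
    curve_len = np.sum(np.hypot(np.diff(x), np.diff(y)))  # length of the spectrum curve
    return area, curve_len
```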
We introduce two complementary feature extraction methods for shape-similarity-based retrieval of 3D object models. The proposed methods achieve effective and robust search for similar 3D models and support two essential query modes, namely query by 3D model and query by 2D image. Our feature extraction scheme is inspired by the observation of human behavior in recognizing 3D objects. The process of extracting spatial arrangement from a 3D object can be considered as using human tactile sensation without visual information. On the other hand, the process of extracting 2D features from multiple views can be considered as examining an object by moving the viewing points (or camera positions). We propose a hybrid method of 3D model identification by object-centered feature extraction, which utilizes the Extended Gaussian Image (EGI) surface normal distribution and the distribution of distances between object surface points and the origin. A second technique, used in parallel, is a hybrid method using view-centered features, which adopts simple geometric attributes such as circularity, rectangularity, and eccentricity. To generate a signature for the view-centered features, we measure the distances of a feature between different views and construct a histogram of these distances. We also address the fundamental problem of obtaining sample points on an object surface, which is important for extracting reliable features from the object model.
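A minimal sketch of the object-centered cues, assuming the surface has already been sampled into points and unit normals, might look as follows; the bin counts, the normalization, and the use of the center of mass as the origin are assumptions rather than the authors' exact choices, and the view-centered features are not shown.

```python
import numpy as np

def distance_distribution(points, bins=32):
    """Histogram of distances between surface sample points and the object's
    center (one object-centered cue; binning and normalization are assumed)."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    hist, _ = np.histogram(d / (d.max() + 1e-12), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def egi_histogram(normals, bins_theta=8, bins_phi=16):
    """Coarse Extended Gaussian Image: a 2D histogram of unit surface normals
    in spherical coordinates (a simple discretization for illustration)."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))            # polar angle
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2.0 * np.pi)   # azimuth
    hist, _, _ = np.histogram2d(theta, phi, bins=[bins_theta, bins_phi],
                                range=[[0.0, np.pi], [0.0, 2.0 * np.pi]])
    return (hist / hist.sum()).ravel()
```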
In this paper, a new 2D shape representation, the Multiscale Triangle-Area Representation (MTAR), is proposed. This representation utilizes a simple geometric principle, namely the area of the triangles formed by the shape boundary points. The wavelet transform is used for smoothing and decomposing the shape boundaries into multiscale levels. At each scale level, a TAR image and the corresponding maxima-minima lines are obtained. The resulting MTAR is more robust to noise, less complex, and more selective than similar methods such as the curvature scale space (CSS). Furthermore, MTAR is invariant to general affine transformations. The proposed MTAR is tested and compared to the CSS method on the MPEG-7 CE-Shape-1 Part B and Columbia Object Image Library (COIL-20) datasets. The results show that the proposed MTAR outperforms the CSS method in the conducted tests.
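For a single scale level, the triangle-area signature can be sketched as below using the standard signed triangle-area formula; the wavelet smoothing, maxima-minima line extraction, and matching stages of MTAR are omitted, so this is an illustration rather than the full method.

```python
import numpy as np

def triangle_area_representation(contour, scale):
    """Signed area of the triangle (p[n-s], p[n], p[n+s]) for every boundary
    point p[n] of a closed contour, at separation s = scale."""
    x, y = contour[:, 0], contour[:, 1]
    xm, ym = np.roll(x, scale), np.roll(y, scale)      # p[n-s]
    xp, yp = np.roll(x, -scale), np.roll(y, -scale)    # p[n+s]
    return 0.5 * (xm * (y - yp) + x * (yp - ym) + xp * (ym - y))

def tar_image(contour, max_scale):
    """Stack the signatures over a range of separations to form a TAR image
    (rows = scales, columns = boundary points)."""
    return np.vstack([triangle_area_representation(contour, s)
                      for s in range(1, max_scale + 1)])
```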