A flexible description of images is offered by a cloud of points in a feature space. In the context of image retrieval, such clouds can be represented in a number of ways. Two approaches are considered here. The first is based on the assumption of a normal distribution, hence homogeneous clouds, while the second focuses on a boundary description, which is more suitable for multimodal clouds. The images are then compared using either the Mahalanobis distance or the support vector data description (SVDD), respectively.
The paper investigates some possibilities of combining the image cloud descriptions, based on the idea that the responses of several cloud descriptions may convey a pattern specific to semantically similar images. A ranking of image dissimilarities is used as a basis for comparison on two image databases, targeting image classification and retrieval problems. We show that combining the SVDD descriptions improves the retrieval performance with respect to ranking, in contrast to the Mahalanobis case. Surprisingly, it turns out that ranking the Mahalanobis distances also works well for inhomogeneous images.
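As a hedged illustration of the two cloud descriptions (a minimal sketch, not the authors' implementation), the following Python example models each image as a cloud of feature points and compares a query cloud with a reference cloud in both ways: via the Mahalanobis distance under the normality assumption, and via a one-class SVM used here as a stand-in for the SVDD boundary description. The feature dimension, kernel, and parameters are illustrative assumptions.

```python
# Minimal sketch: two ways to compare an image's point cloud (rows = feature
# vectors) with a reference cloud. Dimensions and kernel parameters are
# illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
reference_cloud = rng.normal(size=(200, 8))       # cloud describing image A
query_cloud = rng.normal(loc=0.3, size=(150, 8))  # cloud describing image B

# (1) Normal-distribution model: Mahalanobis distance between cloud means,
#     using the (inverse) covariance of the reference cloud.
mu_ref = reference_cloud.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference_cloud, rowvar=False))
d_mahalanobis = mahalanobis(query_cloud.mean(axis=0), mu_ref, cov_inv)

# (2) Boundary model: a one-class SVM (stand-in for the SVDD) describes the
#     boundary of the reference cloud; the mean signed distance of the query
#     points to that boundary serves as a dissimilarity.
svdd_like = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.1).fit(reference_cloud)
d_boundary = -svdd_like.decision_function(query_cloud).mean()

print(f"Mahalanobis dissimilarity:      {d_mahalanobis:.3f}")
print(f"Boundary (SVDD-like) dissimilarity: {d_boundary:.3f}")
```

Ranking all database images by either dissimilarity then mirrors the ranking-based comparison discussed above.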
The paper elaborates on the encoding and decoding of numerical and nonnumerical data. General criteria are proposed that lead to distortion-free interfacing mechanisms, which help transfer information between systems (or modelling environments) operating at different levels of information granularity. Three basic categories of information are distinguished: numerical, interval-valued, and linguistic (fuzzy). As all of them are dealt with here, the paper subsumes current studies that concentrate exclusively on representing fuzzy sets through their numerical representatives (prototypes). The algorithmic framework in which the distortion-free interfacing is completed is realized through neural networks. Each category of information is treated separately and gives rise to its own specialized neural-network architecture. Likewise, these networks require carefully designed training sets that fully capture the specificity of the reconstruction problem. Several carefully selected numerical examples illustrate the key ideas.
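To make the notion of distortion-free interfacing concrete, here is a minimal Python sketch for the numerical-to-linguistic direction only, under an assumed family of triangular fuzzy sets forming a partition of unity: a number is encoded into membership degrees and decoded by a weighted average of the prototypes, which recovers it exactly. The paper itself realizes the interfacing through specialized neural networks rather than this closed-form scheme.

```python
# Sketch of a distortion-free encode/decode round trip for numeric data,
# using unit-spaced triangular fuzzy sets that form a partition of unity
# (an assumption made here for illustration only).
import numpy as np

prototypes = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # modal values of the fuzzy sets

def encode(x: float) -> np.ndarray:
    """Membership degrees of x in triangular fuzzy sets centred at the prototypes."""
    return np.maximum(0.0, 1.0 - np.abs(x - prototypes))

def decode(degrees: np.ndarray) -> float:
    """Weighted-average (centre-of-gravity) reconstruction from the degrees."""
    return float(np.dot(degrees, prototypes) / degrees.sum())

x = 2.35
assert abs(decode(encode(x)) - x) < 1e-12  # lossless within the covered range
```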
A method of information processing based on classical field theory is outlined to derive the modal-wavelet transform (MWT) as a wavelet-like orthonormal transform. The theoretical background and applications of the MWT are described. The bases of the MWT are derived from a modal analysis of the potential-field equations. Namely, the principal idea of the MWT is that a numerical data set is regarded as a set of field potentials or source densities. A modal matrix, consisting of the characteristic vectors derived from the discretized field equations, enables us to carry out an orthonormal transform in the same way as conventional discrete wavelets. The MWT builds on this data modeling to provide multiresolution analysis in an efficient manner. As a demonstration, three-dimensional MWT classifies a weather-satellite infrared animation into background and moving-cloud frame images.
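The core orthonormal-transform step can be sketched in a few lines of Python (a simplification under assumptions, not the paper's implementation): the modal matrix is taken as the eigenvector matrix of a discretized 1-D Laplacian standing in for the potential-field equations, and its orthonormality yields an exact forward/inverse transform pair.

```python
# Sketch of the modal-transform idea: eigenvectors (the "modal matrix") of a
# discretized field operator form an orthonormal basis, so a signal can be
# transformed and reconstructed exactly. A 1-D Laplacian with Dirichlet
# boundaries is an assumed stand-in for the paper's field equations.
import numpy as np

n = 64
laplacian = (np.diag(np.full(n, -2.0))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))

# Modal matrix: columns are the characteristic (eigen)vectors of the operator.
eigenvalues, modal_matrix = np.linalg.eigh(laplacian)

signal = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
coefficients = modal_matrix.T @ signal       # forward modal transform
reconstructed = modal_matrix @ coefficients  # inverse transform

assert np.allclose(reconstructed, signal)    # orthonormality => perfect reconstruction
```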
Artificial neural networks are widely applied to pattern recognition and classification, and the high-speed parallel computation offered by FPGAs can be exploited for the hardware realization of such networks. Through the design of a hardware neural-network unit, the proposed method realizes the activation function, the neuron multiply-accumulate (MAC) module, and the data storage module in hardware. Common methods of realizing the activation function are also analyzed and compared. The paper addresses data representation and the multiply-accumulate computation process on the hardware platform. This provides an important foundation and a necessary precondition for the construction of hardware neural networks.
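As a hedged sketch of the two hardware concerns mentioned above, the following Python model imitates fixed-point data representation, a neuron's multiply-accumulate loop, and a piecewise-linear sigmoid as one common way of realizing the activation function in logic; the word lengths and the approximation are assumptions for illustration, not the paper's chosen design.

```python
# Fixed-point data representation, a MAC loop, and a piecewise-linear
# activation, modelled in software. Word lengths are assumed for illustration.
FRAC_BITS = 8          # assumed Q-format with 8 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def mac(weights_fx, inputs_fx) -> int:
    """Multiply-accumulate in fixed point; the wide accumulator is rescaled once."""
    acc = 0
    for w, x in zip(weights_fx, inputs_fx):
        acc += w * x                 # products carry 2*FRAC_BITS fractional bits
    return acc >> FRAC_BITS          # back to FRAC_BITS fractional bits

def pwl_sigmoid(x_fx: int) -> int:
    """Piecewise-linear sigmoid approximation, an FPGA-friendly choice (assumed)."""
    x = x_fx / SCALE
    if x <= -4.0:
        y = 0.0
    elif x >= 4.0:
        y = 1.0
    else:
        y = 0.125 * x + 0.5          # single linear segment, for brevity
    return to_fixed(y)

weights = [to_fixed(w) for w in (0.5, -0.25, 0.75)]
inputs = [to_fixed(v) for v in (1.0, 0.5, -0.5)]
neuron_output = pwl_sigmoid(mac(weights, inputs))
print(neuron_output / SCALE)
```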