The relationship between the absence of redundancy in relational databases and fourth normal form (4NF) is investigated. A relation scheme is defined to be redundant if there exists a legal relation defined over it which has at least two tuples that are identical on the attributes in a functional dependency (FD) or multivalued dependency (MVD) constraint. Depending on whether the dependencies in the given set of constraints or the dependencies in its closure are used, two different types of redundancy are defined. It is shown that the two types of redundancy are equivalent and that their absence in a relation scheme is equivalent to the 4NF condition.
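One way to state the definition and the main result formally (the notation below is ours, not necessarily the paper's):

```latex
% A scheme R with dependency set \Sigma is redundant if some legal relation r
% over R contains two distinct tuples agreeing on all attributes of a dependency
% in \Sigma (or, for the second type of redundancy, in the closure \Sigma^{+}).
\[
  \exists\, r \models \Sigma,\ \exists\, t_1 \neq t_2 \in r,\
  \exists\, (X \to Y) \ \text{or}\ (X \twoheadrightarrow Y) \in \Sigma
  \ (\text{resp. } \Sigma^{+}):\quad t_1[XY] = t_2[XY].
\]
\[
  \text{Main result:}\quad R \ \text{is free of both types of redundancy}
  \;\Longleftrightarrow\; R \ \text{is in 4NF with respect to } \Sigma .
\]
```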
Spatial and intensity normalizations are nowadays a prerequisite for neuroimaging analysis. Influenced by voxel-wise and other univariate comparisons, where these corrections are key, they are commonly applied to any type of analysis and imaging modality. Nuclear imaging modalities such as PET-FDG or FP-CIT SPECT, a common modality used in Parkinson's disease diagnosis, are especially dependent on intensity normalization. However, these steps are computationally expensive and, furthermore, they may introduce deformations in the images, altering the information contained in them. Convolutional neural networks (CNNs), for their part, introduce position invariance to pattern recognition and have been proven to classify objects regardless of their orientation, size, angle, etc. Therefore, a question arises: how well can CNNs account for spatial and intensity differences when analyzing nuclear brain imaging? Are spatial and intensity normalizations still needed? To answer this question, we have trained four different CNN models based on well-established architectures, with and without different spatial and intensity normalization preprocessing steps. The results show that a sufficiently complex model such as our three-dimensional version of AlexNet can effectively account for spatial differences, achieving a diagnosis accuracy of 94.1% with an area under the ROC curve of 0.984. The visualization of the differences via saliency maps shows that these models correctly find patterns matching those reported in the literature, without the need to apply any complex spatial normalization procedure. However, the intensity normalization, and in particular its type, is revealed as very influential in the results and accuracy of the trained model, and therefore must be carefully accounted for.
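As an illustration of the kind of intensity preprocessing at stake, here is a minimal sketch of three common normalization choices for a 3D volume; the function name, the method labels, and the idea of a disease-independent reference region are our own placeholders, not the paper's exact pipeline.

```python
import numpy as np

def normalize_intensity(volume: np.ndarray, method: str = "integral",
                        reference_mask=None) -> np.ndarray:
    """Illustrative intensity normalizations for a 3D brain volume.

    'integral'  -- divide by the mean intensity of the whole volume.
    'max'       -- divide by the volume maximum.
    'reference' -- divide by the mean intensity inside a reference mask
                   (e.g. a region assumed to be disease-independent).
    """
    v = volume.astype(np.float32)
    if method == "integral":
        return v / v.mean()
    if method == "max":
        return v / v.max()
    if method == "reference":
        if reference_mask is None:
            raise ValueError("reference_mask required for 'reference' normalization")
        return v / v[reference_mask > 0].mean()
    raise ValueError(f"unknown method: {method}")

# Toy usage on a random volume standing in for a SPECT/PET scan.
vol = np.random.default_rng(0).random((64, 64, 64))
print(normalize_intensity(vol, "integral").mean())   # ~1.0 after normalization
```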
We give a framework to normalize a regular isotopy invariant of a spatial graph, and introduce many normalizations satisfying the same relation under a local move. We normalize the Yamada polynomial for spatial embeddings of almost all trivalent graphs without a bridge, and demonstrate the benefit of our normalizations from the viewpoint of skein relations, finite type invariants, and evaluations of the Yamada polynomial. We show that the collection of the differences between two of our normalizations is a complete spatial-graph-homology invariant.
Term rewriting is a popular computational paradigm for symbolic computations such as formula manipulation, theorem proving, and implementations of nonprocedural programming languages. In rewriting, the most demanding operation is the repeated simplification of terms by pattern matching them against rewrite rules. We describe a parallel architecture, R2M, for accelerating this operation. R2M can operate either as a stand-alone processor using its own memory or as a backend device attached to a host using the host's main memory. R2M uses only a fixed number (independent of input size) of processing units and fixed-capacity auxiliary memory units, yet it is capable of handling variable-size rewrite rules that change during simplification. This is made possible by a simple and reconfigurable interconnection present in R2M. Finally, R2M uses a hybrid scheme that combines the ease and efficiency of parallel pattern matching on the tree representation of terms with the naturalness of their dag representation for replacements.
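To make the core operation concrete, here is a minimal sequential sketch of rule-based simplification (match a subterm against a rewrite rule, then substitute); it illustrates only the operation R2M accelerates, not the parallel architecture itself, and the term encoding is our own.

```python
# Terms are tuples ('f', arg1, arg2, ...); variables are strings starting with '?'.

def match(pattern, term, env=None):
    """Return a substitution making `pattern` equal to `term`, or None."""
    env = dict(env or {})
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in env:
            return env if env[pattern] == term else None
        env[pattern] = term
        return env
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and pattern[0] == term[0] and len(pattern) == len(term):
        for p, t in zip(pattern[1:], term[1:]):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None

def substitute(term, env):
    if isinstance(term, str) and term.startswith('?'):
        return env[term]
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(a, env) for a in term[1:])
    return term

def rewrite_once(term, rules):
    """Rewrite the outermost redex found; return (new_term, changed)."""
    for lhs, rhs in rules:
        env = match(lhs, term)
        if env is not None:
            return substitute(rhs, env), True
    if isinstance(term, tuple):
        new_args, changed = [], False
        for a in term[1:]:
            na, c = rewrite_once(a, rules)
            new_args.append(na)
            changed = changed or c
        return (term[0],) + tuple(new_args), changed
    return term, False

def simplify(term, rules, max_steps=1000):
    for _ in range(max_steps):
        term, changed = rewrite_once(term, rules)
        if not changed:
            return term
    return term

# Example rule: x + 0 -> x
rules = [(('+', '?x', ('0',)), '?x')]
print(simplify(('+', ('+', ('a',), ('0',)), ('0',)), rules))   # ('a',)
```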
The continuity equation relating the change in time of the position probability density to the gradient of the probability current density is generalized to PT-symmetric quantum mechanics. The normalization condition of eigenfunctions is modified in accordance with this new conservation law and illustrated with some detailed examples.
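For reference, one commonly used form of this generalization is sketched below; the notation and conventions are ours and may differ from the paper's.

```latex
% PT-symmetric continuity equation and modified normalization (one common form).
\[
  \frac{\partial \rho(x,t)}{\partial t} + \frac{\partial j(x,t)}{\partial x} = 0,
  \qquad
  \rho(x,t) = \psi^{*}(-x,t)\,\psi(x,t),
\]
\[
  j(x,t) = \frac{\hbar}{2mi}
  \left[ \psi^{*}(-x,t)\,\frac{\partial \psi(x,t)}{\partial x}
       - \psi(x,t)\,\frac{\partial \psi^{*}(-x,t)}{\partial x} \right],
  \qquad
  \int_{-\infty}^{\infty} \psi^{*}(-x,t)\,\psi(x,t)\,\mathrm{d}x = 1 .
\]
```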
In noncommutative quantum mechanics, the energy-dependent harmonic oscillator problem is studied by solving the Schrödinger equation in polar coordinates. The noncommutativity of the space coordinates and the energy dependence of the potential yield an energy-dependent mass and potential. The correction to the normalization condition is calculated, and the parameter dependences of the results are studied graphically.
For a random variable x we can define a variational relationship with practical physical meaning as dI = d⟨x⟩ − ⟨dx⟩, where I is the uncertainty measure. With the help of a generalized definition of expectation, ⟨x⟩ = Σᵢ g(pᵢ)xᵢ, we can find the concrete forms of the maximizable entropies for any given probability distribution function, where g({pᵢ}) may have different forms for different statistical methods, which include extensive and non-extensive statistics. Moreover, it is pointed out that this generalized uncertainty measure is valid not only for thermodynamic systems but also for non-thermodynamic systems.
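The key step implied by these definitions can be written out explicitly (our rendering of the algebra, under the notation above):

```latex
% With \langle x\rangle = \sum_i g(p_i)\,x_i and
% \langle \mathrm{d}x\rangle = \sum_i g(p_i)\,\mathrm{d}x_i,
\[
  \mathrm{d}I \;=\; \mathrm{d}\langle x\rangle - \langle \mathrm{d}x\rangle
  \;=\; \sum_i \bigl[ x_i\,\mathrm{d}g(p_i) + g(p_i)\,\mathrm{d}x_i \bigr]
        - \sum_i g(p_i)\,\mathrm{d}x_i
  \;=\; \sum_i x_i\,\mathrm{d}g(p_i),
\]
% so the choice of g(\{p_i\}) alone determines the maximizable entropy I.
```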
Ranking of sports teams has always been significant to sponsors, coaches, and audiences. Prevailing prediction methods investigate probabilities by taking into account different kinds of attributes (e.g. field goals, field goal attempts) in order to establish a detail-based mechanism for analyzing the capability of competing teams. The different types of activation and inhibition actions between athletes pose a considerable challenge in the framework of network analysis. Moreover, these attribute interactions may add substantial redundancy to the network as well. This paper proposes a weighted PageRank algorithm based on normalized basketball match scores from a macroscopic point of view. Taking the Chinese Basketball Association and the Chinese University Basketball Association as examples, the developed approach takes into account the win/lose nature of interactions between each pair of competing teams in the framework of a PageRank network. We also develop a weighted network model for the network matrix which highlights the capability difference of teams whose PageRank probabilities are most sensitive with respect to the scores of the two competing teams. A team's chance of winning the championship is better demonstrated by its PageRank probability. The results show that our method achieves more precise predictions than the original PageRank algorithm and the Hypertext-Induced Topic Search (HITS) algorithm.
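A generic sketch of a score-weighted PageRank over a win/lose network is given below; the edge weighting (normalized score difference), the team names, and the damping parameter are illustrative choices, not necessarily the paper's exact model.

```python
import numpy as np

def score_weighted_pagerank(results, teams, d=0.85, tol=1e-10, max_iter=1000):
    """Score-weighted PageRank for match results.

    `results` is a list of (winner, loser, winner_score, loser_score).
    Each match adds weight from the loser to the winner, proportional to the
    normalized score difference (an illustrative weighting choice)."""
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    W = np.zeros((n, n))                       # W[i, j]: weight of edge j -> i
    for winner, loser, ws, ls in results:
        W[idx[winner], idx[loser]] += (ws - ls) / (ws + ls)

    col_sums = W.sum(axis=0)
    # Column-stochastic transition matrix; teams that never lost link uniformly.
    M = np.where(col_sums > 0, W / np.where(col_sums > 0, col_sums, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return dict(zip(teams, r))

# Toy example with three teams.
teams = ["A", "B", "C"]
results = [("A", "B", 102, 95), ("A", "C", 110, 90), ("B", "C", 99, 98)]
print(score_weighted_pagerank(results, teams))
```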
Accurate measurement of poses and expressions can increase the efficiency of recognition systems by avoiding the recognition of spurious faces. This paper presents a novel and robust pose- and expression-invariant face recognition method in order to improve on existing face recognition techniques. First, we apply the TSL color model for detecting the facial region and estimate the X-Y-Z pose vector of the face using connected-component analysis. Second, the input face is mapped by a deformable 3D facial model. Third, the mapped face is transformed to a frontal face suitable for recognition, using the estimated pose vector and the action units of the expression. Finally, the damaged regions which occur during the normalization process are reconstructed using PCA. Several empirical tests are used to validate the face detection model and the method for estimating facial poses and expressions. In addition, the tests suggest that the recognition rate is greatly boosted through the normalization of poses and expressions.
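The final reconstruction step can be illustrated with a crude PCA in-painting sketch: project the normalized face onto a trained eigen-space and copy the reconstruction back into the damaged pixels. The training data, image size, and component count below are placeholders, and this is only one simple reading of "reconstructed using PCA", not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

def reconstruct_damaged(face: np.ndarray, mask: np.ndarray, pca: PCA) -> np.ndarray:
    """Project the face onto the PCA subspace and replace only the
    masked (damaged) pixels with their reconstruction."""
    recon = pca.inverse_transform(pca.transform(face.reshape(1, -1))).ravel()
    out = face.copy()
    out[mask] = recon[mask]
    return out

# Toy usage with random vectors standing in for a frontal-face training set.
rng = np.random.default_rng(0)
train = rng.random((500, 32 * 32))            # placeholder training faces (32x32, flattened)
pca = PCA(n_components=50).fit(train)

face = train[0].copy()
mask = np.zeros(32 * 32, dtype=bool)
mask[:100] = True                             # pretend the first 100 pixels were damaged
face[mask] = 0.0
print(np.abs(reconstruct_damaged(face, mask, pca) - train[0])[mask].mean())
```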
The unique iris pattern of each human eye is complex, but can easily be scanned or captured by a camera. However, the high-cost infrared iris scanners used for acquisition cause inconvenience to users through distance-related constraints. This restricts widespread use in real-time applications such as airports and banks. Images captured by cameras under visible wavelength are obstructed by the presence of reflections and shadows, which requires additional attention. The main objective of this paper is to propose a secure biometric iris authentication system by fusion of RGB channel information from real-time data captured under visible wavelength and varying light conditions. The proposed system is adapted to a real-time noisy iris dataset. The effectiveness of the proposed system was tested on two different color iris datasets, namely the public database UBIRISv1 and a newly created database, SSNDS, which contains images captured with any digital/mobile camera of minimum 5 MP under unconstrained environments. The system supports cross-sensor acquisition and successful iris segmentation from these unconstrained inputs. The features from each channel are extracted using a log-Gabor filter, and matching is performed using Hamming distance based on two thresholds (inter- and intra-class variations). The performance of the proposed biometric system demonstrates the feasibility of a new cost-effective approach for any real-time application that requires authentication to ensure quality service, enhance security, eliminate fraud, and maximize effectiveness.
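The matching stage can be sketched as follows; the code length, the two threshold values, and the occlusion-mask handling are placeholders, not the paper's tuned parameters, and the feature-extraction (log-Gabor) and RGB-fusion steps are not reproduced.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray,
                     mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Normalized Hamming distance between two binary iris codes,
    ignoring bits flagged as occluded (reflections, eyelids, shadows)."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0
    return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

def decide(distance: float, t_intra: float = 0.32, t_inter: float = 0.38) -> str:
    """Two-threshold decision (threshold values are placeholders):
    below t_intra  -> accept (within intra-class variation),
    above t_inter  -> reject (inter-class variation),
    in between     -> uncertain, e.g. request another capture."""
    if distance <= t_intra:
        return "accept"
    if distance >= t_inter:
        return "reject"
    return "uncertain"

# Toy example with random 2048-bit codes (e.g. one code per RGB channel).
rng = np.random.default_rng(0)
codes = [rng.integers(0, 2, 2048, dtype=np.uint8) for _ in range(2)]
masks = [np.ones(2048, dtype=np.uint8)] * 2
d = hamming_distance(codes[0], codes[1], masks[0], masks[1])
print(d, decide(d))   # unrelated random codes -> distance near 0.5 -> "reject"
```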
Given a binary image of square size, it is desirable to identify the shift of the foreground pixels that minimizes the total number of leaves of the region quadtree representing the image. This problem is called quadtree normalization. For this problem, the best known algorithms have time complexity O(N² log N), where N is the side length of the given images (so N² is the total number of pixels).

In this paper, we show an algorithm that has the optimal complexity O(N²) for a certain class of images. Our strategy consists of two stages: first decomposing the given image into axis-parallel rectangles, and then integrating the contributions of the individual rectangles. To do this, we derive a necessary and sufficient condition on any decomposition scheme, in a conditional form of the well-known Inclusion–Exclusion Principle. It turns out that the generated primitives must be "strictly overlapped" to some extent.

The optimal linear-time complexity can be achieved when the total area of the decomposed rectangles is bounded by O(N²), e.g. for the class of images whose foreground is drawn with a finite number of rectangles. We only sketch the outline of the first (decomposition) stage of the new algorithm, but the final integration stage is described in detail.
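To make the objective concrete, here is a brute-force baseline: count the region-quadtree leaves of a binary image and try every (cyclic, for simplicity) shift of the foreground. This only illustrates what quadtree normalization optimizes; its cost is far above the O(N²) algorithm of the paper.

```python
import numpy as np

def quadtree_leaves(img: np.ndarray) -> int:
    """Number of leaves of the region quadtree of a square binary image
    whose side length is a power of two."""
    def count(r, c, size):
        block = img[r:r + size, c:c + size]
        if size == 1 or block.min() == block.max():
            return 1                      # uniform block -> single leaf
        h = size // 2
        return (count(r, c, h) + count(r, c + h, h) +
                count(r + h, c, h) + count(r + h, c + h, h))
    return count(0, 0, img.shape[0])

def best_shift_bruteforce(img: np.ndarray):
    """Try every cyclic shift of the image and keep the one minimizing the
    leaf count (illustrative baseline, not the paper's algorithm)."""
    n = img.shape[0]
    best = (quadtree_leaves(img), (0, 0))
    for dr in range(n):
        for dc in range(n):
            shifted = np.roll(img, (dr, dc), axis=(0, 1))
            best = min(best, (quadtree_leaves(shifted), (dr, dc)))
    return best

img = np.zeros((8, 8), dtype=np.uint8)
img[1:5, 1:5] = 1                          # a 4x4 square, misaligned with the quadrants
print(best_shift_bruteforce(img))          # (4, (3, 3)): aligning the square with a quadrant gives 4 leaves
```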
Color images depend on the color of the capture illuminant and on object reflectance. As such, image colors are not stable features for object recognition; stability is necessary, however, since perceived colors (the colors we see) are illuminant independent and do correlate with object identity. Before the colors in images can be compared, they must first be preprocessed to remove the effect of illumination. Two types of preprocessing have been proposed: first, run a color constancy algorithm; or second, apply an invariant normalization. In color constancy preprocessing the illuminant color is estimated and then, in a second stage, the image colors are corrected to remove the color bias due to illumination. In color invariant normalization, image RGBs are redescribed, in an illuminant-independent way, relative to the context in which they are seen (e.g. RGBs might be divided by a local RGB average). In theory the color constancy approach is superior since it operates independently of scene content: a color invariant normalization can be calculated post-color constancy, but the converse is not true. However, in practice color invariant normalization usually supports better indexing. In this paper we ask whether color constancy algorithms will ever deliver better indexing than color normalization. The main result of this paper is to demonstrate an equivalence between color constancy and color invariant computation.

The equivalence is empirically derived from color object recognition experiments. Colorful objects are imaged under several different colors of light. To remove the dependency due to illumination, these images are preprocessed using either a perfect color constancy algorithm or the comprehensive color image normalization. In the perfect color constancy algorithm the illuminant is measured rather than estimated. The import of this is that the perfect color constancy algorithm can determine the actual illuminant without error and so bounds the performance of all existing and future algorithms. After color constancy or color normalization processing, the color content is used as a cue for object recognition. Counter-intuitively, perfect color constancy does not support perfect recognition. In comparison, the color invariant normalization does deliver near-perfect recognition. That the color constancy approach fails implies that the effective scene illuminant is different from the measured illuminant. This explanation has merit since it is well known that color constancy is more difficult in the presence of physical processes such as fluorescence and mutual illumination. Thus, in a second experiment, image colors are corrected based on a scene-dependent "effective illuminant". Here, color constancy preprocessing facilitates near-perfect recognition. Of course, if the effective light is scene dependent then optimal color constancy processing is also scene dependent and so is equally a color invariant normalization.
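For orientation, here is a minimal sketch of a comprehensive-normalization-style iteration: alternately normalize each pixel (removing dependence on the light color at that pixel) and each channel (removing per-channel gain), until a fixed point is reached. The scaling constants and convergence test are one reasonable choice, not necessarily those of the original comprehensive color image normalization.

```python
import numpy as np

def comprehensive_normalize(image: np.ndarray, tol=1e-8, max_iter=100) -> np.ndarray:
    """Alternate pixel-wise and channel-wise normalization to a fixed point.

    `image` is an (N, 3) array of linear RGB values, one row per pixel."""
    I = image.astype(np.float64) + 1e-12                  # avoid division by zero
    n = I.shape[0]
    for _ in range(max_iter):
        prev = I.copy()
        I = I / I.sum(axis=1, keepdims=True)              # pixel step: each pixel sums to 1
        I = I / I.sum(axis=0, keepdims=True) * n / 3.0    # channel step: each channel sums to N/3
        if np.abs(I - prev).max() < tol:
            break
    return I

# Toy usage: the same scene under two illuminants (diagonal model) normalizes
# to (numerically) the same image.
rng = np.random.default_rng(1)
scene = rng.random((100, 3))
light_a, light_b = np.array([1.0, 0.9, 0.7]), np.array([0.6, 1.0, 1.1])
na = comprehensive_normalize(scene * light_a)
nb = comprehensive_normalize(scene * light_b)
print(np.abs(na - nb).max())   # close to 0: the illuminant dependence is removed
```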
Knowledge bases contain specific and general knowledge. In relational database systems, specific knowledge is often represented as a set of relations. The conventional methodologies for centralized database design can be applied to develop a normalized, redundancy-free global schema. Distributed database design involves redundancy removal as well as distribution design, which allows replicated data segments. Thus, distribution design can be viewed as a process on a normalized global schema that produces a collection of fragments of relations from a global database. Clearly, not every fragment of data can be permitted as a relation. In this paper, we clarify and formally discuss three kinds of fragmentation of relational databases, characterize their features as valid designs, and introduce hybrid knowledge fragmentation as the general case. For completeness of presentation, we first show an algorithm for the validity test of vertical fragmentations of normalized relations, and then extend it to the more general case of unbiased fragmentations.
Normalization is a fundamental ring-theoretic operation; geometrically, it resolves singularities in codimension one. Existing algorithmic methods for computing the normalization rely on a common recipe: successively enlarge the given ring in the form of an endomorphism ring of a certain (fractional) ideal until the process becomes stationary. While Vasconcelos' method uses the dual Jacobian ideal, Grauert–Remmert-type algorithms rely on so-called test ideals. For algebraic varieties, one can apply such normalization algorithms globally, locally, or formal-analytically at all points of the variety. In this paper, we relate the number of iterations for global Grauert–Remmert-type normalization algorithms to that of their local descendants. We complement our results by a study of ADE singularities. All intermediate singularities occurring in the normalization process are determined explicitly. Besides ADE singularities, the process yields simple space curve singularities from the list of Frühbis-Krüger.
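The common recipe referred to above can be summarized as follows (our notation; the precise choice of test ideal varies between algorithms):

```latex
% Grauert--Remmert-type normalization loop for a reduced Noetherian ring A.
% A test ideal J is a radical ideal containing a non-zerodivisor whose zero
% set contains the non-normal locus (e.g. the radical of the Jacobian ideal).
\[
  A_0 = A, \qquad
  A_{i+1} \;=\; \operatorname{Hom}_{A_i}(J_i, J_i)
          \;\cong\; \tfrac{1}{g}\,\bigl( gJ_i : J_i \bigr) \;\subseteq\; \overline{A},
  \qquad g \in J_i \ \text{a non-zerodivisor,}
\]
% By the Grauert--Remmert criterion, A_i = A_{i+1} holds if and only if A_i is
% normal, so the increasing chain stabilizes at the normalization \overline{A}.
```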
In the general context of presentations of monoids, we study normalization processes that are determined by their restriction to length-two words. Garside's greedy normal forms and quadratic convergent rewriting systems, in particular those associated with the plactic monoids, are typical examples. Having introduced a parameter, called the class, that measures the complexity of normalizing length-three words, we analyze the normalization of longer words and describe a number of possible behaviors. We fully axiomatize normalizations of class (4,3), show the convergence of the associated rewriting systems, and characterize those deriving from a Garside family.
Determining whether an arbitrary subring R of k[x₁^{±1},…,xₙ^{±1}] is a normal or Cohen–Macaulay domain is, in general, a nontrivial problem, even in the special case of a monomial generated domain. We provide a complete characterization of the normality, normalizations, and Serre's R₁ condition for quadratic-monomial generated domains. For a quadratic-monomial generated domain R, we develop a combinatorial structure that assigns, to each quadratic monomial of the ring, an edge in a mixed signed, directed graph G, i.e. a graph with signed edges and directed edges. We classify the normality and the normalizations of such rings in terms of a generalization of the combinatorial odd cycle condition on G. We also generalize and simplify a combinatorial classification of Serre's R₁ condition for such rings and construct non-Cohen–Macaulay rings.
This paper determines relations between two notions concerning monoids: factorability structure, introduced to simplify the bar complex; and quadratic normalization, introduced to generalize quadratic rewriting systems and normalizations arising from Garside families. Factorable monoids are characterized in the axiomatic setting of quadratic normalizations. Additionally, quadratic normalizations of class (4,3) are characterized in terms of factorability structures and a condition ensuring the termination of the associated rewriting system.
A new approach to normalizing fuzzy sets is introduced where it is assumed that the normalization method is compatible with a given t-norm. In this context it is proved that the most usual ways to normalize fuzzy subsets correspond to the most common t-norms.
For a given fuzzy subset μ, the corresponding normalized fuzzy subset can be viewed as the distribution of μ conditioned on the (degree of) existence of its elements with maximal membership. From this viewpoint we investigate the least specific normal fuzzy subset of X among the fuzzy subsets most similar to μ, and the normal fuzzy subset generating the same fuzzy T-preorder as μ.
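For illustration, here is a tiny sketch of two of the usual normalization methods for a subnormal fuzzy subset; which t-norm each method corresponds to in the sense of these papers is not reproduced here, and the membership values are arbitrary.

```python
import numpy as np

def normalize_by_height(mu: np.ndarray) -> np.ndarray:
    """Divide every membership degree by the height sup(mu)."""
    return mu / mu.max()

def normalize_by_shift(mu: np.ndarray) -> np.ndarray:
    """Shift memberships up by 1 - height, capped at 1 (Lukasiewicz-style)."""
    return np.minimum(1.0, mu + (1.0 - mu.max()))

# A subnormal fuzzy subset on a five-element universe.
mu = np.array([0.1, 0.4, 0.8, 0.4, 0.2])
print(normalize_by_height(mu))   # [0.125 0.5   1.    0.5   0.25 ]
print(normalize_by_shift(mu))    # [0.3   0.6   1.    0.6   0.4  ]
```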
Medical diagnosis is mostly done by experienced doctors; however, cases of wrong diagnosis and treatment are still reported. Patients need to take a number of clinical tests for disease diagnosis, and in most cases not all of the tests contribute to an efficient diagnosis. Medical data are multidimensional and composed of thousands of independent features, so the multidimensional database needs to be analyzed and preprocessed to support sound decision making in medical diagnosis. The aim of this work is to accurately predict medical disease with a condensed number of attributes. In this approach, the raw input dataset is preprocessed based on a common normalization approach. Association rules are used to find frequently occurring patterns and prune the dataset. Further, a base rule can be applied to the pruned dataset. The payoff and heuristic rate can be evaluated for risk analysis. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) approaches are used for better feature selection. The classification result is acquired based on the minimum and maximum residual support values. The experimental results show that the proposed scheme can perform better than existing algorithms in diagnosing medical disease.
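A generic sketch of the normalization and feature-reduction part of such a pipeline is given below; the synthetic data, classifier, and component counts are placeholders, and the association-rule pruning and payoff/heuristic-rate steps of the paper are not reproduced.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 30))                     # placeholder for raw clinical attributes
y = rng.integers(0, 2, 200)                   # placeholder diagnosis labels

X_norm = MinMaxScaler().fit_transform(X)      # common [0, 1] normalization
X_pca = PCA(n_components=10).fit_transform(X_norm)
X_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X_norm, y)

# Compare classification accuracy on each feature set
# (feature extraction is fit on all data here only for brevity).
clf = KNeighborsClassifier()
for name, Xf in [("normalized", X_norm), ("PCA", X_pca), ("LDA", X_lda)]:
    acc = cross_val_score(clf, Xf, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```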
Real-time simulation of ring-grid laser images using conventional methods has several drawbacks, including weak edge fitting, poor clarity, low image recognition rates, uneven brightness distribution, and significant interference from the surrounding environment. In this research, we construct a diagonalizable and readily invertible Hessian matrix, offer an enhanced Newton projection iteration algorithm, and use the degenerate gradient magnitude as the constraint set. We also develop a real-time re-illumination model appropriate for modeling ring-grid laser maps and propose a method for simulating ring-grid laser maps based on it. The simulations demonstrate that the proposed method matches ring-grid laser images in real time with small relative error and deviation, a high signal-to-noise ratio, and pleasing visual effects.
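For readers unfamiliar with the iteration family involved, here is a generic projected-Newton sketch with a diagonal (hence trivially invertible) Hessian approximation; the objective, Hessian construction, and constraint set in the paper differ, so this only illustrates the kind of step being enhanced.

```python
import numpy as np

def projected_newton(grad, hess_diag, project, x0, step=1.0, tol=1e-8, max_iter=200):
    """Projected Newton iteration x <- P(x - H^{-1} grad(x)) with a
    diagonal Hessian approximation H (generic illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        x_new = project(x - step * g / hess_diag(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: minimize ||x - b||^2 subject to 0 <= x <= 1.
b = np.array([1.4, -0.3, 0.6])
sol = projected_newton(grad=lambda x: 2 * (x - b),
                       hess_diag=lambda x: np.full_like(x, 2.0),
                       project=lambda x: np.clip(x, 0.0, 1.0),
                       x0=np.zeros(3))
print(sol)   # [1.  0.  0.6]
```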