Progress on classifying small index subfactors has revealed an almost empty landscape. In this paper we give some evidence that this desert continues up to index 3+√5. There are two known quantum-group subfactors with index in this interval, and we show that these subfactors are the only way to realize the corresponding principal graphs. One of these subfactors is 1-supertransitive, and we demonstrate that it is the only 1-supertransitive subfactor with index between 5 and 3+√5. Computer evidence shows that any other subfactor in this interval would need to have rank at least 38. We prove our uniqueness results by showing that there is a unique flat connection on each graph. The result on 1-supertransitive subfactors is proved by an argument using intermediate subfactors, running the "odometer" from the FusionAtlas' Mathematica package, and paying careful attention to dimensions. This is the published version of arXiv:1205.2742.
We define spatial Lp AF algebras for p ∈ [1, ∞) ∖ {2}, and prove the following analog of the Elliott AF algebra classification theorem. If A and B are spatial Lp AF algebras, then the following are equivalent:
As background, we develop the theory of matricial Lp operator algebras, and show that there is a unique way to make a spatial Lp AF algebra into a matricial Lp operator algebra. We also show that any countable scaled Riesz group can be realized as the scaled preordered K0-group of a spatial Lp AF algebra.
We introduce a type of zero-dimensional dynamical system (a pair consisting of a totally disconnected compact metrizable space and a homeomorphism of that space), which we call “fiberwise essentially minimal”; this class includes essentially minimal systems and systems in which every orbit is minimal. We prove that the crossed product C∗-algebra associated to such a system is an A𝕋-algebra. Under the additional assumption that the system has no periodic points, we prove that the associated crossed product C∗-algebra has real rank zero, which tells us that such C∗-algebras are classifiable by K-theory. The crossed product C∗-algebras associated to these nontrivial examples are of particular interest because they are non-simple (unlike in the minimal case).
A Bayesian network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency; such networks provide a compact representation of knowledge and flexible methods of reasoning. Obtaining a Bayesian network from data is a learning process that is divided into two steps: structural learning and parametric learning. In this paper we define an automatic learning method that optimizes Bayesian networks applied to classification, using a hybrid learning method that combines the advantages of the induction techniques of decision trees (TDIDT-C4.5) with those of Bayesian networks. The resulting method is applied to prediction in the health domain.
We investigate a collection of one-parameter families of isotropic sandpile models. The models are defined on the square lattice, which slowly accumulates grains and quickly transfers them as local piles become over-critical. The paper groups the sandpiles with respect to two features influencing the model dynamics: the stochasticity of the local transfer and the number of transferred grains. Every pair generates a one-parameter family of sandpiles. The parameter reflects the relative height of an over-critical pile with respect to the incoming flow of sand. If the stochasticity disappears as the parameter grows, the families with a fixed number of transferred grains have much in common with the sandpile of Bak et al. [Phys. Rev. Lett. 59, 381 (1987)], while the families whose over-critical piles lose all their grains tend to the Zhang sandpile [Phys. Rev. Lett. 63, 470 (1989)]. The families with non-vanishing variance give rise to new properties described in terms of the probability distribution of the pile heights.
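As a rough illustration of the class of models described here, the following sketch simulates a simple isotropic sandpile on a small square lattice with slow driving, open boundaries and either stochastic or deterministic transfer. The threshold, the number of transferred grains and the random choice of receiving neighbours are stand-ins for the paper's parameters; names such as `threshold` and `n_transfer` are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def relax(heights, threshold, n_transfer, stochastic=True):
    """Topple every over-critical site until the lattice is stable.

    Each toppling removes n_transfer grains from a site and hands them to
    nearest neighbours; with stochastic=True the receiving neighbours are
    chosen at random, otherwise the grains are split evenly (a BTW-like
    deterministic transfer, assuming n_transfer is divisible by 4).
    Grains pushed off the edge are dissipated (open boundaries)."""
    L = heights.shape[0]
    while True:
        over = np.argwhere(heights >= threshold)
        if len(over) == 0:
            return heights
        for x, y in over:
            heights[x, y] -= n_transfer
            neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if stochastic:
                targets = rng.choice(4, size=n_transfer)          # random receivers
            else:
                targets = np.repeat(np.arange(4), n_transfer // 4)  # even split
            for t in targets:
                nx, ny = neighbours[t]
                if 0 <= nx < L and 0 <= ny < L:
                    heights[nx, ny] += 1

def drive(L=32, threshold=4, n_transfer=4, steps=2_000):
    """Slow driving: add one grain at a random site, then relax fully."""
    heights = np.zeros((L, L), dtype=int)
    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        heights[i, j] += 1
        relax(heights, threshold, n_transfer)
    return heights

final = drive()
print("mean height:", final.mean())
```

The distribution of `final` over many driving steps is the kind of pile-height statistic the abstract refers to.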
Two main difficulties in the problem of classification in partially labeled networks are the sparsity of known labeled nodes and the inconsistency of label information. To address these two difficulties, we propose a similarity-based method, whose basic assumption is that two nodes are more likely to belong to the same class if they are more similar. In this paper, we introduce ten similarity indices defined on the network structure. Empirical results on the co-purchase network of political books show that the similarity-based method can, to some extent, overcome these two difficulties and give more accurate classification than the relational-neighbors method, especially when the labeled nodes are sparse. Furthermore, we find that when the information from known labeled nodes is sufficient, the indices considering only local information perform as well as the global indices while having much lower computational complexity.
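A minimal sketch of the general idea (not the paper's exact ten indices): score each unlabeled node against the labeled nodes with a structural similarity index, here common neighbours as one representative local index, and assign the class whose labeled nodes are most similar in total. The function names are illustrative.

```python
import networkx as nx
from collections import defaultdict

def common_neighbours(G, u, v):
    """Local similarity index: number of shared neighbours of u and v."""
    return len(set(G[u]) & set(G[v]))

def classify(G, labels, similarity=common_neighbours):
    """Assign each unlabeled node the class with the largest summed
    similarity to the known labeled nodes (a similarity-weighted vote)."""
    predicted = {}
    for node in G:
        if node in labels:
            continue
        votes = defaultdict(float)
        for seed, cls in labels.items():
            votes[cls] += similarity(G, node, seed)
        predicted[node] = max(votes, key=votes.get) if votes else None
    return predicted

# Toy example: two loosely connected cliques with one labeled node each.
G = nx.connected_caveman_graph(2, 5)        # nodes 0-4 and 5-9
labels = {0: "A", 5: "B"}                   # sparse known labels
print(classify(G, labels))
```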
We demonstrate an automatic procedure for extracting features such as the directionality of crack patterns, the distributions of node distances and segment lengths, fractal dimension, entropy, and crack coverage, to aid in automatic classification of painting cracks, or craquelure. To test our classifier, we use four distinct craquelure patterns, designated by names based on their country of origin: Dutch, Flemish, French, and Italian. We report that, after selecting features from the above statistical measures based on effect size ratio, standard linear discriminant analysis (LDA) achieves predictive classification of the craquelure patterns with 69.4% accuracy. The effect size ratio simultaneously quantifies the extent of correlation and the variance of two statistical data sets. This test-set accuracy is more than twice that of mere chance classification, the proportional chance criterion, computed to be ΦPCC = 27.61%, and also twice the recommended classifier accuracy 1.25 × ΦPCC = 34.4%. We compare the result with a nonlinear neural-network method and observe no marked improvement in accuracy. This suggests that, with respect to the extracted statistical features, the problem at hand is a linear classification problem. The work provides a comprehensive guide to the algorithms that can be used to extract quantitative information from crack patterns.
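A rough sketch of the pipeline shape described above, with synthetic data standing in for the extracted craquelure features and a Cohen's-d-style effect size as a simplified stand-in for the paper's effect size ratio; none of the numbers are reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def effect_size(x, y):
    """Cohen's-d-like effect size between two groups of one feature."""
    pooled = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    return abs(x.mean() - y.mean()) / pooled if pooled > 0 else 0.0

def select_features(X, labels, classes, k=5):
    """Rank features by their largest pairwise between-class effect size
    and keep the top k (a stand-in for the effect-size-ratio criterion)."""
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for a in range(len(classes)):
            for b in range(a + 1, len(classes)):
                xa = X[labels == classes[a], j]
                xb = X[labels == classes[b], j]
                scores[j] = max(scores[j], effect_size(xa, xb))
    return np.argsort(scores)[::-1][:k]

# Rows = crack patterns, columns = extracted features (directionality,
# node distances, segment lengths, fractal dimension, entropy, coverage, ...).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 12))
labels = rng.choice(["Dutch", "Flemish", "French", "Italian"], size=120)

keep = select_features(X, labels, ["Dutch", "Flemish", "French", "Italian"])
clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X[:, keep], labels, cv=5).mean())
```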
Current associative classification (AC) algorithms generate only the most obvious class linked with a rule in the training data set and ignore all other classes. We address this problem by proposing a learning algorithm based on AC called Multi-label Classifiers based Associative Classification (MCAC) that learns rules associated with multiple classes from single-label data. The MCAC algorithm extracts classifiers from the whole training data set, discovering all possible classes connected with a rule as long as they have sufficient training data representation. Another distinguishing feature of the MCAC algorithm is its classifier-building method, which cuts down the number of rules, treating one known problem in AC mining: the exponential growth of rules. Experiments using real application data from a complex scheduling problem known as the trainer timetabling problem reveal that MCAC's predictive accuracy is highly competitive when contrasted with known AC algorithms.
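A toy sketch of the core idea, learning rules that may carry more than one class label as long as each (rule, class) pair has enough support in single-label training data. The one-item rule bodies, the support threshold and the timetabling-flavoured example data are simplifications of ours, not the MCAC algorithm itself.

```python
from collections import Counter, defaultdict

def mine_multilabel_rules(rows, labels, min_support=2):
    """rows: list of dicts {attribute: value}; labels: one class per row.
    Return rules mapping a single (attribute, value) item to every class
    that occurs with it at least min_support times."""
    counts = defaultdict(Counter)
    for row, cls in zip(rows, labels):
        for item in row.items():
            counts[item][cls] += 1
    rules = {}
    for item, class_counts in counts.items():
        classes = [c for c, n in class_counts.items() if n >= min_support]
        if classes:
            rules[item] = sorted(classes)      # one rule, possibly many labels
    return rules

rows = [{"slot": "am", "room": "A"}, {"slot": "am", "room": "B"},
        {"slot": "am", "room": "A"}, {"slot": "am", "room": "B"}]
labels = ["trainer1", "trainer2", "trainer1", "trainer2"]
print(mine_multilabel_rules(rows, labels))
# ('slot', 'am') is kept with both trainer labels, unlike single-label AC.
```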
There has been growing interest from academia and industry in developing circuits and systems for edge computing and quality-control tasks in food production lines, where image processing is frequently required. This paper outlines the considerations required for designing a fruit classification system based on image processing using Cellular Automata (CA) models and integrating it into reconfigurable hardware (HW) such as Field Programmable Gate Arrays (FPGAs). Parallel processing in CA requires numerous processing elements to be implemented, and mapping CA models to HW generally comes with limitations. Homogeneous CA arrays are easier to design and implement in HW but can be resource-demanding. To address this, the study explores different alternatives for the HW implementation of CA models, particularly trading computational parallelism for a more optimized use of the available HW resources. We conducted experimental tests of the designed HW system using the Digilent Nexys development board, and the operation was validated against software-based benchmarks for image processing, particularly edge detection. The presented study provides a broader range of design solutions for the HW implementation of two-dimensional CA models and a better understanding of their advantages and disadvantages. The results show that solutions focusing on instruction parallelism add some complexity to the design and require more design effort compared to homogeneous CA models composed of identical cells. However, the instruction-parallel design solutions can significantly improve HW resource utilization, especially when implementing computationally intensive CA rules in FPGAs.
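The hardware itself would be described in an HDL; purely as a software reference for what an edge-detecting two-dimensional CA rule computes, here is a sketch in Python. The specific rule, a foreground cell that survives only if it touches at least one background cell in its von Neumann neighbourhood, is a common boundary-extraction choice and only an assumption here.

```python
import numpy as np

def ca_edge_step(grid):
    """One synchronous update of a binary CA: a cell stays 1 only if it is
    foreground and has at least one background neighbour (von Neumann
    neighbourhood), which leaves the object boundary."""
    padded = np.pad(grid, 1, constant_values=0)
    up    = padded[:-2, 1:-1]
    down  = padded[2:, 1:-1]
    left  = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    has_background_neighbour = (up & down & left & right) == 0
    return grid & has_background_neighbour

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 1                      # a filled square
print(ca_edge_step(img))               # only its outline survives
```

In a homogeneous HW array each cell would evaluate this rule in parallel; the instruction-parallel alternatives discussed above time-multiplex fewer processing elements over the grid.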
Based on the theory of constitution in Traditional Chinese Medicine (TCM), the human population can be classified into nine constitutions, including a balanced constitution and eight unbalanced constitutions (Yang-deficient, Yin-deficient, Qi-deficient, Phlegm-wetness, Wetness-heat, Stagnant blood, Depressed, and Inherited special constitutions). Generally, unbalanced constitutions are more susceptible to certain diseases than the balanced constitution. However, whether this constitution classification has a modern genetic and biochemical basis is poorly understood. Here we examined gene expression profiles in peripheral white blood cells from eight individuals with the Yang-deficient constitution and six individuals with the balanced constitution using the Affymetrix U133 Plus 2.0 expression array. Based on a q < 0.05 and fold-change ≥ 2 cutoff, we identified 785 genes that are up-regulated and 954 genes that are down-regulated in the Yang-deficient constitution compared to the balanced constitution. Importantly, we found that the expression of thyroid hormone receptor beta (TRβ) and several key nuclear receptor coactivators, including steroid receptor coactivator 1 (SRC1), steroid receptor coactivator 3 (SRC3), cAMP-response element-binding protein (CREB) binding protein (CBP) and Mediator, is significantly decreased. Such decreased expression of the TR transcription complex may lead to impaired thermogenesis, providing a molecular explanation for the main symptom associated with the Yang-deficient constitution, cold intolerance. Future studies are needed to validate these gene expression changes in additional populations and to address the mechanisms underlying the differential gene expression.
We evaluate the Arrows Classification Method (ACM) for grouping objects based on the similarity of their data. This new method aims to achieve a balance between the conflicting objectives of maximizing internal cohesion and external isolation in the output groups. The method is widely applicable, especially in simulation input and output modelling, and has previously been used for grouping machines on an assembly line based on time-to-repair data, and hospital procedures based on length-of-stay data. The similarity of the data from a pair of objects is measured using the two-sample Cramér-von-Mises goodness-of-fit statistic, with bootstrapping employed to find the significance, or p-value, of the calculated statistic. The p-values from the paired comparisons serve as inputs to the ACM and allow the objects to be classified such that no pair of objects grouped together has significantly different data. In this article, we give the technical details of the method and evaluate its use through testing with specially generated samples. We also demonstrate its practical application with two real examples.
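A small sketch of the pairwise-comparison step, assuming SciPy's two-sample Cramér-von-Mises statistic and a simple resampling estimate of the p-value (a permutation scheme is used here as a stand-in for the paper's bootstrap, and the function names other than `cramervonmises_2samp` are our own). The ACM grouping logic itself is not reproduced.

```python
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(0)

def resampled_pvalue(x, y, n_resamples=2000):
    """Resampling estimate of the p-value of the two-sample
    Cramér-von-Mises statistic between samples x and y."""
    observed = cramervonmises_2samp(x, y).statistic
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        stat = cramervonmises_2samp(pooled[:len(x)], pooled[len(x):]).statistic
        count += stat >= observed
    return (count + 1) / (n_resamples + 1)

# Example: time-to-repair samples from two machines on an assembly line.
machine_a = rng.exponential(scale=3.0, size=40)
machine_b = rng.exponential(scale=3.2, size=35)
print(resampled_pvalue(machine_a, machine_b))
# A large p-value means the two objects may be placed in the same group.
```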
Key issues and essential features of classical and quantum strings in gravitational plane waves, shock waves and space–time singularities are synthetically understood. This includes the string mass and mode-number excitations, the energy–momentum tensor, scattering amplitudes, vacuum polarization and the wave-string polarization effect. The role of the real pole singularities characteristic of the tree-level string spectrum (real mass resonances) and that of the space–time singularities is clearly exhibited. This throws light on the issue of singularities in string theory, which can thus be classified and fully physically characterized in two different sets: strong singularities (poles of order ≥ 2, and black holes), where the string motion is collective and non-oscillating in time, outgoing states and the scattering sector do not appear, and the string does not cross the singularities; and weak singularities (poles of order < 2, to which the Dirac δ belongs, and conic/orbifold singularities), where the whole string motion is oscillatory in time, outgoing and scattering states exist, and the string crosses the singularities.
Common features of strings in singular wave backgrounds and in inflationary backgrounds are explicitly exhibited.
The string dynamics and the scattering/excitation through the singularities (whatever their kind, strong or weak) are fully physically consistent and meaningful.
A new morphological boundary-detection approach is used to separate signal from background in the Standard Model Higgs boson search at the LHC. Based on mathematical morphology, the method consists of a fast computation of the probabilistic density functions of events and a smoothing using a combination of dilation and erosion operators. In a binary search approach, the performance is improved and the results compare favourably with other multivariate analyses.
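As an illustration of the kind of operation involved, here is a hedged sketch that smooths a binned two-dimensional density estimate with a greyscale closing (dilation followed by erosion) using scipy.ndimage, and then extracts a crude high-density region. The bin counts, the structuring-element size and the signal/background mixture are placeholders of ours, not the analysis of the paper.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

rng = np.random.default_rng(0)

# Binned density estimate of events in two discriminating variables.
signal = rng.normal(loc=[0.3, 0.7], scale=0.05, size=(500, 2))
background = rng.uniform(size=(5000, 2))
events = np.vstack([signal, background])
density, _, _ = np.histogram2d(events[:, 0], events[:, 1],
                               bins=50, range=[[0, 1], [0, 1]])

# Morphological smoothing: dilation followed by erosion (a closing)
# fills small gaps in the density before any boundary is extracted.
structure = np.ones((3, 3))
smoothed = grey_erosion(grey_dilation(density, footprint=structure),
                        footprint=structure)

# A crude signal region: bins where the smoothed density is unusually high.
signal_region = smoothed > np.percentile(smoothed, 95)
print(signal_region.sum(), "candidate signal bins")
```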
In particle physics, the search for signals of new particles in proton–proton collisions is an ongoing effort. The energies and luminosities have reached a level where new search techniques are becoming a necessity. In this work, we develop a search technique for a light charged Higgs boson (nearly degenerate with the W boson), which is extremely hard to find with traditional cut-based methods. To this end, we employ a deep anomaly-detection approach to extract the signal (the light charged Higgs particle) from the vast W-boson background. We construct a Deviation Network (DevNet) that directly produces anomaly scores used to identify signal events, using background data and a few labeled signal events. Our results show that DevNet is able to find regions of high efficiency and gives better performance than autoencoders, the classic semi-supervised anomaly-detection method. This shows that employing Deviation Networks in particle physics can provide a distinct and powerful approach for searching for new particles.
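A minimal sketch of the deviation-loss idea behind DevNet as we understand it from Pang et al.'s formulation: a scoring network is trained so that background scores stay near reference scores drawn from a standard Gaussian prior, while the few labeled signal events are pushed at least a margin above them. The network size, margin and prior are illustrative, and none of the event-level details of the analysis are reproduced.

```python
import torch
import torch.nn as nn

class Scorer(nn.Module):
    """Small network mapping an event's features to a scalar anomaly score."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def deviation_loss(scores, y, margin=5.0, n_ref=5000):
    """y = 0 for (mostly background) unlabeled events, y = 1 for labeled signal.
    Scores are standardised against reference scores drawn from N(0, 1);
    background is pulled toward the reference mean, signal pushed at least
    `margin` standard deviations above it."""
    ref = torch.randn(n_ref)
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)
    return ((1 - y) * dev.abs() + y * torch.clamp(margin - dev, min=0)).mean()

# Toy usage with random features standing in for event kinematics.
x = torch.randn(256, 10)
y = (torch.rand(256) < 0.02).float()      # a few labeled "signal" events
model = Scorer(10)
loss = deviation_loss(model(x), y)
loss.backward()
```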
We examine first integrals and linearization methods for the second-order ordinary differential equation called the fin equation in this study. Fins are heat-exchange surfaces that are widely used in industry. We analyze the symmetry classification of the fin equation with respect to different choices of the thermal conductivity and heat transfer coefficient functions. Finally, we apply a nonlocal transformation to the fin equation and examine the results for different functions.
It is of great significance to identify the characteristics of time series in order to quantify their similarity and classify different classes of time series. We define six types of triadic time-series motifs and investigate the motif occurrence profiles extracted from time series. Based on the triadic time-series motif profiles, we further propose to estimate similarity coefficients between different time series and classify these time series with high accuracy. We validate the method with time series generated from nonlinear dynamical systems (the logistic map, chaotic logistic map, chaotic Hénon map, chaotic Ikeda map, hyperchaotic generalized Hénon map and hyperchaotic folded-tower map) and retrieved from the UCR Time Series Classification Archive. Our analysis shows that the proposed triadic time-series motif analysis performs better than the classic dynamic time warping method in classifying time series for certain datasets investigated in this work.
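A sketch of one plausible reading of the motif profile: we take the six motifs to be the six order patterns of three consecutive values (an assumption of ours; ties are broken by position), and use a simple overlap coefficient between profiles, which is not necessarily the paper's similarity coefficient.

```python
import numpy as np
from itertools import permutations

PATTERNS = {p: i for i, p in enumerate(permutations(range(3)))}  # 6 motifs

def motif_profile(series):
    """Frequency of each of the six order patterns of (x_t, x_{t+1}, x_{t+2})."""
    counts = np.zeros(6)
    for t in range(len(series) - 2):
        window = series[t:t + 3]
        pattern = tuple(np.argsort(np.argsort(window)))  # rank pattern
        counts[PATTERNS[pattern]] += 1
    return counts / counts.sum()

def similarity(series_a, series_b):
    """Similarity coefficient between two motif profiles (1 = identical)."""
    pa, pb = motif_profile(series_a), motif_profile(series_b)
    return 1.0 - 0.5 * np.abs(pa - pb).sum()

# Logistic-map trajectories at two parameter values as toy time series.
def logistic(r, n=2000, x0=0.4):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

print(similarity(logistic(3.9), logistic(3.91)))   # chaotic vs chaotic
print(similarity(logistic(3.9), logistic(3.5)))    # chaotic vs periodic
```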
With changes in people's lifestyles and travel modes, understanding individual and population mobility patterns in urban areas remains an outstanding problem. Pervasive mobile communication technologies generate voluminous data related to human mobility, such as mobile phone data. To further study the characteristics of the returning and exploration patterns of human movement in urban space, a multi-index model is proposed based on the original radius of gyration index. In this paper, the classification mechanism based on a single ratio of the radius of gyration for k-explorers and k-returners is illustrated, and some disadvantages of this mechanism are noted. A few indices of the model are proposed for deep mining of the returning and exploration characteristics in human mobility data. Taking mobile phone data covering an entire month as a sample, and after data processing on the Spark platform, the characteristics of various indicators and their correlations are analyzed. The classification effects of different spatial indices for human exploration and returning are compared using a support vector machine binary classification algorithm and are further compared with existing research results. The differences in the classification effects of these indicators are analyzed, which is helpful for in-depth studies of urban mobility patterns.
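To make the baseline index concrete, here is a sketch of the total and k-radius of gyration and the standard single-ratio rule for k-returners versus k-explorers, following the usual definitions from the returners/explorers literature; the paper's multi-index extension is not reproduced, and the toy trajectory is invented.

```python
import numpy as np

def radius_of_gyration(locations, visits):
    """locations: (n, 2) coordinates; visits: visit counts per location."""
    weights = visits / visits.sum()
    center = (weights[:, None] * locations).sum(axis=0)
    return np.sqrt((weights * ((locations - center) ** 2).sum(axis=1)).sum())

def k_radius_of_gyration(locations, visits, k=2):
    """Radius of gyration restricted to the k most frequented locations."""
    top = np.argsort(visits)[::-1][:k]
    return radius_of_gyration(locations[top], visits[top])

def is_k_returner(locations, visits, k=2):
    """Single-ratio rule: a k-returner if the k most frequented locations
    dominate the total mobility (ratio > 1/2), otherwise a k-explorer."""
    return (k_radius_of_gyration(locations, visits, k)
            > 0.5 * radius_of_gyration(locations, visits))

# Toy user: commutes between two nearby places, with a few longer trips.
locations = np.array([[0.0, 0.0], [5.0, 0.0], [9.0, 7.0], [12.0, 3.0]])
visits = np.array([120, 90, 2, 1])
print("k-returner" if is_k_returner(locations, visits, k=2) else "k-explorer")
```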
Face recognition is a vastly researched topic in the field of computer vision. A lot of work has been done on facial recognition in two and three dimensions, but the amount of work on face recognition that is invariant to image-processing attacks is very limited. This paper presents a total of three classes of image-processing attacks on face recognition systems, namely image enhancement attacks, geometric attacks and image noise attacks. Well-known machine learning techniques have been used to train and test the face recognition system on two different databases, namely the Bosphorus Database and the University of Milano Bicocca three-dimensional (3D) Face Database (UMBDB). Three classes of classification models, namely discriminant analysis, support vector machines and k-nearest neighbors, along with ensemble techniques, have been implemented. The significance of the machine learning techniques is discussed, and visual verification has been performed under multiple image-processing attacks.
Machine learning (ML) is the automated extraction of models (or patterns) from data. All ML techniques start with data. These data describe the desired relationship between the ML model's inputs and outputs, the latter of which may be implicit for unsupervised approaches. Equivalently, these data encode the requirements we wish to be embodied in our ML model. Thereafter, model selection comes into play to choose an efficient ML model. In this paper, we focus on various ML models that are extensions of the well-known support vector machine (SVM). The main objective of this paper is to compare existing ML models with the variants of SVM. Limitations of the existing techniques, including the SVM variants, are then identified. Finally, future directions are presented.
This paper designs a novel classification hardware framework based on neural networks (NNs). It utilizes the COordinate Rotation DIgital Computer (CORDIC) algorithm to implement the activation function of the NNs. The training was performed in software using an error back-propagation algorithm (EBPA) implemented in C++; the final weights were then loaded into the implemented hardware framework to perform classification. The hardware framework is developed in the Xilinx 9.2i environment using VHDL as the programming language. Classification tests are performed on benchmark datasets obtained from the UCI machine learning repository. The results are compared with competitive classification approaches on the same datasets. Extensive analysis reveals that the proposed hardware framework provides more efficient results than the existing classifiers.
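To illustrate how CORDIC can realise a neural activation function, here is a hedged software sketch of hyperbolic rotation-mode CORDIC evaluating tanh (with iterations 4 and 13 repeated, as the method requires). The actual framework is fixed-point VHDL, so this floating-point Python version only mirrors the arithmetic; the iteration count and scaling choices are assumptions.

```python
import math

def cordic_tanh(z, n_iterations=16):
    """Hyperbolic rotation-mode CORDIC: drives the angle z to zero with
    shift-and-add micro-rotations, accumulating cosh and sinh up to a
    common gain factor that cancels in the ratio tanh = sinh / cosh.
    Converges for |z| up to roughly 1.1."""
    # Build the iteration index sequence; 4 and 13 are repeated once
    # for convergence of the hyperbolic mode.
    indices = []
    i = 1
    while len(indices) < n_iterations:
        indices.append(i)
        if i in (4, 13) and indices.count(i) < 2:
            continue                      # repeat this index once more
        i += 1
    x, y = 1.0, 0.0                       # scaled cosh / sinh accumulators
    for i in indices:
        d = 1.0 if z >= 0 else -1.0       # rotate toward z = 0
        x, y = x + d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atanh(2.0 ** -i)
    return y / x                          # CORDIC gain cancels in the ratio

for t in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(t, cordic_tanh(t), math.tanh(t))
```

In hardware the multiplications by 2^-i become bit shifts and the atanh values a small lookup table, which is what makes CORDIC attractive for implementing activation functions on an FPGA.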