The motion of a pitching airfoil at low Reynolds number and high angle of attack involves nonlinear dynamics and unstable characteristics, making it a fascinating area for research. In this study, we address the precise modeling of this complex fluid flow phenomenon. By combining manifold learning with time-delay embedding theory, stable nonlinear low-dimensional manifolds are extracted from the complex high-dimensional spatiotemporal data of the fluid system, simplifying the spatial representation of intricate physical behaviors. Subsequently, using the Least Absolute Shrinkage and Selection Operator (LASSO), we establish a set of ordinary differential equations governing the flow field, automatically detecting and retaining the dominant nonlinear terms and thereby clarifying the underlying flow physics. This research provides an efficient and accurate approach to the analysis and control of nonlinear dynamic systems.
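As a rough illustration of the sparse-regression step described above, the sketch below fits LASSO coefficients over a polynomial candidate library to recover an ODE right-hand side; the manifold coordinates `z`, the library terms, and the regularization strength are synthetic placeholders, not the study's actual data or settings.

```python
# Minimal sketch: sparse identification of governing ODEs with LASSO.
# `z` stands in for low-dimensional manifold coordinates sampled at step `dt`;
# the polynomial candidate library and all names here are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
dt = 1e-2
t = np.arange(0.0, 10.0, dt)
# Stand-in for manifold coordinates extracted from the flow field (2 modes).
z = np.column_stack([np.sin(t), np.cos(2.0 * t)]) + 0.01 * rng.standard_normal((t.size, 2))

# Finite-difference estimate of dz/dt.
dz = np.gradient(z, dt, axis=0)

# Candidate library: constant, linear, and quadratic terms of the coordinates.
def library(z):
    z1, z2 = z[:, 0], z[:, 1]
    return np.column_stack([np.ones_like(z1), z1, z2, z1**2, z1 * z2, z2**2])

theta = library(z)

# One sparse regression per state variable; a small alpha keeps only dominant terms.
coeffs = []
for k in range(z.shape[1]):
    model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000)
    model.fit(theta, dz[:, k])
    coeffs.append(model.coef_)

names = ["1", "z1", "z2", "z1^2", "z1*z2", "z2^2"]
for k, c in enumerate(coeffs):
    terms = [f"{w:+.3f}*{n}" for w, n in zip(c, names) if abs(w) > 1e-2]
    print(f"dz{k+1}/dt ~ " + " ".join(terms))
```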
Medical diagnosis can often be understood as a classification problem. In oncology, this typically involves differentiating between tumour types and grades, or some form of discrete outcome prediction. From the viewpoint of computer-based medical decision support, such classification requires accurate diagnoses of past cases to serve as training targets. Labeled databases of this kind are scarce in most areas of oncology, and especially so in neuro-oncology. In this context, semi-supervised learning oriented towards classification can be a sensible data modeling choice. In this study, semi-supervised variants of Generative Topographic Mapping, a model of the manifold learning family, are applied to two neuro-oncology problems: the diagnostic discrimination between different brain tumour pathologies, and the prediction of outcomes for a specific type of aggressive brain tumour. Their performance compared favorably with that of the alternative Laplacian Eigenmaps and Semi-Supervised SVM for Manifold Learning models in most of the experiments.
In this paper, we address the problems of modeling the acoustic space generated by a full-spectrum sound source and using the learned model for the localization and separation of multiple sources that simultaneously emit sparse-spectrum sounds. We lay theoretical and methodological grounds in order to introduce the binaural manifold paradigm. We perform an in-depth study of the latent low-dimensional structure of the high-dimensional interaural spectral data, based on a corpus recorded with a human-like audiomotor robot head. A nonlinear dimensionality reduction technique is used to show that these data lie on a two-dimensional (2D) smooth manifold parameterized by the motor states of the listener, or equivalently, the sound-source directions. We propose a probabilistic piecewise affine mapping model (PPAM) specifically designed to deal with high-dimensional data exhibiting an intrinsic piecewise linear structure. We derive a closed-form expectation-maximization (EM) procedure for estimating the model parameters, followed by Bayes inversion for obtaining the full posterior density function of a sound-source direction. We extend this solution to deal with missing data and redundancy in real-world spectrograms, and hence for 2D localization of natural sound sources such as speech. We further generalize the model to the challenging case of multiple sound sources and we propose a variational EM framework. The associated algorithm, referred to as variational EM for source separation and localization (VESSL), yields a Bayesian estimation of the 2D locations and time-frequency masks of all the sources. Comparisons of the proposed approach with several existing methods reveal that the combination of acoustic-space learning with Bayesian inference enables our method to outperform state-of-the-art methods.
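The sketch below illustrates only the dimensionality-reduction step of the pipeline, on synthetic stand-in data: high-dimensional vectors generated from two hidden angles are embedded into 2D with Isomap, used here as a generic nonlinear reducer rather than the paper's exact method; the PPAM/VESSL machinery is not reproduced.

```python
# Minimal sketch of the dimensionality-reduction step: recover a 2D latent
# parameterization from high-dimensional vectors that depend smoothly on two
# hidden angles. The synthetic data below stands in for interaural spectra.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
n, d = 2000, 50
azimuth = rng.uniform(-1.0, 1.0, n)      # hidden source-direction parameters
elevation = rng.uniform(-0.5, 0.5, n)

# High-dimensional observations: smooth nonlinear functions of the two angles.
freqs = np.arange(1, d + 1)
X = np.cos(np.outer(azimuth, freqs)) + np.sin(np.outer(elevation, freqs))
X += 0.05 * rng.standard_normal(X.shape)

embedding = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
print(embedding.shape)  # (2000, 2): a 2D manifold parameterized by direction
```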
This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves instead of straight lines. In contrast to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA has a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, this inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it makes it possible to interpret the identified features in the input domain, where the data has physical meaning, and to evaluate the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet–Serret frames (local features) and of generalized curvatures at any point of the space. Fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
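A minimal sketch of one PPA-style step follows, assuming the core idea of bending the leading principal direction into a curve via a univariate polynomial regression; the variable names, polynomial degree, and synthetic data are illustrative, and the paper's exact formulation may differ in detail.

```python
# Rough sketch of one PPA-style step: a univariate polynomial regression that
# bends the leading principal direction into a curve. Degree and names are
# illustrative; the paper's exact formulation may differ.
import numpy as np

rng = np.random.default_rng(2)
t = rng.uniform(-2, 2, 500)
X = np.column_stack([t, 0.5 * t**2]) + 0.05 * rng.standard_normal((500, 2))
X -= X.mean(axis=0)

# Leading principal direction (ordinary PCA).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v1 = Vt[0]                          # first principal direction
alpha = X @ v1                      # univariate projection (the curve parameter)
residual = X - np.outer(alpha, v1)  # component orthogonal to v1

# Univariate polynomial regression of the residual on the projection.
degree = 2
B = np.vander(alpha, degree + 1)    # columns: alpha^2, alpha, 1
W, *_ = np.linalg.lstsq(B, residual, rcond=None)
curve_residual = residual - B @ W   # what the polynomial curve cannot explain

print("residual variance before/after polynomial fit:",
      residual.var(), curve_residual.var())
```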
Gradient descent, or negative gradient flow, is a standard technique in optimization to find the minima of functions. Many implementations of gradient descent rely on discretized versions, i.e. moving in the gradient direction for a set step size, recomputing the gradient and continuing. In this paper, we present an approach to manifold learning where gradient descent takes place in the infinite-dimensional space ℰ = Emb(M, ℝ^N) of smooth embeddings ϕ of a manifold M into ℝ^N. Implementing a discretized version of gradient descent for P: ℰ → ℝ, a penalty function that scores an embedding ϕ ∈ ℰ, requires estimating how far we can move in a fixed direction — the direction of one gradient step — before leaving the space of smooth embeddings. Our main result is to give an explicit lower bound for this step length in terms of the Riemannian geometry of ϕ(M). In particular, we consider the case when the gradient of P is pointwise normal to the embedded manifold ϕ(M). We prove this case arises when P is invariant under diffeomorphisms of M, a natural condition in manifold learning.
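The following is only a finite, discretized caricature of the gradient-descent loop described above: the embedding is represented by a point cloud, the penalty is a made-up score, and the fixed step length stands in for the bound the paper derives; none of this reproduces the infinite-dimensional setting.

```python
# Finite, discretized caricature of the gradient-descent loop described above:
# the "embedding" is a point cloud, the penalty is a simple made-up score, and
# the fixed step length plays the role of the bound discussed in the paper.
import numpy as np

rng = np.random.default_rng(3)
phi = rng.standard_normal((100, 3))      # discretized embedding of M into R^3

def penalty(phi):
    # Illustrative penalty: pull all points toward the unit sphere.
    r = np.linalg.norm(phi, axis=1)
    return np.mean((r - 1.0) ** 2)

def grad_penalty(phi):
    r = np.linalg.norm(phi, axis=1, keepdims=True)
    return 2.0 * (r - 1.0) * phi / (r * phi.shape[0])

step = 0.1                                # fixed step length per gradient move
for it in range(200):
    phi = phi - step * grad_penalty(phi)  # move, then recompute the gradient

print("final penalty:", penalty(phi))
```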
Manifold learning has been demonstrated to be an effective way to represent the intrinsic geometrical structure of samples. In this paper, a new manifold learning approach, named Local Coordinates Alignment (LCA), is developed based on the alignment technique. LCA first obtains local coordinates as representations of a local neighborhood by preserving proximity relations on a patch, which is Euclidean. These extracted local coordinates are then aligned to yield the global embeddings. To solve the out-of-sample problem, a linearization of LCA (LLCA) is proposed. In addition, to handle the non-Euclidean nature of real-world data when constructing the local neighborhoods, kernel techniques are used to represent the similarity of pairwise points on a local patch. Empirical studies on both synthetic data and face image sets show the effectiveness of the developed approaches.
Manifold learning is an effective dimension reduction method for extracting nonlinear structures from high-dimensional data. Recently, manifold learning has received much attention within the research communities of image analysis, computer vision, and document data analysis. In this paper, we propose a boosted manifold learning algorithm for automatic 2D face recognition that uses AdaBoost to select the most discriminative projections for manifold learning, thereby exploiting the strengths of both techniques. Experimental results show that the proposed algorithm improves on existing benchmarks in terms of stability and recognition precision.
Many problems in pattern classification and feature extraction involve dimensionality reduction as a necessary processing step. Traditional manifold learning algorithms, such as ISOMAP, LLE, and Laplacian Eigenmaps, seek the low-dimensional manifold in an unsupervised way, while local discriminant analysis methods identify the underlying supervised submanifold structures. In addition, it is well known that the intraclass null subspace contains the most discriminative information when the original data lie in a high-dimensional space. In this paper, we seek the local null space in accordance with the null space LDA (NLDA) approach and show that its computational expense mainly depends on the number of connected edges in the graphs, which may still be unacceptable when a large number of samples are involved. To address this limitation, an improved local null space algorithm is proposed that employs the penalty subspace to approximate the local discriminant subspace. Compared with the traditional approach, the proposed method is more efficient and avoids the computational overload, at the cost of a theoretically slight loss of discriminant power. A comparative study on classification shows that the performance of the approximate algorithm is quite close to that of the exact one.
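For reference, the sketch below implements plain null space LDA, the baseline that the local and penalty-subspace variants build on: project onto the null space of the within-class scatter and maximize the between-class scatter there. The data and the numerical threshold are illustrative.

```python
# Standard null-space LDA step (the baseline, not the paper's local or
# penalty-subspace variant): keep the null space of the within-class scatter,
# then maximize between-class scatter inside it.
import numpy as np

def nlda(X, y, n_components):
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Sw = sum((X[y == c] - X[y == c].mean(0)).T @ (X[y == c] - X[y == c].mean(0))
             for c in classes)
    Sb = sum(len(X[y == c]) * np.outer(X[y == c].mean(0) - mean,
                                       X[y == c].mean(0) - mean) for c in classes)
    # Null space of Sw: eigenvectors with (numerically) zero eigenvalues.
    evals, evecs = np.linalg.eigh(Sw)
    null = evecs[:, evals < 1e-8 * evals.max()]
    # Maximize between-class scatter inside that null space.
    evals_b, evecs_b = np.linalg.eigh(null.T @ Sb @ null)
    order = np.argsort(evals_b)[::-1][:n_components]
    return null @ evecs_b[:, order]

rng = np.random.default_rng(6)
# 30 samples in 100 dimensions, 3 classes: a typical small-sample-size setting.
X = rng.standard_normal((30, 100))
X += np.repeat(np.eye(3), 10, axis=0) @ (3.0 * rng.standard_normal((3, 100)))
y = np.repeat([0, 1, 2], 10)
W = nlda(X, y, n_components=2)
print(W.shape)   # (100, 2): projection onto the most discriminative directions
```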
Locally linear embedding (LLE) is often invalid for sparse data sets because it simply carries the reconstruction weights obtained in the data space over to the embedding space. This paper proposes an improved method, a united locally linear embedding, to make the reconstruction more robust to data sparsity. In the proposed method, the neighborhood correlation matrix representing the position information of the points constructed from the embedding space is added to the correlation matrix in the original space, so that the reconstruction weights can be adjusted. As the reconstruction weights are adjusted gradually, the position information of the sparse points is also updated continually, and the local geometry of the data manifolds in the embedding space can be well preserved. Experimental results on both synthetic and real-world data show that the proposed approach is very robust to data sparsity.
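The quantity being adjusted is the standard LLE reconstruction weight vector; a minimal sketch of its computation for a single point is shown below (the proposed "united" correction of the correlation matrix is not reproduced), with an illustrative regularization term for near-singular local Gram matrices.

```python
# Standard LLE reconstruction weights for one point (the quantity the proposed
# method adjusts for sparse data); the regularization and k are illustrative.
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Weights w (summing to 1) that best reconstruct x from its neighbors."""
    Z = neighbors - x                 # shift neighbors to the query point
    G = Z @ Z.T                       # local Gram (correlation) matrix
    G += reg * np.trace(G) * np.eye(len(neighbors))   # regularize if singular
    w = np.linalg.solve(G, np.ones(len(neighbors)))
    return w / w.sum()

rng = np.random.default_rng(4)
x = rng.standard_normal(5)
neighbors = x + 0.1 * rng.standard_normal((4, 5))
w = lle_weights(x, neighbors)
print(w, w.sum())                     # weights sum to 1
```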
An improved manifold learning method, called Uncorrelated Local Fisher Discriminant Analysis (ULFDA), is proposed for face recognition. Motivated by the fact that statistically uncorrelated features are desirable for dimension reduction, we propose a new difference-based optimization objective function to seek a feature submanifold such that the within-manifold scatter is minimized and the between-manifold scatter is maximized simultaneously in the embedding space. We impose an appropriate constraint to make the extracted features statistically uncorrelated. The uncorrelated discriminant method has an analytic global optimal solution that can be computed by eigendecomposition. As a result, the proposed algorithm not only derives the optimal and lossless discriminative information, but also guarantees that all extracted features are statistically uncorrelated. Experiments on synthetic data and the AT&T, Extended Yale B, and CMU PIE face databases are performed to evaluate the proposed algorithm. The results demonstrate the effectiveness of the proposed method.
Locally linear embedding (LLE) depends on the Euclidean distance (ED) to select the k nearest neighbors. However, the ED may not reflect the actual geometric structure of the data, which may lead to the selection of ineffective neighbors. The aim of our work is to make full use of the local spectral angle (LSA) to find proper neighbors for dimensionality reduction (DR) and classification of hyperspectral remote sensing data. First, we propose an improved LLE method, called local spectral angle LLE (LSA-LLE), for DR. It uses the ED of the data to obtain a large-scale set of candidate neighbors, and then uses the spectral angle to select the exact neighbors within this set. Furthermore, a local spectral angle-based nearest neighbor classifier (LSANN) is proposed for classification. Experiments on two hyperspectral image data sets demonstrate the effectiveness of the presented methods.
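A minimal sketch of the two-stage neighbor selection follows: the Euclidean distance first picks a large candidate set, and the spectral angle then selects the final k neighbors within it. The data, k, and candidate-set size are placeholders.

```python
# Minimal sketch of two-stage neighbor selection: Euclidean distance picks a
# large candidate set, the spectral angle picks the final k neighbors inside it.
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra, insensitive to illumination scale."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def lsa_neighbors(X, i, k=10, k_coarse=40):
    d_euc = np.linalg.norm(X - X[i], axis=1)
    coarse = np.argsort(d_euc)[1:k_coarse + 1]          # skip the point itself
    angles = np.array([spectral_angle(X[i], X[j]) for j in coarse])
    return coarse[np.argsort(angles)[:k]]

rng = np.random.default_rng(5)
X = rng.random((200, 100))           # stand-in for 200 pixel spectra, 100 bands
print(lsa_neighbors(X, 0))
```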
Recently, many dimensionality reduction (DR) algorithms have been developed and successfully applied to feature extraction and representation in pattern classification. However, many applications need to re-project the features to the original space, and most DR algorithms cannot perform this reconstruction. Based on the manifold assumption, this paper proposes a General Manifold Reconstruction Framework (GMRF) to reconstruct the original data from the low-dimensional DR results. Compared with existing reconstruction algorithms, the framework has two significant advantages. First, the framework is independent of the DR algorithm: no matter which DR algorithm is used, the framework can recover the structure of the original data from its results. Second, the framework is space-saving: it does not need to store any training samples after training, and the storage GMRF requires for reconstruction is far less than that of the training samples. Experiments on different datasets demonstrate that the framework performs well in reconstruction.
In manifold learning, a neighborhood is often called a patch of the manifold, and the corresponding open set is called the local coordinate of the patch. The so-called alignment is to align the local coordinates in the d-dimensional Euclidean space to obtain the global coordinate of the manifold. There are two kinds of alignment methods: global and progressive. Global alignment methods align all local coordinates of the manifold at once by solving an eigenvalue problem. Progressive alignment methods take the local coordinate of one patch as the basic local coordinate and then attach the other local coordinates to it patch by patch until the basic local coordinate evolves into the global coordinate of the manifold. In this paper, a new progressive alignment method is proposed, in which only the local coordinates of the two patches with the largest intersection at the current stage are aligned into a larger local coordinate. It is inspired by the famous Huffman coding, where the two random events with the smallest probabilities at the current stage are merged into a random event with a larger probability; the proposed method is therefore a Huffman-like alignment method. Experiments on benchmark data show that the proposed method outperforms both the global alignment methods and the other progressive alignment methods and is more robust to changes in data size. Experiments on real-world data show the feasibility of the proposed method.
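The sketch below illustrates one such merge step under simple assumptions: patches are stored as dictionaries from sample index to local 2D coordinates, the pair with the largest intersection is chosen, and the second patch is mapped onto the first with an orthogonal Procrustes rotation plus a translation; the paper's actual alignment may differ.

```python
# Sketch of one Huffman-like merge step: pick the two patches with the largest
# intersection and align the second onto the first with an orthogonal
# Procrustes fit. Patches are dicts {sample index: local 2D coordinate};
# all of this is a simplified stand-in for the paper's procedure.
import numpy as np
from itertools import combinations
from scipy.linalg import orthogonal_procrustes

def merge_once(patches):
    # Choose the pair of patches sharing the most sample indices.
    a, b = max(combinations(range(len(patches)), 2),
               key=lambda ab: len(patches[ab[0]].keys() & patches[ab[1]].keys()))
    shared = sorted(patches[a].keys() & patches[b].keys())
    A = np.array([patches[a][i] for i in shared])
    B = np.array([patches[b][i] for i in shared])
    # Rotation + translation mapping patch b's coordinates onto patch a's.
    A0, B0 = A - A.mean(0), B - B.mean(0)
    R, _ = orthogonal_procrustes(B0, A0)
    t = A.mean(0) - B.mean(0) @ R
    merged = dict(patches[a])
    for i, y in patches[b].items():
        merged.setdefault(i, y @ R + t)
    rest = [p for j, p in enumerate(patches) if j not in (a, b)]
    return rest + [merged]

patches = [{0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])},
           {1: np.array([0.0, 1.0]), 2: np.array([-1.0, 0.0]), 3: np.array([1.0, 1.0])},
           {2: np.array([0.0, 0.0]), 3: np.array([1.0, 0.0]), 4: np.array([2.0, 0.0])}]
while len(patches) > 1:
    patches = merge_once(patches)
print(sorted(patches[0].keys()))      # the global coordinate covers all samples
```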
Facial expressions convey personal characteristics and subtle emotional states. This paper presents a new framework for modeling the subtle facial motions of different people with different types of expressions from high-resolution facial expression tracking data, in order to synthesize new stylized subtle facial expressions. A conceptual facial motion manifold is used as a unified representation of facial motion dynamics from three-dimensional (3D) high-resolution facial motions as well as from two-dimensional (2D) low-resolution facial motions. Variations in subtle facial motion across people and expression types are modeled by nonlinear mappings from the embedded conceptual manifold to the input facial motions using empirical kernel maps. We represent facial expressions by a factorized nonlinear generative model, which decomposes expression style factors and expression type factors from different people with multiple expressions. We also provide a mechanism to control the high-resolution facial motion model from low-resolution facial video sequence tracking and analysis. Using the decomposable generative model with a common motion manifold embedding, we can estimate parameters to control 3D high-resolution facial expressions from 2D tracking results, which allows performance-driven control of high-resolution facial expressions.
We propose an effective outlier detection algorithm for high-dimensional data. We consider manifold models of data as is typically assumed in dimensionality reduction/manifold learning. Namely, we consider a noisy data set sampled from a low-dimensional manifold in a high-dimensional data space. Our algorithm uses local geometric structure to determine inliers, from which the outliers are identified. The algorithm is applicable to both linear and nonlinear models of data. We also discuss various implementation issues and we present several examples to demonstrate the effectiveness of the new approach.
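As a simplified stand-in for the algorithm (whose exact scoring rule is not reproduced here), the sketch below scores each point by the residual of a local PCA fitted to its neighborhood, so that points far from the locally linear model stand out as likely outliers.

```python
# Generic local-geometry outlier score (a simplified stand-in for the paper's
# algorithm): fit a local PCA to each point's neighborhood and score the point
# by its residual distance to that local linear model.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_pca_scores(X, k=10, d=1):
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    scores = np.empty(len(X))
    for i, neigh in enumerate(idx[:, 1:]):          # exclude the point itself
        Z = X[neigh] - X[neigh].mean(axis=0)
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        tangent = Vt[:d]                            # local d-dim tangent basis
        r = X[i] - X[neigh].mean(axis=0)
        scores[i] = np.linalg.norm(r - tangent.T @ (tangent @ r))
    return scores

rng = np.random.default_rng(7)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.02 * rng.standard_normal((300, 2))
X = np.vstack([X, [[0.0, 0.0], [0.3, 0.2]]])        # two off-manifold points
scores = local_pca_scores(X, k=10, d=1)
print(np.argsort(scores)[-2:])                      # indices of likely outliers
```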
Image recognition and feature extraction play an important role in precision agriculture. In this paper, a manifold learning algorithm was used for dimension reduction of gray and RGB color images. To clarify the boundaries between disease spots and the leaf background, three clustering algorithms were applied in experiments to obtain clearer maize leaf disease images. Locally linear embedding (LLE) and Gustafson–Kessel (GK) algorithms were selected to realize image feature extraction. The recognition rates of feature extraction for gray and color images were 95% and 99%, respectively.
We confirm, via multi-Gaussian support vector machine (SVM) classification, that information about the intrinsic dimension of Riemannian manifolds can be used to characterize the efficiency (learning rates) of learning algorithms. We study an approximation scheme realized by convolution operators involving Gaussian kernels with flexible variances. The essential analysis lies in the study of its approximation order in the L^p (1 ≤ p < ∞) norm as the variance of the Gaussian tends to zero. This differs from the analysis of approximation in C(X), since pointwise estimates no longer apply. The L^p approximation arises in the SVM setting, where the approximated function is the Bayes rule and is, in general, not continuous. The approximation error is estimated by imposing a regularity condition, namely that the approximated function lies in some interpolation space. Learning rates for multi-Gaussian regularized classifiers with general classification loss functions are then derived, and these rates depend on the intrinsic dimension of the Riemannian manifold instead of the dimension of the underlying Euclidean space. Here, the input space is assumed to be a connected compact C^∞ Riemannian submanifold of ℝ^n. The uniform normal neighborhoods of the Riemannian manifold and the radial basis form of the Gaussian kernels play an important role.
In human body pose estimation, manifold learning has been considered a useful method for reducing the dimension of 2D images and 3D body configuration data. Most commonly, body pose is estimated from silhouettes derived from images or image sequences. A major problem in applying manifold learning to pose estimation is its vulnerability to silhouette variation caused by changes in factors such as viewpoint, person, and distance.
In this paper, we propose a novel approach that combines three separate manifolds for viewpoint, pose, and 3D body configuration, focusing on the problem of viewpoint-induced silhouette variation. Biased manifold learning is used to learn these manifolds with appropriately weighted distances. The proposed method requires four mapping functions, which are learned by a generalized regression neural network for robustness. Despite using only three manifolds, experimental results show that the proposed method can reliably estimate 3D body poses from 2D images across all learned viewpoints.
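A generalized regression neural network is essentially Nadaraya-Watson kernel regression; the minimal sketch below stands in for one of the four mapping functions, e.g. from a 2D manifold embedding to a body-configuration vector, with synthetic training pairs and an arbitrary bandwidth.

```python
# Minimal generalized regression neural network (Nadaraya-Watson kernel
# regression), standing in for one of the learned mapping functions, e.g. from
# a manifold embedding of a silhouette to a 3D body-configuration vector.
# The training pairs below are synthetic placeholders.
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.3):
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w[:, None] * Y_train).sum(axis=0) / (w.sum() + 1e-12)

rng = np.random.default_rng(8)
X_train = rng.uniform(-1, 1, (500, 2))            # points on a 2D pose manifold
Y_train = np.column_stack([np.sin(3 * X_train[:, 0]),
                           np.cos(2 * X_train[:, 1]),
                           X_train[:, 0] * X_train[:, 1]])  # fake joint params
x_new = np.array([0.2, -0.4])
print(grnn_predict(X_train, Y_train, x_new))
```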
The high dimensionality and heterogeneity of hyperspectral images (HSIs) pose a challenge to the application of machine learning methods such as sparse subspace clustering (SSC). SSC is designed to represent data as a union of affine subspaces, but it cannot capture the latent structure of the given data. According to Moser's theory, the distribution can represent the intrinsic structure efficiently. Hence, we propose a novel approach called spatial distribution preserving-based sparse subspace clustering (SSC-SDP) for HSI data, which helps the sparse representation preserve the underlying manifold structure. Specifically, a density constraint is added by minimizing the inconsistency between the densities estimated from the HSI data and from the corresponding sparse coefficient matrix. In addition, we incorporate spatial information into the density estimation of the original data, and an optimization solution based on the alternating direction method of multipliers (ADMM) is devised. Experiments on three HSI data sets are conducted to evaluate the performance of SSC-SDP compared with other state-of-the-art algorithms.
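For orientation, the sketch below shows the baseline SSC self-representation step that SSC-SDP extends (the density constraint and the ADMM solver are not reproduced): each sample is expressed as a sparse combination of the others, and the symmetrized coefficient matrix is clustered spectrally. The data and parameters are toy placeholders.

```python
# Baseline sparse subspace clustering step (not the paper's density-preserving
# ADMM variant): express each sample as a sparse combination of the others,
# then spectrally cluster the symmetrized coefficient matrix.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(9)
# Two 1D subspaces in R^20, 40 points each: a toy stand-in for HSI pixel spectra.
X = np.vstack([np.outer(rng.standard_normal(40), rng.standard_normal(20)),
               np.outer(rng.standard_normal(40), rng.standard_normal(20))])
X += 0.01 * rng.standard_normal(X.shape)

n = X.shape[0]
C = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
    lasso.fit(X[others].T, X[i])                 # x_i ~ sum_j c_ij x_j, j != i
    C[i, others] = lasso.coef_

A = np.abs(C) + np.abs(C).T                      # symmetric affinity matrix
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(labels[:40].mean(), labels[40:].mean())    # the two groups should separate
```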
Despite the great advances in the field of image classification, identifying approaches that bring improved results across different datasets is still an open challenge. In this work, a novel approach is presented based on a combination of strategies: feature extraction for early fusion; rankings based on manifold learning for late fusion; and feature augmentation applied in a long short-term memory (LSTM) algorithm. The proposed method investigates the effect of feature fusion (early fusion) and ranking fusion (late fusion) on the final image classification results. The experimental results showed that the proposed strategies improved accuracy on the different tested datasets (CIFAR10, Stanford Dogs, Linnaeus 5, Flowers 102, and Flowers 17) using a fusion of features from three convolutional neural networks (CNNs) (ResNet152, VGG16, and DPN92) and their respective rankings. The results indicated significant improvements and showed the potential of the proposed approach for image classification.
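The toy sketch below illustrates the two fusion operations in isolation: early fusion concatenates feature vectors from several extractors, and late fusion aggregates per-extractor rankings (here with a simple Borda count rather than the paper's manifold-learning-based rank fusion). The feature arrays are random placeholders for CNN features.

```python
# Toy illustration of the two fusion operations: early fusion concatenates
# feature vectors; late fusion combines per-extractor rankings with a Borda
# count (a stand-in for the paper's manifold-based rank aggregation).
import numpy as np

rng = np.random.default_rng(10)
n_images = 6
feats_a = rng.random((n_images, 8))      # e.g. ResNet152-like features
feats_b = rng.random((n_images, 4))      # e.g. VGG16-like features
feats_c = rng.random((n_images, 6))      # e.g. DPN92-like features

# Early fusion: one concatenated descriptor per image.
early = np.hstack([feats_a, feats_b, feats_c])
print("early-fusion descriptor size:", early.shape[1])

# Late fusion: rank images by similarity to a query under each feature set,
# then aggregate the rankings with a Borda count.
query = 0
def ranking(F):
    d = np.linalg.norm(F - F[query], axis=1)
    return np.argsort(d)                 # image indices, most similar first

borda = np.zeros(n_images)
for F in (feats_a, feats_b, feats_c):
    for position, image in enumerate(ranking(F)):
        borda[image] += n_images - position
print("fused ranking:", np.argsort(-borda))
```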