Face recognition is a challenging problem in computer vision and artificial intelligence. One of the main challenges consists in establishing a low-dimensional feature representation of the images with enough discriminatory power to perform high-accuracy classification. Different methods of supervised and unsupervised classification can be found in the literature, but few numerical comparisons among them have been performed on the same computing platform. In this paper, we perform this kind of comparison, revisiting the main spectral decomposition methods for face recognition. We also introduce, for the first time, the use of the noncentered PCA and the 2D discrete Chebyshev transform for biometric applications. Faces are represented by their spectral features, that is, their projections onto the different spectral bases. Classification is performed using different norms and/or the cosine defined by the Euclidean scalar product in the space of spectral attributes. Although this constitutes a simple algorithm of unsupervised classification, several important conclusions arise from this analysis: (1) All the spectral methods provide approximately the same accuracy when they are used with the same energy cutoff. This is an important conclusion, since many publications try to promote one specific spectral method over the others. Nevertheless, there are small variations in the highest median accuracy rates: PCA, 2DPCA and DWT perform better in this case. Also, all the covariance-free spectral decomposition techniques based on single images (DCT, DST, DCHT, DWT, DWHT, DHT) are very interesting, since they provide high accuracies and are not computationally expensive compared to covariance-based techniques. (2) The use of local spectral features generally provides higher accuracies than global features for the spectral methods that use the whole training database (PCA, NPCA, 2DPCA, Fisher's LDA, ICA). For the methods based on orthogonal transformations of single images, global features calculated using the whole size of the images appear to perform better. (3) The distance criterion generally provides a higher accuracy than the cosine criterion. The use of other p-norms (p > 2) provides results similar to the Euclidean norm, although some methods perform better with them. (4) No spectral method can provide 100% accuracy by itself; therefore, other kinds of attributes and supervised learning algorithms are needed. These results are consistent across the ORL and FERET databases. Finally, although this comparison has been performed for the face recognition problem, it could be generalized to other biometric authentication problems.
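A minimal sketch of the kind of unsupervised spectral pipeline the abstract describes: PCA (eigenface) features selected by an energy cutoff, followed by a nearest-neighbor decision under either the distance or the cosine criterion. The array layout, the 95% energy value, and the function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def pca_basis(X_train, energy=0.95):
    """PCA via SVD of the centered training matrix.

    X_train: (n_images, n_pixels) flattened face images.
    energy:  fraction of total variance retained (illustrative cutoff).
    """
    mean = X_train.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    var = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(var), energy)) + 1
    return mean, Vt[:k]                             # k spectral basis vectors

def classify(x, mean, basis, gallery_feats, gallery_labels, criterion="distance"):
    """Nearest-neighbor decision in the space of spectral attributes."""
    f = basis @ (x - mean)                          # project the test image
    if criterion == "distance":                     # Euclidean norm criterion
        scores = np.linalg.norm(gallery_feats - f, axis=1)
        return gallery_labels[np.argmin(scores)]
    cos = gallery_feats @ f / (
        np.linalg.norm(gallery_feats, axis=1) * np.linalg.norm(f) + 1e-12)
    return gallery_labels[np.argmax(cos)]           # cosine criterion
```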
Over the decades, the permanent magnet synchronous motor (PMSM) has been widely used in coal mine production. In this paper, an optimized neural network predictive controller (NNPC) for a permanent magnet direct-drive belt conveyor system (BCS) for mining, based on a reduced-order model (ROM), is established. First, in order to establish the full-order model of the permanent magnet direct-drive BCS, CEMA is used for dynamic analysis, and the dynamic equation of the permanent magnet direct-drive BCS is established. Second, the proper orthogonal decomposition (POD) method is used to reduce the model order. Finally, the NNPC of the permanent magnet direct-drive BCS based on the ROM is proposed. The simulation results show that the order of the BCS model is effectively reduced by the POD method. The NNPC based on the ROM performs well in the control of the permanent magnet direct-drive BCS, and the error between the full-order model and the ROM is 0.19 m/s.
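The POD reduction step can be summarized generically as below: a minimal sketch of snapshot-based POD and Galerkin projection of a linear(ized) model. The singular-value tolerance and matrix names are illustrative assumptions; the paper's BCS dynamics and NNPC design are not reproduced here.

```python
import numpy as np

def pod_reduce(snapshots, tol=1e-3):
    """Snapshot POD: keep the leading modes of the snapshot matrix.

    snapshots: (n_states, n_snapshots) columns are full-order states x(t_k).
    tol:       relative singular-value cutoff (illustrative).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = int(np.sum(s / s[0] > tol))        # reduced order
    return U[:, :r]                        # POD basis Phi

def project_lti(A, B, C, Phi):
    """Galerkin projection of x' = A x + B u, y = C x onto the POD basis."""
    Ar = Phi.T @ A @ Phi
    Br = Phi.T @ B
    Cr = C @ Phi
    return Ar, Br, Cr
```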
The lumped capacitance model, which ignores the existence of wire resistance, has traditionally been used to estimate the charging and discharging power consumption of CMOS circuits. We show that this model is not correct by pointing out that MOSFETs consume only part of the energy supplied by the source. During this study, it was revealed that about 20% of the power is consumed in the wire resistance of the buffered global interconnect when the interconnect is modeled with RC tree networks. The percentage rises to about 30% when an RLC model is used, indicating the importance of inductance in the interconnect model for power estimation. For RLC networks, we propose a compact yet very accurate power estimation method based on a model reduction technique.
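The underlying energy argument can be made explicit with a first-order lumped abstraction (a single driver resistance $R_d$ in series with a wire resistance $R_w$ charging a load capacitance $C_L$; this is a simplification of the distributed RC tree used in the paper). Per charge cycle the source supplies $C_L V_{dd}^2$, half of which is stored on the capacitor and half dissipated in the resistances, split in proportion to their values, so the MOSFET dissipates only the driver share:

```latex
E_{\mathrm{source}} = C_L V_{dd}^2, \qquad
E_{\mathrm{cap}} = \tfrac{1}{2} C_L V_{dd}^2, \qquad
E_{\mathrm{diss}} = \tfrac{1}{2} C_L V_{dd}^2
  = \underbrace{\frac{R_d}{R_d + R_w}\,\tfrac{1}{2} C_L V_{dd}^2}_{\text{driver (MOSFET)}}
  + \underbrace{\frac{R_w}{R_d + R_w}\,\tfrac{1}{2} C_L V_{dd}^2}_{\text{wire}} .
```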
In recent years, model order reduction (MOR) of interconnect systems has become an important technique to reduce the computational complexity and improve the verification efficiency in nanometer VLSI design. The Krylov subspace techniques in existing MOR methods are efficient, and have become the methods of choice for generating small-scale macro-models of the large-scale multi-port RCL networks that arise in VLSI interconnect analysis. Although Krylov subspace projection-based MOR methods have been widely studied over the past decade in the electrical computer-aided design community, none of them provides an optimal solution for a given order. In this paper, a minimum-norm least-squares solution for MOR by Krylov subspace methods is proposed. The method is based on generalized inverse (or pseudo-inverse) theory. This enables a new criterion for Krylov subspace projection-based MOR methods. Two numerical examples are used to test the PRIMA method based on the approach proposed in this paper, with the standard PRIMA model as a reference.
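A generic sketch of the two ingredients named above: a PRIMA-style congruence projection onto a block Krylov subspace of an RCL model, and the minimum-norm least-squares solution given by the Moore–Penrose pseudo-inverse. The matrix names, the QR orthonormalization, and the helper functions are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def krylov_basis(G, C, B, q):
    """Orthonormal basis of the block Krylov subspace K_q(G^{-1}C, G^{-1}B),
    as used in congruence-projection MOR of RCL networks (PRIMA-style)."""
    A = np.linalg.solve(G, C)
    R = np.linalg.solve(G, B)
    blocks = [R]
    for _ in range(q - 1):
        R = A @ R
        blocks.append(R)
    V, _ = np.linalg.qr(np.hstack(blocks))     # orthonormalize the moments
    return V

def reduce_rcl(G, C, B, V):
    """Congruence projection preserving the structure of (G, C, B)."""
    return V.T @ G @ V, V.T @ C @ V, V.T @ B

def min_norm_ls(A, b):
    """Minimum-norm least-squares solution x = A^+ b: among all minimizers
    of ||Ax - b||_2 it is the one of smallest ||x||_2."""
    return np.linalg.pinv(A) @ b
```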
Many of the tools of dynamical systems and control theory have gone largely unused for fluids, because the governing equations are so dynamically complex, both high-dimensional and nonlinear. Model reduction involves finding low-dimensional models that approximate the full high-dimensional dynamics. This paper compares three different methods of model reduction: proper orthogonal decomposition (POD), balanced truncation, and a method called balanced POD. Balanced truncation produces better reduced-order models than POD, but is not computationally tractable for very large systems. Balanced POD is a tractable method for computing approximate balanced truncations that has a computational cost similar to that of POD. The method presented here is a variation of existing methods using empirical Gramians, and the main contributions of the present paper are a version of the method of snapshots that allows one to compute balancing transformations directly, without separate reduction of the Gramians, and an output projection method that allows tractable computation even when the number of outputs is large. The output projection method requires minimal additional computation and has a priori error bounds that can guide the choice of the rank of the projection. Connections between POD and balanced truncation are also illuminated: in particular, balanced truncation may be viewed as POD of a particular dataset, using the observability Gramian as an inner product. The three methods are illustrated on a numerical example, the linearized flow in a plane channel.
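A bare-bones sketch of the snapshot-based balanced POD construction: the balancing and adjoint modes are obtained from one small SVD of the product of the adjoint and direct snapshot matrices, without forming or reducing the Gramians separately. Quadrature weights and the output projection step are omitted, so this is an illustration under simplifying assumptions rather than the paper's full algorithm.

```python
import numpy as np

def balanced_pod(X, Y, r):
    """Balancing modes from snapshots (method of snapshots).

    X: (n, mx) snapshots from direct (primal) impulse-response runs.
    Y: (n, my) snapshots from adjoint (dual) impulse-response runs.
    r: order of the reduced model.
    """
    U, s, Wt = np.linalg.svd(Y.T @ X, full_matrices=False)   # small SVD
    Sr = np.diag(s[:r] ** -0.5)
    Phi = X @ Wt[:r].T @ Sr        # direct (balancing) modes
    Psi = Y @ U[:, :r] @ Sr        # adjoint modes, with Psi^T Phi = I_r
    return Phi, Psi

def reduce_model(A, B, C, Phi, Psi):
    """Petrov-Galerkin projection of x' = A x + B u, y = C x."""
    return Psi.T @ A @ Phi, Psi.T @ B, C @ Phi
```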
In a broad sense, model reduction means producing a low-dimensional dynamical system that replicates either approximately, or more strictly, exactly and topologically, the output of a dynamical system. Model reduction plays an important role in the study of dynamical systems and also in engineering problems. In many cases, a good low-dimensional model exists even for very high-dimensional systems, and even for infinite-dimensional systems in the case of a PDE with a low-dimensional attractor. The theory of global attractors approaches these issues analytically, and focuses on finding, depending on the question at hand, a slow manifold, inertial manifold, or center manifold on which a restricted dynamical system represents the interesting behavior of the original system; the main issue is to define a stable manifold on which the dynamics are invariant. These approaches are analytical in nature, however, and are therefore not always appropriate for dynamical systems known only empirically through a dataset. Empirically, the available tools are much more restricted and are essentially linear in nature. Usually, variants of Galerkin's method project the dynamical system onto a linear subspace of functions spanned by modes of some chosen spanning set. Even the popular Karhunen–Loeve decomposition, or POD, is exactly such a method. As such, it is forced either to make severe errors when the invariant space is intrinsically a highly nonlinear manifold, or to forgo low dimensionality by retaining many modes in order to capture the manifold. In this work, we present a method of modeling a low-dimensional nonlinear manifold known only through a dataset. The manifold is modeled as a discrete graph structure. Intrinsic manifold coordinates are found through the ISOMAP algorithm recently developed in the machine learning community, originally for purposes of image recognition.
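A minimal sketch of the empirical starting point described above, using scikit-learn's Isomap to recover intrinsic manifold coordinates from a dataset of state snapshots. The neighborhood size, embedding dimension, and the toy helix example are illustrative assumptions; fitting reduced dynamics in those coordinates is only indicated, not implemented.

```python
import numpy as np
from sklearn.manifold import Isomap

def intrinsic_coordinates(snapshots, dim=2, n_neighbors=10):
    """Embed high-dimensional snapshots into intrinsic manifold coordinates
    via ISOMAP (geodesic distances on a k-NN graph followed by MDS)."""
    embedding = Isomap(n_neighbors=n_neighbors, n_components=dim)
    return embedding.fit_transform(snapshots)      # (n_samples, dim)

# Toy example: a helix in R^3 is a one-dimensional manifold; ISOMAP unrolls
# it into a single intrinsic coordinate in which a reduced model could be fitted.
t = np.linspace(0, 4 * np.pi, 400)
data = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
coords = intrinsic_coordinates(data, dim=1)
```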
A method for finding reduced-order approximations of turbulent flow models is presented. The method preserves bounds on the production of turbulent energy in the sense of the norm of perturbations from a notional laminar profile. This is achieved by decomposing the Navier–Stokes system into a feedback arrangement between the linearized system and the remaining, normally neglected, nonlinear part. The linear system is reduced using a method similar to balanced truncation, but preserving bounds on the supply rate. The method involves balancing two algebraic Riccati equations. The bounds are then used to derive bounds on the turbulent energy production. An example of the application of the procedure to flow through a long straight pipe is presented. Comparison shows that the new method approximates the supply rate at least as well as, or better than, canonical balanced truncation.
Many biological and geological systems can be modeled as porous media with small inclusions. Vascularized tissue, roots embedded in soil or fractured rocks are examples of such systems. In these applications, tissue, soil or rocks are considered to be porous media, while blood vessels, roots or fractures form small inclusions. To model flow processes in thin inclusions, one-dimensional (1D) models of Darcy- or Poiseuille type have been used, whereas Darcy-equations of higher dimension have been considered for the flow processes within the porous matrix. A coupling between flow in the porous matrix and the inclusions can be achieved by setting suitable source terms for the corresponding models, where the source term of the higher-dimensional model is concentrated on the center lines of the inclusions. In this paper, we investigate an alternative coupling scheme. Here, the source term lives on the boundary of the inclusions. By doing so, we lift the dimension by one and thus increase the regularity of the solution. We show that this model can be derived from a full-dimensional model and the occurring modeling errors are estimated. Furthermore, we prove the well-posedness of the variational formulation and discuss the convergence behavior of standard finite element methods with respect to this model. Our theoretical results are confirmed by numerical tests. Finally, we demonstrate how the new coupling concept can be used to simulate stationary flow through a capillary network embedded in a biological tissue.
Many physical problems involving heterogeneous spatial scales, such as the flow through fractured porous media, the study of fiber-reinforced materials, or the modeling of the small circulation in living tissues, to mention just a few examples, can be described as coupled partial differential equations defined in domains of heterogeneous dimensions that are embedded into each other. This formulation is a consequence of geometric model reduction techniques that transform the original problems, defined in complex three-dimensional domains, into more tractable ones. The definition and the approximation of coupling operators suitable for this class of problems is still a challenge. We develop a general mathematical framework for the analysis and the approximation of partial differential equations coupled by non-matching constraints across different dimensions, focusing on their enforcement using Lagrange multipliers. In this context, we address in abstract and general terms the well-posedness, stability, and robustness of the problem with respect to the smallest characteristic length of the embedded domain. We also address the numerical approximation of the problem and we discuss the inf-sup stability of the proposed numerical scheme for some representative configurations of the embedded domain. The main message of this work is twofold: from the standpoint of the theory of mixed-dimensional problems, we provide general and abstract mathematical tools to formulate coupled problems across dimensions. From the practical standpoint of the numerical approximation, we show the interplay between the mesh characteristic size, the dimension of the Lagrange multiplier space, and the size of the inclusion in representative configurations of interest for applications. The latter analysis is complemented with illustrative numerical examples.
The two-sided second-order Arnoldi algorithm is used to generate a reduced order model of two test cases of fully coupled, acoustic interior cavities, backed by flexible structural systems with damping. The reduced order model is obtained by applying a Galerkin–Petrov projection of the coupled system matrices, from a higher dimensional subspace to a lower dimensional subspace, whilst preserving the low frequency moments of the coupled system. The basis vectors for projection are computed efficiently using a two-sided second-order Arnoldi algorithm, which generates an orthogonal basis for the second-order Krylov subspace containing moments of the original higher dimensional system. The first model is an ABAQUS benchmark problem: a 2D, point loaded, water filled cavity. The second model is a cylindrical air-filled cavity, with clamped ends and a load normal to its curved surface. The computational efficiency, error and convergence are analyzed, and the two-sided second-order Arnoldi method shows better efficiency and performance than the one-sided Arnoldi technique, whilst also preserving the second-order structure of the original problem.
This paper proposes to use the method of principal components to reduce the dimensionality of the input space and B-splines to represent the membership functions of the input variables. A model reduction strategy, based on Johansen's optimality theorem, is also suggested. The utility of this approach is illustrated using a fuzzy system modeling example.
Vibration-based global damage detection through updating of a finite element (FE) model targeting modal measurements is a significant area of interest in structural health monitoring (SHM). In a typical modal testing setup, the measured mode shapes have missing components for various degrees of freedom (DOFs) due to the limited number of available sensors. In this context, a novel Gibbs sampling approach is proposed for updating the FE model incorporating model reduction (MR) to facilitate global-level detection of structural damage from incomplete modal measurements. Besides the convenience of working with analytical and experimental mode shapes of similar size, the proposed Gibbs sampling approach (for updating the reduced-order FE model in the Bayesian framework) has some important advantages: (A) system mode shapes need not be treated as parameters (unlike in the typical Gibbs sampling approach), which significantly reduces the number of parameters, and (B) mode matching is not required, which reduces the computation time considerably. A generalized formulation is presented in this work, providing the scope for incorporating measurements from multiple sensor setups. Moreover, the formulations are adapted to incorporate multiple sets of data/measurements from each setup to address epistemic uncertainty. Finally, validation is carried out with both numerical (truss structure and building structure) and experimental (laboratory building structure) exercises, in comparison with the typical Gibbs sampling approach using a full-sized model. The proposed approach proves to be a computationally efficient technique with satisfactory performance in FE model updating and global damage detection.
This paper proposes a novel structural damage identification approach coupling the Mayfly algorithm (MA) with a static displacement-based response surface (RS). First, a hybrid optimization objective function is established that simultaneously considers the sensitivity-based residual errors of the static damage identification equation and the static displacement residual. In the objective function, the static damage identification equation is handled by the Tikhonov regularization technique. The MA is then employed to conduct an optimal search and pinpoint the location and intensity of damage at the structural element level. To handle the mismatch between the static loading points and the displacement measurement points, model reduction and displacement extension techniques are implemented to reconstruct the static damage identification equation. Meanwhile, the static displacement-based RS is constructed to calculate the displacement residual in the hybrid objective function, thereby circumventing time-consuming finite element calculations and improving computational efficiency. The identification results for a numerical box girder bridge demonstrate that the proposed method outperforms the particle swarm optimization, differential evolution, Jaya, and whale optimization algorithms in terms of both convergence rate in the optimal search and identification accuracy. The proposed method enables more accurate damage identification compared to methods based solely on the residual of the static damage identification equation or on the displacement residual. The results of identifying damage in a 21-element truss structure and static experiments on identifying damage in an aluminum alloy cantilever beam confirm the high efficiency of the proposed approach.
In a two-state Markov chain with time-periodic dynamics, we study path properties such as the sojourn time in one state between two consecutive jumps or the distribution of the first jump. This is done in order to exhibit a resonance interval and an optimal tuning rate, interpreting the phenomenon of stochastic resonance through quality notions related to interspike intervals. We consider two cases representing the reduced dynamics of particles diffusing in time-periodic potentials: Markov chains with piecewise constant periodic infinitesimal generators and Markov chains with time-continuous periodic generators.
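A minimal simulation sketch of the piecewise-constant case: a two-state chain whose jump rates switch every half-period, from which empirical sojourn-time statistics of the kind studied above can be collected. The rate values, the period, and the simple time-discretized (thinning) simulation are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sojourns(T=10.0, r_fast=2.0, r_slow=0.2, t_end=200.0):
    """Two-state chain with time-periodic, piecewise-constant jump rates.

    During the first half-period state 0 jumps at rate r_fast and state 1
    at rate r_slow; the roles are swapped in the second half-period.
    Returns the sequence of sojourn times between consecutive jumps.
    """
    t, state, last_jump, sojourns = 0.0, 0, 0.0, []
    dt = 1e-3                                    # discretization step
    while t < t_end:
        first_half = (t % T) < T / 2
        rate = (r_fast if first_half else r_slow) if state == 0 \
               else (r_slow if first_half else r_fast)
        if rng.random() < rate * dt:             # jump occurs in [t, t+dt)
            sojourns.append(t - last_jump)
            last_jump, state = t, 1 - state
        t += dt
    return np.array(sojourns)

sojourns = simulate_sojourns()
print("mean sojourn time:", sojourns.mean())
```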
We provide a mathematical underpinning of the physically widely known phenomenon of stochastic resonance, i.e. the optimal noise-induced increase of a dynamical system's sensitivity and ability to amplify small periodic signals. The effect was first discovered in energy-balance models designed for a qualitative understanding of global glacial cycles. More recently, stochastic resonance has been rediscovered in more subtle and realistic simulations interpreting paleoclimatic data: the Dansgaard–Oeschger and Heinrich events. The underlying mathematical model is a diffusion in a periodically changing potential landscape with a large forcing period. We study optimal tuning of the diffusion trajectories with the deterministic input forcing by means of the spectral power amplification measure. Our results contain a surprise: due to small fluctuations in the potential valley bottoms, the diffusion, contrary to physical folklore, does not show tuning patterns corresponding to the continuous-time Markov chains that describe the reduced motion on the metastable states. This discrepancy can only be avoided for more robust notions of tuning, e.g. spectral amplification after elimination of the small fluctuations.
Complex systems may often be characterized by their hierarchical dynamics. In this paper we present a method and an operational algorithm that automatically infer this property for a broad range of systems, namely discrete stochastic processes. The main idea is to systematically explore the set of projections from the state space of a process to smaller state spaces, and to determine which of the projections impose Markovian dynamics on the coarser level. These projections, which we call Markov projections, then constitute the hierarchical dynamics of the system. The algorithm operates on time series or other statistics, so a priori knowledge of the intrinsic workings of a system is not required in order to determine its hierarchical dynamics. We illustrate the method by applying it to two simple processes: a finite state automaton and an iterated map.
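A minimal sketch of the core test such an algorithm relies on: given an observed symbol sequence and a candidate projection (lumping) of states, check empirically whether the projected process is first-order Markov by comparing transition frequencies conditioned on one versus two preceding coarse states. The tolerance and the plain frequency comparison are simplifying assumptions; the paper's algorithm explores the whole set of projections systematically.

```python
from collections import defaultdict

def is_markov_projection(series, projection, tol=0.05):
    """Empirical first-order Markov test for a projected (lumped) process.

    series:     iterable of fine-level states.
    projection: dict mapping each fine state to a coarse state.
    Returns True if P(next | current) and P(next | previous, current)
    agree within `tol` for all observed contexts.
    """
    coarse = [projection[s] for s in series]
    one = defaultdict(lambda: defaultdict(int))   # counts given (current)
    two = defaultdict(lambda: defaultdict(int))   # counts given (prev, current)
    for p, c, n in zip(coarse, coarse[1:], coarse[2:]):
        one[c][n] += 1
        two[(p, c)][n] += 1
    for (p, c), nxt in two.items():
        tot2 = sum(nxt.values())
        tot1 = sum(one[c].values())
        for n, k in nxt.items():
            if abs(k / tot2 - one[c][n] / tot1) > tol:
                return False
    return True
```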
A time-domain technique for estimating dynamic loads acting on a structure from structural response measured experimentally at a finite number of optimally placed sensors on the structure is presented. The technique relies on an existing solution method based on dynamic programming, which consists of a backward (inverse) time sweeping phase followed by a forward time sweeping phase. The dynamic programming method of load identification, similar to all other inverse methods, suffers from ill-conditioning. Small variations (noise) in response measurements can cause large errors in load estimates. The condition of the inverse problem, and hence the quality of load estimates, depends on the locations of sensors on the structure. There can be a large number of locations on a structure where sensors can potentially be mounted. A D-optimal design algorithm is used to arrive at optimal sensor locations such that the condition of the inverse problem is improved and precise load estimates are obtained. Another major limitation of the dynamic programming technique is that the computation time increases dramatically as the model size increases. To deal with this shortcoming, a technique based on Craig–Bampton model reduction is also proposed in this paper. Numerical results illustrate the effectiveness of the proposed technique in accurately recovering the loads imposed on discrete as well as continuous systems.
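A minimal sketch of one common way to carry out D-optimal sensor selection: starting from a candidate sensitivity (or mode-shape) matrix with one row per candidate location, rows are added greedily so as to maximize the log-determinant of the resulting information matrix, which improves the conditioning of the inverse problem. The greedy strategy, the regularization, and the matrix layout are illustrative assumptions, not the specific D-optimal design algorithm used in the paper.

```python
import numpy as np

def greedy_d_optimal(Phi, n_sensors):
    """Greedy D-optimal subset selection of sensor locations.

    Phi:       (n_candidates, n_modes) response sensitivity at each
               candidate sensor location.
    n_sensors: number of sensors to place.
    Returns the indices of the selected rows (sensor locations).
    """
    selected = []
    for _ in range(n_sensors):
        best_idx, best_det = None, -np.inf
        for i in range(Phi.shape[0]):
            if i in selected:
                continue
            A = Phi[selected + [i]]
            # log-det of the information matrix A^T A (regularized so the
            # first picks, where A^T A is singular, remain comparable)
            _, logdet = np.linalg.slogdet(A.T @ A + 1e-9 * np.eye(Phi.shape[1]))
            if logdet > best_det:
                best_idx, best_det = i, logdet
        selected.append(best_idx)
    return selected
```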
In this paper a method for the identification of simplified linear models for building structures is applied to the case when acceleration, rather than displacement, is measured. A frame from benchmark structural controller studies is simulated, and from the input-output data of these simulations, simplified models for the acceleration response of the frame are obtained that have far fewer degrees of freedom. One of these simplified models is used to design a controller, which is tested using an evaluation model from the benchmark controller studies and found to be effective.
In this paper, an efficient numerical approach is proposed to study free and forced vibration of complex one-dimensional (1D) periodic structures. The proposed method combines the advantages of component mode synthesis (CMS) and wave finite element method. It exploits the periodicity of the structure since only one unit cell is modelled. The model reduction based on CMS improves the computational efficiency of unit cell dynamics, avoiding ill-conditioning issues. The selection of reduced modal basis can reveal the influence of local dynamics on global behavior. The effectiveness of the proposed approach is illustrated via numerical examples.
An introductory, pedagogical review of the generalized Langevin equation (GLE) within the classical regime is presented. It is intended to be accessible to biophysicists with an interest in molecular dynamics (MD).
Section 1 explains why the equation may be of interest in biophysical modeling. A detailed, elementary, first-principles derivation of the (multidimensional) Kac–Zwanzig model is presented. The literature is reviewed with a focus on biophysical applications and on representation by Markovian stochastic differential equations. The relationship with the Mori–Zwanzig formalism is discussed. The framework of model reduction and approximation is emphasized. Some open problems are identified.
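For reference, in one common convention the classical GLE takes the form (one-dimensional notation, with memory kernel $K$ and the fluctuation-dissipation relation at temperature $T$); the Kac–Zwanzig model recovers this structure when the particle is coupled linearly to a bath of harmonic oscillators:

```latex
m\,\ddot{q}(t) = -V'\!\big(q(t)\big) - \int_0^{t} K(t-s)\,\dot{q}(s)\,\mathrm{d}s + F(t),
\qquad
\langle F(t)\,F(s) \rangle = k_B T\, K(t-s).
```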