This paper presents a statistical framework for identifying circular flaws in structures using natural frequency data and Bayesian inference, explicitly addressing uncertainties arising from modeling errors and measurement noise. In this approach, the circular flaw is characterized by parameters such as the center coordinates and radius. The natural frequencies of the structure, measured under known boundary conditions, serve as the input data for the identification process. A smoothed finite element method (SFEM) forward model, which predicts the natural frequency shifts due to the presence of flaws, is integrated into the analysis. By combining observed frequency data with prior knowledge, Bayes' theorem is employed to refine the probability distributions of the flaw parameters. A Markov chain Monte Carlo (MCMC) algorithm is used to sample from the posterior distributions of the parameters, ensuring robust uncertainty quantification. A numerical case study validates the proposed method, highlighting its accuracy and effectiveness in detecting and characterizing circular flaws.
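As a rough illustration of the sampling step described above, the following Python sketch draws posterior samples of the flaw centre and radius with a random-walk Metropolis algorithm. The SFEM forward model is replaced by a hypothetical surrogate function, and the plate geometry, prior bounds and noise level are assumed purely for illustration; it is not the solver used in the paper.

```python
import numpy as np

# Hypothetical surrogate standing in for the SFEM forward model: maps flaw
# parameters (centre x, centre y, radius) to predicted natural frequencies (Hz).
def forward_model(theta):
    cx, cy, r = theta
    base = np.array([120.0, 310.0, 585.0])            # assumed pristine frequencies
    shift = 100.0 * r**2 * np.array([1.0 + cx, 0.8 + cy, 0.5 + cx * cy])
    return base - shift

def log_prior(theta):
    cx, cy, r = theta
    # Uniform prior: centre inside a unit square plate, radius in (0, 0.2)
    if 0.0 < cx < 1.0 and 0.0 < cy < 1.0 and 0.0 < r < 0.2:
        return 0.0
    return -np.inf

def log_likelihood(theta, freqs_obs, sigma=0.1):
    # Independent Gaussian measurement/modelling errors on each frequency
    resid = freqs_obs - forward_model(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(freqs_obs, n_iter=20000, step=(0.02, 0.02, 0.005), seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([0.5, 0.5, 0.05])                 # initial guess
    logp = log_prior(theta) + log_likelihood(theta, freqs_obs)
    samples = []
    for _ in range(n_iter):
        prop = theta + np.asarray(step) * rng.standard_normal(3)
        logp_prop = log_prior(prop)
        if np.isfinite(logp_prop):
            logp_prop += log_likelihood(prop, freqs_obs)
        if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject
            theta, logp = prop, logp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Synthetic "measurements" from a flaw at (0.3, 0.6) with radius 0.08
truth = np.array([0.3, 0.6, 0.08])
freqs_obs = forward_model(truth) + 0.1 * np.random.default_rng(1).standard_normal(3)
post = metropolis(freqs_obs)[5000:]                    # discard burn-in
print("posterior mean (cx, cy, r):", post.mean(axis=0))
```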
Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably infinite or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and many deep theoretical analyses of their various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level, with special emphasis on characteristics relevant to machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines, in which similar ideas have been investigated.
Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to draw precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.
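As a minimal illustration of the GP regression machinery discussed in these tutorials, the sketch below computes the posterior mean and covariance for a toy 1-D dataset with a squared-exponential kernel; the kernel hyperparameters and noise level are fixed by hand rather than selected by the model selection procedures mentioned above.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=0.1, lengthscale=1.0, variance=1.0):
    # Standard GP regression equations: posterior mean and covariance of f(X_test)
    K = rbf_kernel(X_train, X_train, lengthscale, variance) + noise**2 * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test, lengthscale, variance)
    Kss = rbf_kernel(X_test, X_test, lengthscale, variance)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, cov

# Toy 1-D example
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
Xs = np.linspace(-3, 3, 100)
mu, cov = gp_posterior(X, y, Xs)
std = np.sqrt(np.diag(cov))
print(mu[:5], std[:5])
```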
For M/EEG-based distributed source imaging, it has been established that L2-norm-based methods are effective for imaging spatially extended sources, whereas L1-norm-based methods are better suited to estimating focal and sparse sources. However, when the spatial extents of the sources are unknown a priori, the rationale for using either type of method is not adequately supported. Bayesian inference exploiting the spatio-temporal information of the patch sources holds great promise as a tool for adaptive source imaging, but both computational and methodological limitations remain to be overcome. In this paper, based on state-space modeling of the M/EEG data, we propose a fully data-driven and scalable algorithm, termed STRAPS, for M/EEG patch source imaging on high-resolution cortices. Unlike existing algorithms, a recursive penalized least squares (RPLS) procedure is employed to efficiently estimate the source activities, in place of computationally demanding Kalman filtering/smoothing. Furthermore, the coefficients of the multivariate autoregressive (MVAR) model characterizing the spatial-temporal dynamics of the patch sources are estimated in a principled manner via empirical Bayes. Extensive numerical experiments demonstrate STRAPS's excellent performance in estimating the locations, spatial extents and amplitudes of patch sources with varying spatial extents.
We present the results of a Bayesian analysis of a Regge model for K+Λ photoproduction. The model is based on the exchange of K+(494) and K*+(892) trajectories in the t-channel. For different prior widths, we find decisive Bayesian evidence (Δ ln Z ≈ 24) for a K+Λ photoproduction Regge model with a positive vector coupling and a negative tensor coupling constant for the K*+(892) trajectory, and a rotating phase factor for both trajectories. This conclusion could not be drawn from the same dataset using the χ2 minimization method.
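The kind of evidence comparison quoted above (Δ ln Z) can be illustrated with a deliberately crude Monte Carlo estimator that averages the likelihood over prior draws; the toy one-parameter models and priors below are invented for illustration and bear no relation to the Regge amplitudes analysed in the paper, where a far more careful evidence computation is required.

```python
import numpy as np
from scipy.special import logsumexp

# Crude Monte Carlo estimate of the Bayesian evidence Z = integral L(theta) pi(theta) dtheta,
# obtained by averaging the likelihood over prior draws; nested sampling or thermodynamic
# integration would be preferred in practice for sharply peaked likelihoods.
def log_evidence(log_likelihood, prior_sampler, n_draws=50000, seed=0):
    rng = np.random.default_rng(seed)
    theta = prior_sampler(rng, n_draws)
    logL = log_likelihood(theta)
    return logsumexp(logL) - np.log(n_draws)

# Toy comparison of two one-parameter "models" against synthetic data
rng = np.random.default_rng(1)
data = rng.normal(1.0, 1.0, size=50)

def logL_model(mu):
    # Gaussian likelihood of the data for each candidate mean in the array mu
    return (-0.5 * np.sum((data[None, :] - mu[:, None]) ** 2, axis=1)
            - 0.5 * len(data) * np.log(2 * np.pi))

# Model A: wide prior centred at 0; Model B: prior pinned near the wrong value
prior_A = lambda rng, n: rng.normal(0.0, 5.0, n)
prior_B = lambda rng, n: rng.normal(-2.0, 0.1, n)

lnZ_A = log_evidence(logL_model, prior_A)
lnZ_B = log_evidence(logL_model, prior_B)
print("Delta ln Z =", lnZ_A - lnZ_B)   # a large positive value favours model A
```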
In this paper we generalize the belief propagation based rigid shape matching algorithm to nonparametric belief propagation based parameterized shape matching. We construct a local-global shape descriptor based cost function to compare the distances among landmarks in each data set, which is equivalent to the Hamiltonian of a spin glass. The constructed cost function is invariant to rigid transformations, so parameterized shape matching can be achieved by searching for the optimal shape parameter and the correspondence assignment that minimize the cost function. The optimization procedure is then approximated by a Monte Carlo simulation based MAP estimation on a graphical model, i.e. nonparametric belief propagation. Experiments on a principal component analysis (PCA) based point distribution model (PDM) of the proximal femur illustrate the effects of two key factors: the topology of the graphical model and the renormalization of the shape parameters of the parameterized shape. Other factors that can influence its performance and its computational complexity are also discussed.
The hybrid Monte Carlo (HMC) algorithm is applied to Bayesian inference of the stochastic volatility (SV) model. We use the HMC algorithm for the Markov chain Monte Carlo updates of the volatility variables of the SV model. First we estimate the parameters of the SV model using artificial financial data and compare the results from the HMC algorithm with those from the Metropolis algorithm. We find that the HMC algorithm decorrelates the volatility variables faster than the Metropolis algorithm. Second we carry out an empirical study of the Nikkei 225 stock index time series using the HMC algorithm. We find correlation behavior for the sampled data similar to that observed for the artificial financial data, and obtain a ϕ value close to one (ϕ ≈ 0.977), which indicates that the time series exhibits strong persistence of the volatility shocks.
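A generic HMC sampler of the kind applied to the volatility variables can be sketched as follows; the leapfrog step size, trajectory length and the two-dimensional Gaussian stand-in target are illustrative assumptions, not the SV posterior used in the study.

```python
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples=5000, eps=0.1, n_leapfrog=20, seed=0):
    """Basic hybrid (Hamiltonian) Monte Carlo with a leapfrog integrator."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)                  # resample momenta
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of Hamiltonian dynamics
        p_new += 0.5 * eps * grad_log_prob(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_prob(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_prob(x_new)
        # Metropolis accept/reject on the total energy
        h_old = -log_prob(x) + 0.5 * p @ p
        h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
        if np.log(rng.uniform()) < h_old - h_new:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy target: a correlated 2-D Gaussian standing in for the high-dimensional
# posterior over SV latent volatilities (which would supply its own gradient).
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(cov)
logp = lambda x: -0.5 * x @ prec @ x
grad = lambda x: -prec @ x
draws = hmc_sample(logp, grad, x0=np.zeros(2))
print("sample covariance:\n", np.cov(draws[1000:].T))
```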
In an attempt to understand the nature of information processing at the neuronal level and its relation to normal cognition at the behavioral level, this paper presents a neurally plausible computational theory of probability inference inspired by certain biological properties of neurons. The probability inference problem that the neural networks of this theory solve is the production of either probabilities or likelihoods associated with one set of events based on another set of events. The theory describes how a single neuron might create a probability in an analog fashion. It describes how certain optimal computational principles required for probability inference (e.g., Bayes' rule) might be implemented at the neuronal level despite apparent limitations of the computational capacity of neurons. In fact, it is the fundamental properties of neurons that lead to the particular neuronal computation. Finally, this report attempts to connect this neural network theory to existing psychological models that assume Bayesian inference schemes.
Unreplicated factorial designs are widely used for designed experimentation in industry. In the analysis of designed experiments, the experimental factors influencing the response must be identified and separated from those that do not. An abundance of procedures intended to perform this selection have been introduced in the literature. A recent study indicated that the procedure due to Box and Meyer outperforms the other selection procedures in terms of efficiency and robustness. The procedure of Box and Meyer rests on a quasi-Bayesian foundation and utilizes generic domain knowledge, in the form of an a priori probability, common to all factors, that a factor significantly influences the response, to calculate an a posteriori probability for each factor. This paper suggests a strategy for introducing more elaborate domain knowledge about the experimental factors into the procedure of Box and Meyer, aiming to further improve its performance.
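A simplified version of the Box and Meyer calculation can be sketched as below, assuming centred effect estimates, a non-informative prior on the error scale, a common prior activity probability alpha and an inflation factor k; the numerical values of the effects are invented for illustration.

```python
import numpy as np
from itertools import combinations

def box_meyer(effects, alpha=0.2, k=10.0):
    """Posterior probability that each effect is active (Box-Meyer style sketch).

    effects : estimated factorial effects from an unreplicated design
    alpha   : common prior probability that any one factor is active
    k       : inflation factor for the standard deviation of active effects
    """
    y = np.asarray(effects, float)
    m = len(y)
    log_weights, subsets = [], []
    for r in range(m + 1):
        for S in combinations(range(m), r):
            active = np.zeros(m, bool)
            active[list(S)] = True
            # Marginal likelihood with the error scale integrated out under a
            # non-informative prior: proportional to k^{-|S|} * Q^{-m/2}, where Q
            # mixes inactive effects with shrunken active effects.
            Q = np.sum(y[~active] ** 2) + np.sum((y[active] / k) ** 2)
            logw = (r * np.log(alpha) + (m - r) * np.log(1 - alpha)
                    - r * np.log(k) - 0.5 * m * np.log(Q))
            log_weights.append(logw)
            subsets.append(active)
    w = np.exp(np.array(log_weights) - max(log_weights))
    w /= w.sum()
    # Per-factor posterior probability: sum model weights over models containing it
    return np.array([w[[S[i] for S in subsets]].sum() for i in range(m)])

# Toy example: 7 effects, two of which are clearly large
effects = [11.2, -0.8, 0.5, -9.7, 1.1, -0.3, 0.9]
print(np.round(box_meyer(effects), 3))
```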
The reliability of a repairable system that is either improving or deteriorating depends on the system's chronological age. If such a system undergoes "minimal repair" at the occurrence of each failure so that the rate of system failures is not disturbed by the repair, then a nonhomogeneous Poisson process (NHPP) may be used to model the "age-dependent" reliability of the system. The power law process (PLP) is a model within the class of NHPP models and is commonly used as a model for describing the failure times of a repairable system. We introduce a new model that is an extension of the PLP model: the power law process change-point model. This model is capable of describing the failure times of particular types of repairable systems that experience a single change in their rates of occurrence of failures. Bayesian inference procedures for this model are developed.
The classical Gumbel probability distribution is modified in order to study the failure times of a given system. Bayesian estimates of the reliability function under five different parametric priors and the squared error loss are studied. The Bayesian reliability estimate under a non-parametric kernel density prior is compared with those under the parametric priors, and numerical computations are given to study their effectiveness.
This paper provides Bayesian and classical inference for the stress–strength reliability parameter R = P[Y < X], where X and Y are independently distributed as 3-parameter generalized linear failure rate (GLFR) random variables with different parameters. Given the importance of stress–strength models in various fields of engineering, we address the maximum likelihood estimator (MLE) of R and the corresponding interval estimate using efficient numerical methods. The Bayes estimates of R are derived under the squared error loss function. Because the Bayes estimates cannot be expressed in closed form, we employ a Markov chain Monte Carlo procedure to calculate approximate Bayes estimates. To evaluate the performance of the different estimators, extensive simulations are implemented and real datasets are analyzed.
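Because the GLFR posterior requires MCMC, the sketch below illustrates the Bayes estimate of R = P[Y < X] under squared error loss in the simplest conjugate setting, with exponential stress and strength models and gamma priors standing in for the three-parameter GLFR distributions of the paper.

```python
import numpy as np

# Sketch of the Bayes estimate of R = P[Y < X] under squared error loss, using
# exponential stress/strength models with conjugate gamma priors in place of the
# 3-parameter GLFR distributions (whose posterior requires MCMC, as in the paper).
rng = np.random.default_rng(0)

# Synthetic strength (X) and stress (Y) data
x = rng.exponential(scale=1.0 / 0.5, size=40)   # X ~ Exp(rate 0.5)
y = rng.exponential(scale=1.0 / 1.5, size=40)   # Y ~ Exp(rate 1.5)

# Gamma(a0, b0) priors on the exponential rates give gamma posteriors
a0, b0 = 1.0, 1.0
lam_x = rng.gamma(a0 + len(x), 1.0 / (b0 + x.sum()), size=20000)
lam_y = rng.gamma(a0 + len(y), 1.0 / (b0 + y.sum()), size=20000)

# For exponentials, P[Y < X] = lam_y / (lam_x + lam_y); average over the posterior
R_draws = lam_y / (lam_x + lam_y)
print("Bayes estimate of R:", R_draws.mean())
print("95% credible interval:", np.quantile(R_draws, [0.025, 0.975]))
```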
Underground high-voltage power transmission cables are high-value engineering assets that suffer from multiple forms of deterioration throughout their life cycles. Recent studies identified a new failure mode: pitting corrosion of the phosphor bronze reinforcing tape, which protects oil-filled power transmission cables from oil leakage due to deterioration of the lead sheath. Two models estimating the phosphor bronze tape life were established separately in this study. The first model, based on mathematical fitting, is generated using a replacement priority model from the power supply industry and is considered an empirical model. The second model, based on the corrosion fatigue mechanism, utilizes information on the pit depth distribution and the concept of pit-to-crack transfer probability. A Bayesian inference approach serves as the conjunction algorithm that updates the existing probability of failure (PoF) model with the newly identified failure mode. Through this algorithm, the integrated PoF model incorporates more comprehensive background information while retaining the empirical knowledge of the engineering assets' performance.
This paper develops a Bayesian model for pricing derivative securities with prior information on volatility. Prior information is given in terms of expected values of levels and rates of precision (the inverse of variance). We provide several approximate formulas for valuing European call options, based on asymptotic and polynomial approximations of Bessel functions.
Load effect characterization under traffic flow has received tremendous attention in bridge engineering, and uncertainty quantification (UQ) of load effects is critical in the inference process. A Bayesian probabilistic approach is developed to overcome the unreliability caused by neglecting parametric and modeling uncertainties. Stochastic traffic load simulation is conducted by embedding a random inflow component into the Nagel–Schreckenberg (NS) model, and load effects are calculated from stochastic traffic load samples and influence lines. Two levels of UQ are performed for traffic load effect characterization: at the parametric level, not only the optimal parameter values but also the associated uncertainties are identified; at the model level, rather than using a single prescribed probability model for load effects, a set of candidate probability distribution models is proposed, and the model probability of each candidate is evaluated to select the most suitable/plausible probability distribution model. Analytical work provides closed-form solutions for the expressions involved in both parametric and model-level UQ. In the simulated examples, the efficiency and robustness of the proposed approach are first validated, and UQ is then performed on load effect data obtained by varying the structural span length under changing total traffic volume. It turns out that the uncertainties of load effects are traffic-specific and response-specific, so it is important to conduct UQ of load effects under different traffic scenarios using the developed approach.
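The model-level UQ idea, selecting among candidate probability models for a load effect, can be sketched with a BIC-type approximation to the evidence in place of the closed-form expressions derived in the paper; the candidate families, synthetic sample and equal prior model probabilities below are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

# Sketch of model-level UQ: rank candidate probability models for a load-effect
# sample by approximate posterior model probability, using a BIC (Laplace-type)
# approximation of the evidence and equal prior model probabilities.
load_effect = stats.gumbel_r.rvs(loc=800.0, scale=60.0, size=500, random_state=0)

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "gumbel":    stats.gumbel_r,
    "weibull":   stats.weibull_min,
}

log_evid = {}
for name, dist in candidates.items():
    params = dist.fit(load_effect)
    loglik = np.sum(dist.logpdf(load_effect, *params))
    k = len(params)
    # BIC-based evidence approximation: ln Z ~ ln L_max - (k/2) ln n
    log_evid[name] = loglik - 0.5 * k * np.log(len(load_effect))

# Normalise to posterior model probabilities
vals = np.array(list(log_evid.values()))
probs = np.exp(vals - vals.max())
probs /= probs.sum()
for name, p in zip(log_evid, probs):
    print(f"{name:10s} P(M|D) = {p:.3f}")
```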
This paper deals with an indirect health monitoring strategy for bridges using an instrumented vehicle. Thermodynamic principles are used to relate the change in Vehicle–Bridge Interaction (VBI) forces to the change in dynamic tyre pressure. The damage identification process involves two stages. In the first stage, the unknown tyre model parameters are estimated using Bayesian inference based on calibration data. The approach uses a Stein variational gradient descent implementation of Bayes' rule to quantify the uncertainty in the estimated tyre parameters. In the second stage, the calibrated tyre model is used to reconstruct the change in VBI force from measured tyre pressure data for a damaged bridge. It is observed that damage in the bridge produces notable changes in the VBI force. Contour plots based on VBI force and natural frequency are developed for damage detection, and the reconstructed change in VBI force is used to quantify damage from these plots. Further, a least squares estimation approach is adopted for damage identification by defining appropriate objective functions and imposing constraints on the damage indicators; the damage is estimated by minimizing the objective function using the cuckoo search algorithm. Numerical experiments reveal that the developed method can accurately identify damage in the presence of measurement noise, uncertainty in the estimated tyre parameters, and uncertainty in the bridge model parameters.
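A bare-bones Stein variational gradient descent update of the kind used in the first stage is sketched below; the two-dimensional Gaussian target stands in for the tyre-parameter posterior, and the kernel bandwidth heuristic, step size and particle count are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def svgd_update(particles, grad_log_p, stepsize=0.05):
    """One Stein variational gradient descent step with an RBF kernel."""
    n, d = particles.shape
    sq_dists = squareform(pdist(particles)) ** 2
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8          # median heuristic bandwidth
    K = np.exp(-sq_dists / h)
    grads = np.array([grad_log_p(x) for x in particles])     # (n, d)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    grad_K = -(2.0 / h) * (K[:, :, None] * (particles[:, None, :] - particles[None, :, :]))
    phi = (K.T @ grads + grad_K.sum(axis=0)) / n
    return particles + stepsize * phi

# Toy posterior standing in for the tyre-parameter posterior: a 2-D Gaussian
cov = np.array([[1.0, 0.6], [0.6, 0.5]])
prec = np.linalg.inv(cov)
grad_log_p = lambda x: -prec @ (x - np.array([2.0, -1.0]))

rng = np.random.default_rng(0)
particles = rng.standard_normal((100, 2))
for _ in range(500):
    particles = svgd_update(particles, grad_log_p)
print("particle mean:", particles.mean(axis=0))              # close to [2, -1]
print("particle covariance:\n", np.cov(particles.T))          # close to cov
```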
Compressive sampling (CS) is a novel signal processing paradigm in which data compression is performed simultaneously with sampling, by measuring a small number of linear functionals of the original signal in the analog domain. Provided the signal is sufficiently sparse under some basis, it is guaranteed that the original signal can be stably reconstructed from significantly fewer measurements than required by the classical sampling theorem, which brings considerable practical convenience. In the field of civil engineering, there are numerous application scenarios for CS, as many civil engineering problems can be formulated as sparse inverse problems with linear measurements. In recent years, CS has seen extensive theoretical development and many practical applications in civil engineering. Inevitable modelling and measurement uncertainties have motivated a Bayesian probabilistic perspective on the inverse problem of CS reconstruction. Furthermore, the advancement of deep learning techniques for efficient representation has also contributed to relaxing the strict sparsity assumption in CS. This paper reviews the advancements and applications of CS in civil engineering, focusing on challenges arising from data acquisition and analysis. The reviewed theories are also applicable to inverse problems in broader scientific fields.
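A minimal compressive-sampling example helps fix ideas: a sparse signal is recovered from a small number of random linear measurements by l1-regularised least squares. The sensing matrix, sparsity level and regularisation weight below are arbitrary illustrative choices, not a civil-engineering measurement setup.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Minimal CS sketch: a signal that is sparse in its own basis is recovered from
# far fewer random linear measurements via l1-regularised least squares (basis
# pursuit denoising / Lasso). In civil-engineering use, the sparsifying basis
# would typically be a Fourier or wavelet dictionary.
rng = np.random.default_rng(0)
n, m, k = 512, 128, 10                        # signal length, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(0, 1, size=k)    # k-sparse signal

Phi = rng.normal(0, 1.0 / np.sqrt(m), size=(m, n))   # random Gaussian sensing matrix
y = Phi @ x_true + 0.01 * rng.standard_normal(m)      # compressed, noisy measurements

lasso = Lasso(alpha=0.001, max_iter=20000)
lasso.fit(Phi, y)
x_hat = lasso.coef_

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```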
Accurately identifying the axle loads of a moving train on a railway bridge can provide reliable information for assessing the safety of the train–bridge system. Bridge weigh-in-motion (BWIM) is an effective approach for identifying the positions and weights of train axles based on monitored bridge responses. Existing BWIM methods generally focus on identifying the most probable value of the axle weights without quantifying the identification uncertainty. To address this issue, a novel two-stage train load identification framework for medium-small railway bridges is developed by combining virtual axle theory and Bayesian inference. In the first stage, the axle configuration of the moving train, including the axle number, axle spacing and axle weights, is estimated according to a modified virtual axle theory in which a clustering algorithm is embedded to automatically determine the axle number. In the second stage, the most probable value (MPV) and the uncertainty of each train axle weight are identified using a Bayesian inference method that takes five types of error patterns into consideration. Finally, the proposed framework is verified using data from numerical simulations and an in-situ railway bridge. Results show that the proposed framework improves the accuracy of train load identification by quantifying the uncertainty of the estimated axle weights, and can provide confidence intervals for the individual axle weights and the gross train weight.
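The second-stage idea, obtaining the MPV and uncertainty of the axle weights, can be sketched for the simplest case of a Gaussian prior and a response that is linear in the weights through an influence-line matrix; the matrix, noise level and prior below are hypothetical, and the five error patterns of the paper are not modelled.

```python
import numpy as np

# Simplified second-stage sketch: given the axle positions from stage one, the
# bridge response is (approximately) linear in the axle weights, so a Gaussian
# likelihood with a Gaussian prior gives the most probable values (MPV) and the
# posterior covariance in closed form. The influence-line matrix H is hypothetical.
rng = np.random.default_rng(0)

n_axles, n_obs = 4, 200
w_true = np.array([140.0, 145.0, 150.0, 148.0])        # axle weights (kN), assumed

# H[t, j]: influence-line ordinate for axle j at time step t (toy triangular shape)
t = np.linspace(0, 1, n_obs)
H = np.column_stack([np.maximum(0.0, 1 - 4 * np.abs(t - c)) for c in (0.2, 0.4, 0.6, 0.8)])

sigma = 2.0                                             # assumed measurement noise std
y = H @ w_true + sigma * rng.standard_normal(n_obs)     # monitored bridge response

# Gaussian prior on the weights, Gaussian likelihood -> Gaussian posterior
mu0 = np.full(n_axles, 120.0)
P0 = np.diag(np.full(n_axles, 50.0**2))

P0_inv = np.linalg.inv(P0)
post_cov = np.linalg.inv(P0_inv + H.T @ H / sigma**2)
post_mean = post_cov @ (P0_inv @ mu0 + H.T @ y / sigma**2)

print("MPV of axle weights:", np.round(post_mean, 1))
print("posterior std:", np.round(np.sqrt(np.diag(post_cov)), 2))
gross_std = np.sqrt(np.ones(n_axles) @ post_cov @ np.ones(n_axles))
print("gross weight 95% interval:",
      np.round(post_mean.sum() + 1.96 * gross_std * np.array([-1, 1]), 1))
```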
Efficient probabilistic prediction of seismic responses is crucial for assessing the seismic-resistant capability of long-span continuous girder high-speed railway bridges. Thus, a Bayesian physics-informed neural network (BPINN) is adopted to rapidly and effectively predict the probabilistic seismic responses of such bridges. The BPINN model combines deep learning with physics to improve the accuracy and consistency of predictions, while also quantifying the uncertainties using Bayesian inference methods. Various seismic excitations, including pulse-like, far-field and near-field types, are employed to probabilistically predict the seismic responses at the top of the bridge pier. Evaluation metrics, including mean squared error and prediction interval coverage probability, are used to assess the deterministic and probabilistic estimates of the BPINN. Results demonstrate that the BPINN performs better in deterministic terms for far-/near-field ground motions than for pulse-like earthquakes, with most cases closely approximating the 95% confidence interval. The flexibility and adaptability of the BPINN in handling different types of ground motions can provide valuable insights for assessing the seismic performance of such structures.
A Bayesian inference technique, able to encompass stochastic nonlinear systems, is described. It is applicable to differential equations with delay and enables values of model parameters, delay, and noise intensity to be inferred from measured time series. The procedure is demonstrated on a very simple one-dimensional model system, and then applied to inference of parameters in the Mackey-Glass model of the respiratory control system based on measurements of ventilation in a healthy subject. It is concluded that the technique offers a promising tool for investigating cardiovascular interactions.
There have been various attempts to improve the reconstruction of gene regulatory networks from microarray data by the systematic integration of biological prior knowledge. Our approach is based on pioneering work by Imoto et al. [11], where the prior knowledge is expressed in terms of energy functions, from which a prior distribution over network structures is obtained in the form of a Gibbs distribution. The hyperparameters of this distribution represent the weights associated with the prior knowledge relative to the data. We have derived and tested a Markov chain Monte Carlo (MCMC) scheme for sampling networks and hyperparameters simultaneously from the posterior distribution, thereby automatically learning how to trade off information from the prior knowledge and the data. We have extended this approach to a Bayesian coupling scheme for learning gene regulatory networks from a combination of related data sets, which were obtained under different experimental conditions and are therefore potentially associated with different active subpathways. The proposed coupling scheme is a compromise between (1) learning networks from the different subsets separately, whereby no information between the different experiments is shared; and (2) learning networks from a monolithic fusion of the individual data sets, which does not provide any mechanism for uncovering differences between the network structures associated with the different experimental conditions. We have assessed the viability of all proposed methods on data related to the Raf signaling pathway, generated both synthetically and in cytometry experiments.
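A toy structure-MCMC sketch conveys how a Gibbs-type prior over network structures can be traded off against a data score; acyclicity constraints, the sampling of the hyperparameter and the Bayesian network scores used in the paper are all omitted, and the data, prior-knowledge matrix and fixed β below are invented for illustration.

```python
import numpy as np

# Toy structure MCMC: Metropolis sampling of network structures under a Gibbs
# prior P(G) ~ exp(-beta * E(G)), where E(G) counts disagreements with a
# prior-knowledge matrix B, combined with a simple per-node Gaussian/BIC data
# score. Acyclicity and joint sampling of beta are omitted for brevity.
rng = np.random.default_rng(0)

n_genes, n_obs = 5, 100
data = rng.standard_normal((n_obs, n_genes))
data[:, 1] += 0.8 * data[:, 0]                 # true edge 0 -> 1
data[:, 3] += 0.7 * data[:, 2]                 # true edge 2 -> 3

B = np.zeros((n_genes, n_genes))               # prior edge beliefs in [0, 1]
B[0, 1] = 0.9                                  # strong prior support for 0 -> 1
beta = 2.0                                     # prior-vs-data trade-off hyperparameter

def local_score(child, parents):
    """BIC-type score of regressing one node on its parents."""
    y = data[:, child]
    X = np.column_stack([np.ones(n_obs)] + [data[:, p] for p in parents])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = max(resid @ resid / n_obs, 1e-12)
    return -0.5 * n_obs * np.log(sigma2) - 0.5 * X.shape[1] * np.log(n_obs)

def log_target(G):
    energy = np.abs(G - B).sum()               # disagreement with prior knowledge
    score = sum(local_score(j, np.flatnonzero(G[:, j])) for j in range(n_genes))
    return -beta * energy + score

G = np.zeros((n_genes, n_genes), dtype=int)
logp = log_target(G)
edge_counts = np.zeros_like(B)
n_iter = 5000
for _ in range(n_iter):
    i = rng.integers(n_genes)
    j = (i + 1 + rng.integers(n_genes - 1)) % n_genes    # pick j != i
    G_prop = G.copy()
    G_prop[i, j] = 1 - G_prop[i, j]            # flip one edge
    logp_prop = log_target(G_prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        G, logp = G_prop, logp_prop
    edge_counts += G
print("posterior edge probabilities:\n", np.round(edge_counts / n_iter, 2))
```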