In this paper, we propose an algebraic approach to the definition of Hidden Markov Processes (HMP) which has two advantages: on the one hand, it prepares the ground for the quantum extension of these processes (discussed in the second part of this paper); on the other hand, it suggests two natural extensions of these processes: one of them highlights a different notion of conditional independence between the observable and the hidden process; the other drops the assumption that the hidden process is Markov. The latter extension is motivated by the examples of hidden, but not hidden Markov, processes produced in the second part of the paper. Furthermore, we prove that any hidden Markov process can be interpreted as a restriction, to a suitable sub-algebra, of a Markov process.
The problem of the damping of the Giant Dipole Resonance (GDR) at finite temperature, T > 2 MeV, is discussed here. The experimental results are based on fusion-evaporation reactions. The most recent results in the mass region A = 132 (Ce isotopes), obtained in exclusive measurements, are compared with the existing results in the A = 110–120 region (Sn isotopes). The comparison with the theoretical predictions based on thermal shape fluctuations is also discussed. The GDR width is found to increase also in the region T > 2 MeV, and this is accounted for by the combined effect of the increase of the compound-nucleus width (shorter lifetime) and the increase of the average deformation of the nucleus.
From the fractal properties of the hadron suggested by the statistical model, the colour factor ratio C_A/C_F has been derived and is found to be in exact agreement with the corresponding QCD prediction.
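For reference, the QCD value against which such a ratio is compared follows directly from the standard SU(3) colour factors; the short derivation below is textbook material and is not taken from the paper itself.

```latex
% Standard SU(N_c) colour factors, with N_c = 3 for QCD
C_A = N_c = 3, \qquad
C_F = \frac{N_c^2 - 1}{2 N_c} = \frac{4}{3},
\qquad\Longrightarrow\qquad
\frac{C_A}{C_F} = \frac{2 N_c^2}{N_c^2 - 1} = \frac{9}{4} = 2.25 .
```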
We have analyzed the available midrapidity (|y| < 0.5) transverse momentum spectra of identified particles such as protons (p), kaons (K), K0S, Λ, Ω and Ξ for different centralities of Pb+Pb collisions at the LHC energy √sNN = 2.76 TeV. We have used our earlier proposed unified statistical thermal freeze-out model. The model incorporates the effect of nuclear transparency in such energetic collisions and the resulting asymmetry in the collective-flow profile along the longitudinal and transverse directions. Our calculated results are found to be in good agreement with the experimental data measured by the ALICE experiment. The model calculation fits the experimental data for the different particle species, providing the thermal freeze-out conditions in terms of temperature and collective-flow parameters. The analysis shows a rise in the thermal freeze-out temperature and a mild decrease in the transverse collective-flow velocity as we go from central to peripheral collisions. The baryon chemical potential is assumed to be nearly zero for the bulk of the matter (μB ≈ 0), a situation expected in heavy-ion collisions at LHC energies in the Bjorken approach owing to nearly complete nuclear transparency. The contributions from the decay of heavier resonances are also taken into account in our calculations.
We compare the parton distributions deduced in the framework of a quantum statistical approach, for both the longitudinal and transverse degrees of freedom, with the unpolarized distributions measured at HERA and with the polarized ones proposed in a previous paper, which have been shown to be in very good agreement also with the results of experiments performed after that proposal. The agreement with HERA data, obtained with values of the “temperature” and the “potentials” very similar to those found in the previous work, gives a robust confirmation of the statistical model. The unpolarized distributions are also compared with the results of NNPDF. The free parameters are fixed mainly by data in the range (0.1, 0.5) for the x variable, where the valence partons dominate, and in the small-x region for the diffractive contribution. This feature makes the parametrization proposed here very attractive.
The Bell inequality is thought to be a common constraint shared by all models of local hidden variables that aim to describe the entangled states of two qubits. Since the inequality is violated by the quantum mechanical description of these states, it purportedly allows distinguishing, in an experimentally testable way, the predictions of quantum mechanics from those of models of local hidden variables and, ultimately, ruling the latter out. In this paper, we show, however, that the models of local hidden variables constrained by the Bell inequality all share a subtle, though crucial, feature that is not required by fundamental physical principles and, hence, might not be fulfilled in the actual experimental setup that tests the inequality. Indeed, the disputed feature cannot be properly implemented within the standard framework of quantum mechanics and is even at odds with the fundamental principle of relativity. Namely, the proof of the inequality requires the existence of a preferred absolute frame of reference (supposedly provided by the lab) with respect to which the hidden properties of the entangled particles and the orientations of each of the measurement devices that test them can be independently defined through a long sequence of realizations of the experiment. We notice, however, that while the relative orientation between the two measurement devices is a properly defined physical magnitude in every single realization of the experiment, their global rigid orientation with respect to a lab frame is a spurious gauge degree of freedom. Following this observation, we were able to explicitly build a model of local hidden variables that does not share the disputed feature and, hence, is able to reproduce the predictions of quantum mechanics for the entangled states of two qubits.
The spin structure of the Λ hyperon is of special importance in analyzing the spin content of other hadrons. Treating hadrons as clusters of quarks and gluons (generally referred to as valence and sea), a statistical approach has been applied to study the spin distribution of the Λ among its quarks. We apply the principle of detailed balance to calculate the probabilities of the various quark–gluon Fock states and check the impact of SU(3) breaking on these probabilities, particularly in the sea for the Fock states containing strange quarks. The flavor probabilities, when multiplied by the spin and color multiplicities of these quark–gluon Fock states, yield estimates of the individual contributions from valence and sea. We conclude that the symmetry breaking significantly affects the polarization of quarks inside the hyperons.
We analyzed the identified-hadron multiplicity predictions of the generalized thermodynamical model of multiparticle production processes with nonextensive statistics. The multiplicities measured recently in LHC experiments seem to be consistent with this approach, with the thermodynamical parameter values already found by analyzing transverse momentum distributions. Information about the mechanism of strangeness suppression has been derived, to some extent, from the multistrange hadron abundances.
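The abstract does not reproduce the nonextensive (Tsallis-type) distribution itself; for orientation only, one commonly used form of the transverse-momentum spectrum is sketched below, where T is the temperature-like parameter, q the nonextensivity parameter, and m_T the transverse mass. The normalization and the exact exponent convention (e.g. −1/(q−1) versus −q/(q−1)) differ between analyses, so this should be read as a generic sketch rather than the model used in the paper.

```latex
\frac{dN}{dy\, p_T\, dp_T} \;\propto\;
\left[\, 1 + (q-1)\,\frac{m_T - m}{T} \,\right]^{-\frac{1}{q-1}},
\qquad m_T = \sqrt{p_T^2 + m^2}.
```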
In this paper we investigate the combined effect of quantization and clipping on multi-layer feedforward neural networks (MLFNN). Statistical models are used to analyze the effects of quantization in a digital implementation. We analyze the resulting performance degradation as a function of the number of fixed-point and floating-point quantization bits in the MLFNN. To analyze a true nonlinear neuron, we adopt the uniform and normal probability distributions, compare the training performances with and without weight clipping, and derive in detail the effect of the quantization error on forward and backward propagation. Regardless of the distribution the initial weights comply with, the weight distribution will approximate a normal distribution when training with floating-point or high-precision fixed-point quantization. Only when the number of quantization bits is very low may the weight distribution cluster toward ±1 when training with fixed-point quantization. We establish and analyze, for a true nonlinear neuron, the relationships between input and output bit resolution, the number of network layers, and the performance degradation, based on statistical models of on-chip and off-chip training. Our simulation results verify the presented theoretical analysis.
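As a minimal illustration of the kind of fixed-point quantization with weight clipping analyzed here (a sketch under simplified assumptions, not the authors' implementation; the function name, clipping range, and bit allocation are hypothetical):

```python
import numpy as np

def quantize_fixed_point(w, n_bits, clip=1.0):
    """Clip weights to [-clip, clip] and round them to a signed
    fixed-point grid with n_bits bits (1 sign bit plus fractional bits).
    Saturation at the clip value is a deliberate simplification."""
    w = np.clip(w, -clip, clip)
    step = clip / (2 ** (n_bits - 1))   # quantization step size
    return np.round(w / step) * step

# Example: weights drawn from a normal distribution, quantized to 8 and 3 bits
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=10_000)
for bits in (8, 3):
    wq = quantize_fixed_point(w, bits)
    print(bits, "bits -> quantization MSE:", np.mean((w - wq) ** 2))
```

Lower bit counts give a coarser grid and a larger mean-squared quantization error, which is the degradation the paper relates to bit resolution and network depth.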
In this paper we present a novel technique for non-rigid medical image registration and correspondence finding based on a multiple-layer flexible mesh template matching technique. A statistical anatomical model is built in the form of a tetrahedral mesh, which incorporates both shape and density properties of the anatomical structure. After the affine transformation and global deformation of the model are computed by optimizing an energy function, multiple-layer flexible mesh template matching is applied to find the vertex correspondence and achieve local deformation. The multiple-layer structure of the template can be used to describe anatomical features at different scales; furthermore, the template matching is flexible, which makes the correspondence finding robust. A leave-one-out validation has been conducted to demonstrate the effectiveness and accuracy of our method.
The performance of electrical connectors can be significantly impacted by periodic variations in contact resistance caused by vibrational stress. Intermittent faults resulting from such stress are characterized by their random and fleeting nature, making it difficult to study and replicate them. This paper proposes a novel method for reproducing intermittent faults in electrical connectors. To implement this method, intermittent fault data are first collected from electrical connectors subjected to different vibration loads. Next, a statistical distribution model is constructed using kernel density estimation (KDE). Based on this model, a fault injector is designed to simulate intermittent faults under varying vibration loads. The simulated faults are then compared to real-world intermittent fault signals in a controlled environment to validate the accuracy of the method. The results demonstrate that the proposed method effectively reproduces intermittent faults in electrical connectors under varying vibration conditions. This approach can be used to better understand the behavior of connectors under vibrational stress and to develop more effective testing and fault diagnosis methods.
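A minimal sketch of the KDE step described above, assuming hypothetical contact-resistance spike data (the variable names and numbers are illustrative, not from the paper): the measured fault amplitudes are fitted with a kernel density estimate, and the fault injector then draws synthetic amplitudes from that fitted model.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical measured data: contact-resistance spikes (ohms) recorded under
# one vibration load; in practice these would come from the connector test rig.
rng = np.random.default_rng(1)
measured_spikes = np.abs(rng.normal(0.05, 0.02, size=500))

# Build the statistical distribution model with kernel density estimation.
kde = gaussian_kde(measured_spikes)

# The fault injector samples synthetic intermittent-fault amplitudes from the
# fitted model instead of replaying the recorded data verbatim.
synthetic_spikes = kde.resample(1000).ravel()
print("mean measured:", measured_spikes.mean(),
      "mean synthetic:", synthetic_spikes.mean())
```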
In this paper, we propose an automatic analysis and transformation approach for extracting L-system grammars from real plants. Instead of using manually designed rules and cumbersome parameters, our method establishes the relationship between L-system grammars and the iterative trend of botanical entities, which reflects the endogenous factors that drive the plant branching process. To realize this goal, we use a digital camera to take multiple images of unfoliaged (leafless) plants and capture the topological and geometrical data of plant entities using image processing methods. The data are then stored in specific data structures. A hidden Markov based statistical model is then employed to reveal the hidden relations among plant entities, which have been classified into categories based on their statistical properties extracted by a classic EM algorithm; these hidden relations are then integrated into the target L-system as grammars. Results show that our method is capable of automatically generating L-grammars for a given unfoliaged plant regardless of its branching type.
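As an illustration of the EM-based classification step mentioned above (not the authors' pipeline): hypothetical per-branch features, such as segment length and branching angle, can be grouped into categories with a Gaussian mixture model fitted by the EM algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # fitted internally with the EM algorithm

rng = np.random.default_rng(2)
# Hypothetical per-branch features extracted from images: (length, branching angle)
features = np.vstack([
    rng.normal([10.0, 30.0], [2.0, 5.0], size=(100, 2)),  # e.g. main axes
    rng.normal([4.0, 60.0], [1.0, 8.0], size=(200, 2)),   # e.g. lateral shoots
])

# EM-based classification of plant entities into categories
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
categories = gmm.predict(features)
print("branches per category:", np.bincount(categories))
```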
The decay of highly excited nuclei is described as a sequence of binary processes involving emission of fragments in their ground, excited-bound and unbound states. Primary and secondary decay products together lead to the final mass distributions. Mass splittings ranging from asymmetric ones involving nucleon emission up to symmetric binary ones are treated according to a generalized Weisskopf evaporation formalism. This procedure is implemented in the Monte-Carlo multi-step statistical model code MECO (Multisequential Evaporation COde). We examine the evolution of the calculated final mass distributions in the decay of a light compound nucleus as the initial excitation energy increases towards the limit of complete dissociation. Comparisons are made with the predictions of the transition-state theory, as well as with a consistent Weisskopf treatment in which the decay process is described by rate equations for the generation of the different fragment species.
This review focuses on nuclear reactions in astrophysics and, more specifically, on reactions with light ions (nucleons and α particles) proceeding via the strong interaction. It is intended to present the basic definitions essential for studies in nuclear astrophysics, to point out the differences between nuclear reactions taking place in stars and in a terrestrial laboratory, and to illustrate some of the challenges to be faced in theoretical and experimental studies of those reactions. The discussion revolves around the relevant quantities for astrophysics, which are the astrophysical reaction rates. The sensitivity of the reaction rates to the uncertainties in the prediction of various nuclear properties is explored and some guidelines for experimentalists are also provided.
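For context, the astrophysical reaction rate referred to here is conventionally the Maxwell–Boltzmann average of the cross section over the thermal energy distribution in the stellar plasma; this standard definition (not specific to this review) reads:

```latex
% Maxwell--Boltzmann averaged reaction rate per particle pair,
% with reduced mass \mu, temperature T and cross section \sigma(E)
\langle \sigma v \rangle \;=\;
\sqrt{\frac{8}{\pi \mu}}\;\frac{1}{(kT)^{3/2}}
\int_0^{\infty} \sigma(E)\, E \,
\exp\!\left(-\frac{E}{kT}\right) dE .
```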
Cross-sections for 40Ca + α at low energies have been calculated using two different models and three different α-nucleus potentials. The first model determines the cross-sections from the barrier transmission in a real nuclear potential. The second derives the cross-sections within the optical model (OM) using a complex nuclear potential. The excitation functions from barrier transmission are smooth, whereas the excitation functions from the OM show a significant sensitivity to the chosen imaginary potential. Cross-sections far below the Coulomb barrier are lower from barrier transmission than from the OM. This difference is explained by additional absorption in the tail of the imaginary part of the potential in the OM. At higher energies, the calculations from the two models and all α-nucleus potentials converge. Finally, in contradiction to another recent study where a double-folding potential failed in a WKB calculation, the applicability of double-folding potentials for 40Ca + α at low energies is clearly confirmed in the present analysis, both for the simple barrier transmission model and for the full OM calculation.
The effective reproduction number R(t), the average number of secondary cases generated by a single primary case at calendar time t, plays a critical role in interpreting the temporal transmission dynamics of an infectious disease epidemic, while the case fatality risk (CFR) is an indispensable measure of the severity of disease. In many instances, R(t) is estimated using the reported number of cases (i.e., the incidence data), but such reports often do not arrive on time, and moreover, the rate of diagnosis can change as a function of time, especially for diseases that involve a substantial number of asymptomatic and mild infections and large outbreaks that go beyond the local reporting capacity. In addition, the CFR is well known to be prone to ascertainment bias and is often erroneously overestimated. In this paper, we propose a joint estimation method for R(t) and the CFR of Ebola virus disease (EVD), analyzing the early epidemic data of EVD from March to October 2014 and addressing the ascertainment bias in real time. To assess the reliability of the proposed method, coverage probabilities were computed. When ascertainment effort plays a role in interpreting the epidemiological dynamics, it is useful to analyze not only reported (confirmed or suspected) cases, but also the temporal distribution of deceased individuals, to avoid any strong impact of time-dependent changes in diagnosis and reporting.
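To make the quantity R(t) concrete, the following is a generic renewal-equation sketch of the kind commonly used to estimate an effective reproduction number from incidence data; it is not the authors' joint R(t)/CFR estimator, and the case counts and serial-interval distribution below are hypothetical.

```python
import numpy as np

def effective_R(incidence, serial_interval_pmf):
    """Crude renewal-equation estimate:
    R(t) = I(t) / sum_s w(s) * I(t - s), with w the serial-interval pmf."""
    R = np.full(len(incidence), np.nan)
    for t in range(1, len(incidence)):
        s = np.arange(1, min(t, len(serial_interval_pmf)) + 1)
        denom = np.sum(serial_interval_pmf[s - 1] * incidence[t - s])
        if denom > 0:
            R[t] = incidence[t] / denom
    return R

# Hypothetical weekly case counts and an assumed serial-interval distribution
cases = np.array([2, 3, 5, 9, 14, 22, 30, 41, 50, 66], dtype=float)
w = np.array([0.2, 0.5, 0.3])   # pmf over lags of 1, 2, 3 weeks (assumed)
print(np.round(effective_R(cases, w), 2))
```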
In clinical research, knowledge of the mechanical behavior of bones is helpful for diagnostic and therapeutic processes, and the failure of compact bone is an essential subject of study in clinical analysis, accidentology, and traumatology. The purpose of this paper is to analyse the failure properties of compact bone using a statistical model to interpret stress and strain measurements obtained with INSTRON and X-ray scanner devices. Samples were prepared from the lamellar structure of compact bovine bones, and the density of each sample was controlled and taken to be constant (1.9 g/cm³). The experimental results thus depend only on defects in the samples. This model may help physicians and surgeons predict bone failure when inserting a prosthesis, for example.
Metrologists are increasingly being faced with challenges in statistical data analysis and modeling, data reduction, and uncertainty evaluation that require an ever more demanding and comprehensive analytical and computational toolkit, as well as a strategy for communication of more complex results. For example, conventional assumptions of Gaussian (or normal) measurement errors may not apply, which then necessitates alternative procedures for uncertainty evaluation.
This contribution, aimed at metrologists whose specialized knowledge is in a particular area of science, and whose prior study of topics in probability or statistics will have been merely introductory, provides illustrative examples and suggestions for self-study. These examples aim to empower metrologists to attain a working level of concepts, statistical methods, and computational techniques from these particular areas, to become self-sufficient in the statistical analysis and modeling of their measurement data, and to feel comfortable evaluating, propagating, and communicating the associated measurement uncertainty.
The contribution also addresses modern computational requirements in measurement science. Since it is becoming clear to many metrologists that tools like Microsoft Excel, LibreOffice Calc, or Apple’s Numbers often are insufficiently flexible to address emerging needs, or simply fail to provide required specialized tools, this contribution includes accompanying R code with detailed explanations that will guide scientists through the use of a new computing tool.
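The contribution's accompanying code is in R; purely as an illustration of the kind of alternative, non-Gaussian uncertainty evaluation mentioned above, the following sketch propagates uncertainty through a hypothetical measurement model by Monte Carlo sampling (the model and numbers are invented for the example).

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical measurement model: Y = a / b, where the error in a is Gaussian
# but the error in b follows an asymmetric (lognormal) law, so a Gaussian-only
# uncertainty budget would be inadequate.
a = rng.normal(10.0, 0.2, N)
b = rng.lognormal(mean=np.log(2.0), sigma=0.05, size=N)
y = a / b

# Monte Carlo evaluation: estimate, standard uncertainty, 95 % coverage interval
print("y      =", y.mean())
print("u(y)   =", y.std(ddof=1))
print("95 % interval:", np.percentile(y, [2.5, 97.5]))
```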
To estimate the probabilities of HIV infection and HIV seroconversion, and to compare HIV seroconversion across different populations, in this paper we have developed statistical models and state space models for HIV infection and seroconversion. By combining these models with multi-level Gibbs sampling procedures, we have developed efficient methods to estimate these probabilities simultaneously with the state variables and other unknown parameters. By using the complete likelihood function, we have also developed a generalized likelihood ratio test for comparing several HIV seroconversion distributions. As an illustration, we have applied the models and the methods to data generated by the cooperative study on HIV among injection drug users (IDU) and crack cocaine users by the National Institute on Drug Abuse/NIH. Our results show that there are significant differences in HIV seroconversion and HIV infection between populations of IDU, homosexuals and individuals with both IDU and homosexual behavior. For homosexuals, IDU, and homosexuals with IDU, the probability density functions of the times to HIV infection and HIV seroconversion are bimodal curves with two peaks and with heavy weight on the right. The average window period is about 2.75 months. Also, there are significant differences between the death and retirement rates of S (susceptible) people and I (infected but not seroconverted) people in all populations.
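To illustrate the likelihood ratio idea in its simplest form (a deliberately simplified sketch, using exponential waiting times rather than the bimodal distributions and multi-level Gibbs sampler described above): two hypothetical samples of times to seroconversion are compared by testing a common rate against separate rates.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
# Hypothetical times to seroconversion (months) in two populations
t1 = rng.exponential(3.0, 200)
t2 = rng.exponential(4.0, 150)

def exp_loglik(t, rate):
    """Log-likelihood of an exponential model with the given rate."""
    return np.sum(np.log(rate) - rate * t)

# H0: common exponential rate; H1: separate rates (MLE of the rate is 1/mean)
pooled = np.concatenate([t1, t2])
ll0 = exp_loglik(pooled, 1.0 / pooled.mean())
ll1 = exp_loglik(t1, 1.0 / t1.mean()) + exp_loglik(t2, 1.0 / t2.mean())
lr = 2.0 * (ll1 - ll0)
print("LR statistic:", lr, "p-value:", chi2.sf(lr, df=1))
```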
We investigate the α-family of almost Kähler structures on the tangent bundle TS over a statistical model S. We show that TS becomes a Kähler manifold of constant holomorphic sectional curvature in particular cases.