The principle of stationary variance is advocated as a viable variational approach to quantum field theory (QFT). The method is based on the principle that the variance of the energy should be at a minimum when the state of a quantum system is at its best approximation to an eigenstate. While not very popular in quantum mechanics (QM), the method is shown to be valuable in QFT, and three examples are given in very different areas, ranging from the Heisenberg model of antiferromagnetism (AF) to quantum electrodynamics (QED) and gauge theories.
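A minimal sketch of the stationary-variance idea on a toy Hamiltonian: the energy variance ⟨H²⟩ − ⟨H⟩² vanishes exactly on eigenstates, so minimizing it over a trial family singles out approximate eigenstates. The 4×4 random Hamiltonian and the trial parametrization below are illustrative assumptions, not the paper's field-theoretic setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                    # a random real symmetric "Hamiltonian"

def energy_variance(params):
    psi = params / np.linalg.norm(params)   # normalized trial state
    e = psi @ H @ psi                        # <H>
    e2 = psi @ H @ H @ psi                   # <H^2>
    return e2 - e**2

res = minimize(energy_variance, rng.normal(size=4))
psi = res.x / np.linalg.norm(res.x)
print("variance at optimum:", res.fun)       # ~0 on an eigenstate
print("<H> at optimum:", psi @ H @ psi)
print("exact eigenvalues:", np.linalg.eigvalsh(H))
```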
The bound-state solutions of the Schrödinger equation (SE) under a deformed hyperbolic potential are used in this paper to investigate the correlation between information measures, such as the Shannon entropies and the Fisher information, and the variance of a quantum system in both momentum and position spaces. The variance was obtained from the expectation moments in the conjugate spaces and used to compute the uncertainty products, Fisher information products and Shannon entropic sums for different potential parameters. These information measures were observed to vary with the potential parameters and to obey their lower-bound inequalities. We propose a relationship between the Shannon entropies, the Fisher information and the variance in which an increase in the Fisher information leads to a lower Shannon entropy, i.e. a narrower spread of the probability distribution and a lower uncertainty, while a decrease in the Fisher information results in a higher uncertainty, and vice versa. The numerical results obtained with the proposed relations agree with those of the most widely used method in the literature. The present work notably simplifies the calculation of the Shannon entropies, especially for complex potential functions.
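A sketch of the three information measures involved, computed on a grid for a stand-in density (a Gaussian ground state rather than the paper's deformed hyperbolic potential, whose closed form is not reproduced here):

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
rho = np.exp(-x**2) / np.sqrt(np.pi)        # |psi(x)|^2, normalized

mean = np.trapz(x * rho, x)
variance = np.trapz((x - mean)**2 * rho, x)
shannon = -np.trapz(rho * np.log(rho), x)   # S_x = -∫ ρ ln ρ dx
drho = np.gradient(rho, dx)
fisher = np.trapz(drho**2 / rho, x)         # I_x = ∫ ρ'(x)² / ρ dx

print(variance, shannon, fisher)
# For this Gaussian: variance = 1/2, S_x = (1 + ln π)/2 ≈ 1.072, I_x = 2,
# so I_x * variance = 1, saturating the Cramér–Rao-type lower bound.
```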
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos: spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of the spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and the IV, can also produce persistent oscillations of the global level number variance (GV) and better describes the saturation level rigidity as a function of the running energy. Overall, PA is the most reliable method across a wide range of statistics.
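A sketch of how an interval level number variance is computed from an unfolded spectrum (mean spacing 1): count levels in windows of length L and take the variance of the counts over window positions. A Poissonian spectrum is used as a stand-in; the paper's RSA and PA averaging procedures are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.cumsum(rng.exponential(size=200_000))   # unfolded Poisson spectrum

def number_variance(levels, L, n_windows=20_000):
    starts = rng.uniform(levels[0], levels[-1] - L, n_windows)
    counts = np.searchsorted(levels, starts + L) - np.searchsorted(levels, starts)
    return counts.var()

for L in (1.0, 5.0, 20.0):
    print(L, number_variance(levels, L))   # ≈ L for Poisson statistics
```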
Providing leading indicators of catastrophic regime shifts in ecosystems is fundamental to designing management protocols for those systems. Here we address the problem of lake eutrophication (that is, nutrient enrichment leading to algal blooms) using a simple spatial lake model. We discuss and compare different spatial and temporal early warning signals announcing the catastrophic transition of an oligotrophic lake to eutrophic conditions. In particular, we consider the spatial variance and the associated patchiness of eutrophic water regions. We find that the spatial variance increases as the lake approaches the point of transition to the eutrophic state. We also analyze the spatial and temporal early warnings in terms of the amount of information required by each and their respective forewarning times. From the consideration of the different remedial procedures that can be followed after these early signals, we conclude that some of these indicators are not early enough to avert the undesired impending shift.
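A sketch of how the spatial-variance indicator would be computed from grid snapshots. The toy "model" below simply inflates local fluctuations as a nutrient-load parameter approaches a threshold; it is only meant to show the indicator, not the paper's lake dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)

def snapshot(nutrient_load, shape=(64, 64), critical_load=1.0):
    # local fluctuations grow as the transition is approached (assumption)
    scale = 1.0 / np.sqrt(max(critical_load - nutrient_load, 1e-3))
    return 0.2 + scale * rng.normal(scale=0.05, size=shape)

for load in (0.2, 0.5, 0.8, 0.95, 0.99):
    field = snapshot(load)
    print(f"load={load:.2f}  spatial variance={field.var():.4f}")
```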
We look at the issue of obtaining a variance-like measure associated with probability distributions over ordinal sets. We call these dissonance measures. We specify some general properties desired in these dissonance measures and point out the centrality of the cumulative distribution function in formulating the concept of dissonance. We introduce some specific examples of measures of dissonance.
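A sketch of one natural CDF-based dissonance measure for a distribution over ordered categories: sum F_j (1 − F_j) over the cumulative distribution F. It is 0 for a point mass and maximal when the mass is split between the two extreme categories. This particular formula is an illustrative choice, not necessarily one of the paper's measures.

```python
def dissonance(p):
    F, acc = [], 0.0
    for pj in p:                 # build the cumulative distribution
        acc += pj
        F.append(acc)
    return sum(Fj * (1.0 - Fj) for Fj in F)

print(dissonance([1.0, 0.0, 0.0, 0.0]))     # 0.0  (no dissonance)
print(dissonance([0.5, 0.0, 0.0, 0.5]))     # 0.75 (maximal for 4 categories)
print(dissonance([0.25, 0.25, 0.25, 0.25])) # 0.625
```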
The paper considers, together, the analytical solution methods for the problems of maximizing entropy or minimizing variance at a fixed orness level and of maximizing orness at a fixed entropy or variance value. It proves that both kinds of problems share common necessary conditions for their optimal solutions. The optimal solutions have the same form and can be seen as the same OWA (ordered weighted averaging) weighting vectors viewed from different perspectives. The problems of minimizing orness under fixed entropy or variance constraints, together with their analytical solutions, are also proposed. These conclusions are then extended to the corresponding RIM (regular increasing monotone) quantifier problems, which can be seen as the continuous case of the OWA problems with free dimension. The analytical optimal solutions are obtained with variational methods.
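A sketch of the three OWA statistics that appear in these optimization problems, for a weighting vector w of dimension n (weights summing to 1): orness(w) = Σᵢ ((n−i)/(n−1)) wᵢ, the entropy (dispersion) −Σ wᵢ ln wᵢ, and the variance of the weights about 1/n.

```python
import math

def orness(w):
    n = len(w)
    return sum((n - i) / (n - 1) * wi for i, wi in enumerate(w, start=1))

def entropy(w):
    return -sum(wi * math.log(wi) for wi in w if wi > 0)

def variance(w):
    n = len(w)
    return sum((wi - 1.0 / n) ** 2 for wi in w) / n

w = [0.4, 0.3, 0.2, 0.1]
print(orness(w), entropy(w), variance(w))
# The maximum-entropy problem fixes orness(w) and maximizes entropy(w);
# the minimum-variance problem fixes orness(w) and minimizes variance(w).
```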
Interval multi-objective linear programming (IMOLP) models are one way to tackle uncertainty. In this paper, we propose two methods to determine the efficient solutions of IMOLP models through expected-value, variance and entropy operators, which have good properties. One of the most important properties of these methods is that they yield different sets of efficient solutions according to the decision makers' preferences and the available information. We first develop the concepts of the expected-value, variance and entropy operators on the set of intervals and study some of their properties. Then, we present an IMOLP model with uncertain parameters in the objective functions. In the first method, we use the expected-value and variance operators in the IMOLP model and then apply the weighted-sum method to convert the IMOLP model into a multi-objective non-linear programming (MONLP) model. In the second method, the IMOLP model is converted, using the expected-value, variance and entropy operators, into a multi-objective linear programming (MOLP) model. The proposed methods are applicable to large-scale models. Finally, to illustrate the efficiency of the proposed methods, numerical examples and two real-world models are solved.
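A sketch of expected-value and variance operators on an interval [a, b]. One common choice, shown here purely as an assumption (the paper defines its own operators), treats the interval as a uniform distribution: E([a,b]) = (a+b)/2 and V([a,b]) = (b−a)²/12. The stand-ins only illustrate how interval objective coefficients can be scalarized before a weighted-sum step.

```python
def expected(a, b):
    return (a + b) / 2.0

def var(a, b):
    return (b - a) ** 2 / 12.0

# An interval objective  [1,3]·x1 + [2,6]·x2  could then be replaced by the
# crisp pair of objectives built from E and V coefficients:
coeffs = [(1, 3), (2, 6)]
e_obj = [expected(a, b) for a, b in coeffs]   # [2.0, 4.0]
v_obj = [var(a, b) for a, b in coeffs]        # [1/3, 4/3]
print(e_obj, v_obj)
```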
The basic paradigm for decision making under uncertainty is introduced. A methodology is suggested for calculating the variance associated with each of the alternatives in the case where the uncertainty is not necessarily of a probabilistic nature.
This paper provides an approach for assessing the uncertainty associated with the estimate of the availability of a two-state repairable system. During the design stage it is often necessary to allocate scarce testing resources among the various components in an efficient manner. Although there is a variety of importance and uncertainty measures for the reliability of a system, there are few such measures for system availability. This study attempts to fill that gap and to provide insights into techniques for efficiently reducing the variance of a system-level availability estimate. The variance importance measure is constructed so that it quantifies the improvement in the variance of the system-level availability estimate obtained by reducing the variance of the individual component availability estimates. In addition, a cost model is developed that trades off cost and uncertainty. The measure is illustrated for five common system structures, and Monte Carlo simulation is used to demonstrate the assessment tools on a specific problem. The results are consistent with reliability importance measures.
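A sketch of one way such a variance importance measure can be computed, via the delta method: Var(A_sys) ≈ Σᵢ (∂A_sys/∂Aᵢ)² Var(Aᵢ), so each summand is a component's contribution, and testing that shrinks Var(Aᵢ) shrinks the system-level variance proportionally. The two-component series system (A_sys = A₁A₂) and the numbers below are illustrative assumptions, not the paper's construction.

```python
a = [0.95, 0.90]                 # component availability estimates
var_a = [0.0004, 0.0009]         # variances of those estimates

grads = [a[1], a[0]]             # partial derivatives of A_sys = a1 * a2

contrib = [g**2 * v for g, v in zip(grads, var_a)]
print("system availability:", a[0] * a[1])
print("variance contributions:", contrib)    # component 2 dominates here
print("approx Var(A_sys):", sum(contrib))
```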
A stable dynamic system implies safety, reliability, and satisfactory performance. However, determining stability is very difficult when the system is nonlinear and the ever-present uncertainties in the components must be considered. Here, a response-based approach that uses both system and time information obtained through singular value decomposition is presented to determine the stability space of nonlinear, uncertain dynamic systems; any approximating linearization of the nonlinearities is obviated. The approach extends previous work for linear systems that invoked only the variability of the left singular vectors to predict stability. In the new approach, the variability of the right singular vectors is combined with that of the left singular vectors, and it is shown that a simulation time span as short as two or three periods is sufficient to predict stability over the entire lifetime dynamics, rendering the method very efficient. The stability space is a subset of the design space, and its robustness is proportional to the tolerances assigned to the random design variables. Errors due to sampling size, time increments and the number of singular vectors used are controllable. The method can be implemented with readily available software. A study of a practical engineering system with different tolerances and different time spans shows the efficacy of the proposed approach.
Measurement devices always add noise to the signal of interest, and it is necessary to evaluate the variance of the results. This article focuses on stationary random processes whose power spectral density is a power law of frequency. For flicker noise, which behaves as 1/f and is present in many different phenomena, the usual way of computing the variance leads to infinite values. This article proposes an alternative definition of the variance which takes into account the fact that measurement devices need to be calibrated. This new variance, which depends on the calibration duration, the measurement duration and the time elapsed between the calibration and the measurement, avoids the infinite values that otherwise arise when computing the variance of a measurement.
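A numeric sketch of the calibration-referenced idea: while the ordinary variance of 1/f noise diverges, the variance of (mean over the measurement window) − (mean over an earlier calibration window) stays finite. Flicker noise is synthesized by FFT spectral shaping; the window durations below are illustrative, not the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1 << 16

def flicker(n):
    white = rng.normal(size=n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # avoid division by zero at DC
    spec /= np.sqrt(f)               # shape power spectrum ~ 1/f
    return np.fft.irfft(spec, n)

cal, gap, meas = 1000, 5000, 1000    # calibration, dead time, measurement
diffs = []
for _ in range(400):
    x = flicker(N)
    diffs.append(x[cal + gap : cal + gap + meas].mean() - x[:cal].mean())
print("calibration-referenced variance:", np.var(diffs))
```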
We consider a class of spiking neuron models defined by a set of conditions typical of basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, and also of some artificial neurons. A neuron is fed with a point renewal process. A relation is derived between three probability density functions (PDFs): (i) the PDF of input interspike intervals (ISIs), (ii) the PDF of output ISIs of a neuron with feedback, and (iii) the PDF for the same neuron without feedback. This makes it possible to calculate any one of the three PDFs provided the remaining two are given. A similar relation between the corresponding means and variances is derived. The relations are checked exactly for the binding neuron model stimulated with a Poisson stream.
We consider a class of spiking neuronal models, defined by a set of conditions typical of basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, and also of some artificial neurons. A neuron is fed with a Poisson process. Each output impulse is applied to the neuron itself after a finite delay Δ. This impulse acts as if delivered through a fast Cl-type inhibitory synapse. We derive a general relation which allows the probability density function (pdf) p(t) of the output interspike intervals of a neuron with feedback to be calculated exactly from the known pdf p0(t) for the same neuron without feedback and from the properties of the feedback line (the value of Δ). Similar relations between the corresponding moments are derived.
Furthermore, we prove that the initial segment of the pdf p0(t) for a neuron with a fixed threshold level is the same for any neuron satisfying the imposed conditions and is completely determined by the input stream. For a Poisson input stream, we calculate that initial segment exactly and, based on it, obtain exactly the initial segment of the pdf p(t) for a neuron with feedback. That is, the initial segment of p(t) is model-independent as well. The obtained expressions are checked by means of Monte Carlo simulation. The course of p(t) has a pronounced peculiarity which makes it impossible to approximate p(t) by a Poisson or another simple stochastic process.
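A Monte Carlo sketch of this kind of setup: a binding-type neuron (threshold 2, memory tau) driven by a Poisson stream, with each output impulse fed back after delay Δ as a fast inhibitory event that clears any stored input. The parameters, the "clear on inhibition" rule, and the one-impulse feedback line are modeling assumptions; a histogram of the recorded output ISIs estimates the pdf p(t) discussed above.

```python
import numpy as np

rng = np.random.default_rng(4)
LAM, TAU, DELTA, T_END = 2.0, 0.5, 0.3, 200_000.0

t, stored, fb_time = 0.0, None, None   # stored input time; pending feedback
spikes = []
while t < T_END:
    t += rng.exponential(1.0 / LAM)            # next input impulse
    if fb_time is not None and fb_time <= t:   # feedback arrived first:
        stored = None                          #   inhibition clears storage
        fb_time = None
    if stored is not None and t - stored <= TAU:
        spikes.append(t)                       # 2nd impulse within tau: fire
        stored = None
        if fb_time is None:                    # line holds a single impulse
            fb_time = t + DELTA
    else:
        stored = t                             # store this impulse

isi = np.diff(spikes)
print(len(isi), isi.mean(), isi.var())
# np.histogram(isi, bins=200, density=True) estimates the ISI pdf p(t).
```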
The paper compares the accuracy of two different three-point interpolated discrete Fourier transform algorithms (M3IpDFT and C3IpDFT) in the presence of Gaussian white noise. The two algorithms use the magnitude and the complex value of the spectral lines for interpolation, respectively. Theoretical expressions for the variance of the frequency estimate due to noise are derived and then verified by simulations. From the theoretical and simulation results, the variance of the frequency estimate is inversely proportional to the signal-to-noise ratio (SNR) and to the length of the DFT. Simulations show that C3IpDFT outperforms M3IpDFT.
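A sketch of a magnitude-based three-point IpDFT frequency estimate and its empirical variance under white Gaussian noise. The interpolation formula used here, delta = 2(A[k+1] − A[k−1]) / (A[k−1] + 2A[k] + A[k+1]), is the standard three-point form for a Hann window; the exact M3IpDFT/C3IpDFT formulas may differ.

```python
import numpy as np

rng = np.random.default_rng(5)
N, f0, snr_db = 1024, 100.37, 30.0            # true frequency f0 in bins

def estimate(noise_std):
    n = np.arange(N)
    x = np.cos(2 * np.pi * f0 * n / N) + rng.normal(scale=noise_std, size=N)
    X = np.abs(np.fft.rfft(x * np.hanning(N)))
    k = X.argmax()
    delta = 2 * (X[k+1] - X[k-1]) / (X[k-1] + 2 * X[k] + X[k+1])
    return k + delta                           # frequency estimate in bins

noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)  # unit-amplitude tone at this SNR
est = np.array([estimate(noise_std) for _ in range(500)])
print("bias:", est.mean() - f0, "variance:", est.var())
```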
Let f: 𝕊¹ → 𝕊¹ be a C^{2+ε} expanding map of the circle and let v: 𝕊¹ → ℝ be a C^{1+ε} function. Consider the twisted cohomological equation v = α∘f − Df·α, which has a unique bounded solution α. We show that α is either C^{1+ε} or continuous but nowhere differentiable. If α is nowhere differentiable, then the Newton quotients of α, after an appropriate normalization, converge in distribution (with respect to the unique absolutely continuous invariant probability of f) to the normal distribution. In particular, α is not Lipschitz continuous on any subset of positive Lebesgue measure.
For a rotation by an irrational α on the circle and a BV function φ, we study the variance of the ergodic sums S_L φ(x) := ∑_{j=0}^{L−1} φ(x + jα). When α is not of constant type, we construct sequences (L_N) such that, at some scale, the ergodic sums S_{L_N} φ satisfy an almost sure invariance principle (ASIP). Explicit non-degenerate examples are given, with an application to the rectangular periodic billiard in the plane.
The major focus of this study is to describe the structure of a solution for robustly detecting and delineating events in the arterial blood pressure (ABP) signal. To this end, the original ABP signal is first pre-processed by applying the à trous discrete wavelet transform (DWT) to extract several dyadic scales. A fixed-size sliding window is then moved over an appropriately selected scale, and in each slide six features are calculated from the excerpted segment: the sum of the nonlinearly amplified Hilbert transform, the sum of the absolute first-order differences, the sum of the absolute second-order differences, the curve length, the area and the variance. All feature trends are normalized and combined, via a linear orthonormal projection, into a newly proposed principal-component-analyzed geometric index (PCAGI), which serves as the segmentation decision statistic (DS). After an adaptive nonlinear transformation is applied to make the DS baseline stationary, the histogram parameters of the enhanced DS are used to tune the α-level Neyman–Pearson classifier for false-alarm-probability (FAP)-bounded delineation of the ABP events. To illustrate the capabilities of the presented algorithm, it was applied to all 18 subjects of the MIT-BIH Polysomnographic Database (359,000 beats); the end-systolic and end-diastolic locations of the ABP signal as well as the dicrotic notch pressure were extracted, and a sensitivity of Se = 99.86% and a positive predictivity of P+ = 99.95% were obtained for the detection of all ABP events. Important merits of the proposed PCAGI-based ABP event detection-segmentation algorithm include high robustness against measurement noise, acceptable detection-delineation accuracy in the presence of severe valvular and arrhythmic heart dysfunction at a tolerable computational burden (processing time), and the absence of any dependence of its parameters on the acquisition sampling frequency.
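A sketch of the six per-window features described above, computed for one excerpt of a (wavelet-scale) signal. Using the squared Hilbert envelope as the nonlinear amplification is an assumption; the à trous DWT stage and the PCA projection into the PCAGI are not shown.

```python
import numpy as np
from scipy.signal import hilbert

def window_features(seg, fs=250.0):
    d1 = np.diff(seg)
    d2 = np.diff(seg, n=2)
    dt = 1.0 / fs
    return np.array([
        np.sum(np.abs(hilbert(seg)) ** 2),    # amplified Hilbert envelope
        np.sum(np.abs(d1)),                   # abs first-order differences
        np.sum(np.abs(d2)),                   # abs second-order differences
        np.sum(np.sqrt(dt**2 + d1**2)),       # curve length
        np.trapz(np.abs(seg), dx=dt),         # area
        np.var(seg),                          # variance
    ])

seg = np.sin(np.linspace(0, 3 * np.pi, 90))   # stand-in excerpt
print(window_features(seg))
# Sliding this window over the selected scale yields six feature trends,
# which are normalized and combined by PCA into the segmentation statistic.
```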
In this paper, we extend existing variance-based sum uncertainty relations for pure states to those for mixed states by a mathematical approach. Furthermore, we show that the monotonicity of the standard deviation of observables induces a variance-based sum uncertainty relation. Finally, the multiobservable case is also discussed.
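A numeric sketch checking one elementary variance-based sum relation that holds for mixed states: by Robertson's inequality plus AM-GM, Var(A) + Var(B) ≥ |Tr(ρ[A,B])|. This is a textbook consequence, shown only to illustrate the objects involved, not the paper's new relations.

```python
import numpy as np

rng = np.random.default_rng(6)

def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def rand_state(n):
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = G @ G.conj().T                  # positive semidefinite
    return rho / np.trace(rho).real      # unit trace

def var(rho, A):
    m = np.trace(rho @ A).real
    return np.trace(rho @ A @ A).real - m**2

n = 4
A, B, rho = rand_herm(n), rand_herm(n), rand_state(n)
lhs = var(rho, A) + var(rho, B)
rhs = abs(np.trace(rho @ (A @ B - B @ A)))
print(lhs, ">=", rhs, lhs >= rhs)
```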
A quantum information-theoretic approach to understanding the foundations of quantum mechanics was identified as early as the 1950s, following Shannon's work. However, the subject has not seen sufficient advancement or rigorous development. In this paper we investigate the relationship between a general quantum mechanical observable and the von Neumann entropy. We find that the expectation values and the uncertainties of observables have bounds which depend on the entropy. The results also show that the von Neumann entropy is not just the uncertainty of the state; it also encompasses information about the expectation values and uncertainties of any observable, which depend on the observer's choice of a particular measurement. A reverse uncertainty relation is also derived for n quantum mechanical observables.
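A sketch of the quantities whose interplay the paper studies: the von Neumann entropy S(ρ) = −Tr(ρ ln ρ) of a state, and the expectation value and uncertainty of an observable in that state. The dimension and the random state and observable are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3
G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = G @ G.conj().T
rho /= np.trace(rho).real                  # random density matrix

w = np.linalg.eigvalsh(rho)                # eigenvalues of rho
S = -np.sum(w * np.log(w))                 # von Neumann entropy

M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (M + M.conj().T) / 2                   # a Hermitian observable
expA = np.trace(rho @ A).real
dA = np.sqrt(np.trace(rho @ A @ A).real - expA**2)
print("S(rho):", S, " <A>:", expA, " ΔA:", dA)
```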
Innovation processes result from a series of decisions, and these are influenced by the perceived risks and success metrics faced by the decision-maker. Aiming to understand whether innovation risks and success metrics change during and between innovations, four hypotheses were developed and a questionnaire-based survey was adopted, targeting managers of mechanically based manufacturers. Respondents were asked to indicate the importance of perceived risks throughout specific innovations for four domains of risk: marketing, technical, organizational and financial. Respondents were also asked to identify changes in the type and magnitude of innovation risks and success metrics. Descriptive and statistical tests were conducted to analyse the data. The results suggest that innovation risk changes in type and magnitude during and between innovations, and that success metrics change in type and magnitude during innovation. This study calls for situation-specific research to provide helpful advice to practitioners.