We discuss the systemic risk implied by the interbank exposures reconstructed with the maximum entropy (ME) method. The ME method severely underestimates the risk of interbank contagion because it assumes a fully connected network, while real interbank networks are sparse. Here, we formulate an algorithm for sparse network reconstruction, and we show numerically that it provides a more reliable estimation of the systemic risk.
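For context, a minimal sketch of the dense ME baseline the abstract criticises: the standard RAS / iterative proportional fitting that spreads exposures over a fully connected matrix given only each bank's total interbank assets and liabilities (the sparse algorithm itself is not detailed in the abstract, so none is shown here):

```python
import numpy as np

def me_reconstruction(assets, liabilities, n_iter=500, tol=1e-12):
    """Dense maximum-entropy (RAS) reconstruction of an interbank exposure
    matrix from observed row sums (interbank assets) and column sums
    (interbank liabilities). Assumes assets.sum() == liabilities.sum().
    This is the fully connected baseline, not the paper's sparse method."""
    assets = np.asarray(assets, dtype=float)
    liabilities = np.asarray(liabilities, dtype=float)
    X = np.outer(assets, liabilities) / liabilities.sum()  # ME starting point
    np.fill_diagonal(X, 0.0)                               # no self-exposures
    for _ in range(n_iter):
        X *= (assets / X.sum(axis=1))[:, None]             # match row sums
        X *= (liabilities / X.sum(axis=0))[None, :]        # match column sums
        if (np.abs(X.sum(axis=1) - assets).max() < tol
                and np.abs(X.sum(axis=0) - liabilities).max() < tol):
            break
    return X
```

Because every off-diagonal entry of the outer product is positive, the fitted matrix is complete, which is precisely the feature that leads the ME method to underestimate contagion risk.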
It is shown here how pieces of macroscopic thermodynamics can be used to generate microscopic probability distributions for generalized ensembles, thereby directly connecting macro-state axiomatics with microscopic results.
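As a reference point (the textbook construction, not necessarily the paper's exact route), maximizing the Gibbs-Shannon entropy subject to macroscopic constraints on the averages $\langle A_k\rangle$ yields the generalized-ensemble distribution:

```latex
\begin{align}
  \max_{p}\; S[p] &= -k_{\mathrm B}\sum_i p_i \ln p_i
  \quad\text{subject to}\quad
  \sum_i p_i = 1, \qquad \sum_i p_i\,A_k(i) = \langle A_k\rangle ,\\
  \Longrightarrow\quad
  p_i &= \frac{1}{Z(\lambda_1,\dots,\lambda_m)}
        \exp\!\Big(-\sum_{k}\lambda_k A_k(i)\Big), \qquad
  Z = \sum_i \exp\!\Big(-\sum_{k}\lambda_k A_k(i)\Big),
\end{align}
```

with the multipliers $\lambda_k$ fixed by the macroscopic data, i.e. by the observed values of $\langle A_k\rangle$.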
The analysis of space-time data from complex, real-life phenomena requires the use of flexible and physically motivated covariance functions. In most cases, it is not possible to explicitly solve the equations of motion for the fields or the respective covariance functions. In the statistical literature, covariance functions are often based on mathematical constructions. In this paper, we propose deriving space-time covariance functions by solving "effective equations of motion", which can be used as statistical representations of systems with diffusive behavior. In particular, we propose to formulate space-time covariance functions based on an equilibrium effective Hamiltonian using linear response theory. The effective space-time dynamics is then generated by a stochastic perturbation around the equilibrium point of the classical field Hamiltonian, leading to an associated Langevin equation. We employ a Hamiltonian which extends the classical Gaussian field theory by including a curvature term and leads to a diffusive Langevin equation. Finally, we derive new forms of space-time covariance functions.
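A sketch of this construction, assuming a quadratic field Hamiltonian with gradient and curvature terms (the coefficients shown are illustrative rather than the paper's exact parametrization):

```latex
\begin{align}
  H[\phi] &= \tfrac{1}{2}\int \mathrm{d}\mathbf{x}\,
     \Big[\phi^{2} + \xi^{2}(\nabla\phi)^{2} + \xi^{4}(\nabla^{2}\phi)^{2}\Big],\\
  \partial_t \phi(\mathbf{x},t) &= -D\,\frac{\delta H}{\delta\phi} + \eta(\mathbf{x},t),
  \qquad
  \langle \eta(\mathbf{x},t)\,\eta(\mathbf{x}',t')\rangle
     = 2D\,\delta(\mathbf{x}-\mathbf{x}')\,\delta(t-t'),\\
  \tilde{C}(\mathbf{k},\omega) &=
     \frac{2D}{\omega^{2} + D^{2}\big(1 + \xi^{2}k^{2} + \xi^{4}k^{4}\big)^{2}},
\end{align}
```

so that a space-time covariance function follows from the spectral density by inverse Fourier transform over $\mathbf{k}$ and $\omega$.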
The fundamental physics of cuprate superconductivity is still much debated after three decades of research. In contrast to phononic or polaronic roots, some major theories promote a magnetic origin. In this perspective, we review cuprate magnetism, as probed by muon-spin rotation (μSR) in RBa2Cu3O7−δ (RBCO), Bi2Sr2CaCu2O8+x (Bi2212) and Tl2Ba2Ca2Cu3O10+x (Tl2223). Site-search RBCO studies show that muons localize and probe at sites away from the superconducting CuO2 planes. Maximum entropy (MaxEnt, ME) analysis of transverse-field μSR data of GdBa2Cu3O7−δ (GdBCO) indicates that the muon probes an undisturbed insulating environment, allowing μSR to detect (weak) magnetic features in these cuprates. Concerning Varma's predicted loop currents, MaxEnt has shown weak μSR signals for GdBCO in zero field above and below the critical temperature, Tc; these are near the predicted ∼100 Oe. Concerning Zhang's predicted antiferromagnetism (AF) connected to the vortex cores, we have observed Lorentzian relaxation of cuprate vortex signals below half Tc, consistent with AF-broadening effects. For both Bi2212 and Tl2223, Lorentzians describe the μSR vortex signals much better than Gaussians below 0.4Tc, indicating that extra AF fields occur near and in the vortex cores. In sum, both our MaxEnt-μSR (ME-μSR) studies point toward magnetic roots of cuprate superconductivity.
The maximum entropy approach operating with a quite general entropy measure and constraint is considered. It is demonstrated that for a conditional or parametrized probability distribution f(x|μ), there is a "universal" relation between the entropy rate and the functions appearing in the constraint. This relation allows one to translate the specificities of the observed behavior θ(μ) into the amount of information on the relevant random variable x at different values of the parameter μ. It is shown that the recently proposed variational formulation of the entropic functional can be obtained as a consequence of this relation, that is, from the maximum entropy principle. This resolves certain puzzling points that appeared in the variational approach.
In the visual context, a reasoning system should be capable of inferring a scene description using evidence derived from data-driven processing of the iconic image data. This evidence may consist of a set of curvilinear boundaries, which are obtained by grouping local edge data into extended features. Using linear primitives, a framework is described which represents the information contained in pre-formed models of possible objects in the scene, and in the segmented scenes themselves. A method based on maximum entropy is developed which assigns measures of likelihood for the presence of objects in the two-dimensional image. This method is applied to and evaluated on real and simulated image data, and the effectiveness of the approach is discussed.
The estimation of the crystallite orientation distribution function based on the leading texture coefficients can be rephrased as a maximum entropy moment problem. In this paper, we prove the solvability of these moment problems under quite general assumptions on the moment functions, a result that carries over to general locally compact, σ-compact Hausdorff topological groups.
In an attempt to understand the nature of information processing at the neuronal level and its relation to normal cognition at the behavioral level, this paper presents a neurally plausible computational theory of probability inference inspired by certain biological properties of neurons. The probability inference problem that the neural networks of this theory solve is the production of either probabilities or likelihoods for one set of events given another set of events. The theory describes how a single neuron might create a probability in an analog fashion. It describes how certain optimal computational principles required for probability inference (e.g., Bayes' rule) might be implemented at the neuronal level despite apparent limitations in the computational capacity of neurons. In fact, it is the fundamental properties of neurons that lead to the particular neuronal computation. Finally, this report attempts to connect this neural network theory to existing psychological models that assume Bayesian inference schemes.
Among all probability distributions, the power-law distribution is an intriguing one that has been studied by many researchers. However, its derivation remains an inconclusive topic. Among the various methods for deriving a distribution, the maximum entropy principle occupies a special place. The entropy of a random permutation set (RPS), a recently proposed uncertainty measure for RPS, has distinctive features, and deriving the power-law distribution by maximizing it is a promising route. In this paper, suitable constraints are imposed on the entropy of RPS, and the power-law distribution is then derived from the maximum entropy principle. Numerical experiments illustrate the characteristics of the proposed derivation.
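For orientation, here is the classical Shannon-entropy analogue of such a derivation: maximizing differential entropy on $[x_{\min},\infty)$ subject to a fixed logarithmic mean already yields a power law (the paper replaces the Shannon functional with the entropy of RPS):

```latex
\begin{align}
  \max_{p}\; -\!\int_{x_{\min}}^{\infty} p(x)\ln p(x)\,\mathrm{d}x
  \quad\text{s.t.}\quad
  \int p(x)\,\mathrm{d}x = 1, \qquad
  \int p(x)\ln x\,\mathrm{d}x = \mu ,\\
  \Longrightarrow\quad
  p(x) = \frac{\alpha-1}{x_{\min}}\Big(\frac{x}{x_{\min}}\Big)^{-\alpha},
  \qquad
  \alpha = 1 + \frac{1}{\mu - \ln x_{\min}} .
\end{align}
```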
Due to their simple applicability, score systems are in widespread use as a tool for decision making.
Unfortunately, they are not apt to take into account interdependencies among the values that are input to them when deciding an actual application case, a drawback which is overcome by the more powerful probabilistic systems. In order to analyze which assumptions are inherent in score systems, we translate them as faithfully as possible into probabilistic systems, thus making the technical machinery of the latter available for this analysis task.
Such a translation (as also given in [1]) does reveal some properties of score systems, but leads to an exponential number of probabilistic rules, which rules it out for practical use. For this reason we also developed further translations into probabilistic systems which keep the simplicity of a score system (i.e. they use the same number of rules as the score system). Moreover, the resulting probabilistic systems show their structure more explicitly than score systems, and they are also open to the addition of further knowledge.
This paper considers the problem and appropriateness of filling in missing conditional probabilities in causal networks by the use of maximum entropy. Results generalizing earlier work of Rhodes, Garside & Holmes are proved straightforwardly by the direct application of principles satisfied by the maximum entropy inference process under the assumed uniqueness of the maximum entropy solution. It is, however, demonstrated that the implicit assumption of uniqueness in the Rhodes, Garside & Holmes papers may fail even in the case of inverted trees. An alternative approach to filling in missing values using the limiting centre of mass inference process is then described which does not suffer this shortcoming, is trivially computationally feasible, and arguably enjoys more justification than taking maximum entropy values in contexts where the probabilities are objective (for example, derived from frequencies).
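A minimal worked example of the fill-in problem (not taken from the paper): for a two-node network $a \to b$ with $P(a)=p$ and $P(b\mid\neg a)=q$ given but $P(b\mid a)=x$ missing, the joint entropy decomposes as

```latex
\begin{equation}
  H \;=\; H(p) + p\,H(x) + (1-p)\,H(q),
  \qquad H(t) = -t\ln t - (1-t)\ln(1-t),
\end{equation}
```

which is maximized at $x = \tfrac{1}{2}$ regardless of $q$: maximum entropy fills the missing conditional probability with whatever value maximizes the residual conditional entropy.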
For an expert system having a consistent set of linear constraints, it is known that the Method of Tribus may be used to determine a probability distribution which exhibits maximised entropy. The method is extended here to include independence constraints (Accommodation).
The paper proceeds to discuss this extension and its limitations, then goes on to advance a technique for determining a small set of independencies which can be added to the linear constraints required in a particular representation of an expert system called a causal network, so that the Maximum Entropy and Causal Networks methodologies give matching distributions (Emulation). This technique may also be applied in cases where no initial independencies are given and the linear constraints are incomplete, in order to provide an optimal ME fill-in for the missing information.
The desire to use Causal Networks as Expert Systems even when the causal information is incomplete and/or when non-causal information is available has led researchers to look into the possibility of utilising Maximum Entropy. If this approach is taken, the known information is supplemented by maximising entropy to provide a unique initial probability distribution which would otherwise have been a consequence of the known information and the independence relationships implied by the network. Traditional maximising techniques can be used if the constraints are linear but the independence relationships give rise to non-linear constraints. This paper extends traditional maximising techniques to incorporate those types of non-linear constraints that arise from the independence relationships and presents an algorithm for implementing the extended method. Maximising entropy does not involve the concept of "causal" information. Consequently, the extended method will accept any mutually consistent set of conditional probabilities and expressions of independence. The paper provides a small example of how this property can be used to provide complete causal information, for use in a causal network, when the known information is incomplete and not in a causal form.
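To make the nature of the non-linearity concrete (a generic sketch, not the paper's notation): over the atomic state probabilities $p_j$, the given conditional and marginal probabilities are linear constraints, whereas an independence statement between two events $A$ and $B$ is quadratic, so the extended Lagrangian mixes the two types:

```latex
\begin{align}
  \sum_j a_{ij}\,p_j &= b_i
  && \text{(linear: given probabilities)},\\
  \sum_{j\in A\cap B} p_j &= \Big(\sum_{j\in A} p_j\Big)\Big(\sum_{j\in B} p_j\Big)
  && \text{(non-linear: independence of $A$ and $B$)},\\
  L(p,\lambda,\mu) &= -\sum_j p_j\ln p_j
    + \sum_i \lambda_i\Big(\sum_j a_{ij}\,p_j - b_i\Big)
    + \sum_k \mu_k\, g_k(p),
\end{align}
```

where each $g_k(p)$ is one of the quadratic independence constraints written in the form shown in the second line.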
We propose a new method for evaluating fixed strike Asian options using moments. In particular we show that the density of the logarithm of the arithmetic average is uniquely determined from its moments. Resorting to the maximum entropy density, we show that the first four moments are sufficient to recover with great accuracy the true density of the average. Then the Asian option price is estimated with high accuracy. We compare the proposed method with others based on the computation of moments.
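A minimal sketch of the moment-matching step, assuming the first four raw moments of $Y=\log(\text{average})$ are given and a maximum entropy density of exponential-polynomial form is fitted on a finite grid (the paper's numerical scheme may differ, e.g. in quadrature and optimizer):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density_from_moments(moments, grid):
    """Fit p(y) ∝ exp(lam_1*y + ... + lam_4*y^4) on `grid` so that its first
    four raw moments match `moments` (the moments of the log-average).
    Solves the convex dual: minimize log Z(lam) - lam . moments."""
    m = np.asarray(moments, dtype=float)
    powers = np.vstack([grid**k for k in range(1, 5)])  # shape (4, len(grid))

    def dual(lam):
        logZ = np.log(np.trapz(np.exp(lam @ powers), grid))
        return logZ - lam @ m

    lam = minimize(dual, x0=np.zeros(4), method="Nelder-Mead").x
    p = np.exp(lam @ powers)
    return p / np.trapz(p, grid)

# With the fitted density p on `grid`, a fixed-strike Asian call could then be
# approximated by exp(-r*T) * np.trapz(np.maximum(np.exp(grid) - K, 0.0) * p, grid).
```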
Machine transliteration is the automatic generation of phonetic equivalents in a target language for a given source-language term, which is useful in many cross-language applications. Transliteration between distant languages, e.g. English and Chinese, is challenging because their phonological dissimilarities are significant. Existing techniques are typically rule-based or based on the statistical noisy-channel model, and their accuracy is low because of intrinsic limitations in modeling transcription details. We propose direct statistical approaches to transliterating phoneme sequences for English–Chinese name translation. Aiming to improve performance, we propose two direct models. First, we adopt finite state automata for a direct mapping from English phonemes to a set of rudimentary Chinese phonetic symbols plus mapping units dynamically discovered during training; an effective algorithm for aligning phoneme chunks is proposed. Second, contextual features of each phoneme are taken into consideration by means of the Maximum Entropy formalism, and the model is further refined with a precise alignment scheme using phoneme chunks. We compare our approaches with a noisy-channel baseline that applies the IBM SMT model and demonstrate their superiority.
Most example-based machine translation (EBMT) systems handle their translation examples using heuristic measures based on human intuition. However, such heuristic rules are usually hard to organize effectively so as to scale up, incorporate diverse features, and cover more language phenomena and larger domains. In this paper, we use a machine learning approach for EBMT model design instead of human intuition. A maximum entropy (ME) model is introduced in order to effectively incorporate the different kinds of features inherent in the translation examples. At the same time, a multi-dimensional feature space is formally constructed to include various features of different aspects. In the experiments, the proposed model shows significant performance improvement.
We perform a rigorous stochastic analysis of both deterministic and stochastic cellular automata. The theory takes a mesoscopic view, i.e. it works with probabilities instead of the individual configurations used in micro-simulations. An exact stochastic analysis can be done using the theory of Markov processes, but it is restricted to small problems. For larger problems we compute the distribution using a factorization into marginals. These marginals are then approximated from the given low-order marginals by iterative proportional fitting, following the maximum entropy principle; this method was developed in probabilistic logic. Our method leads to a set of difference equations which have to be iterated numerically. We use the exact methods as well as our approximations to investigate the popular nonlinear voter model (NLVM). We show that the "phase transitions" reported in recent papers are artifacts of the mean-field approximation; they do not show up in the real automata. There are many mathematical peculiarities of the NLVM which raise doubts concerning the suitability of the model. As an alternative we propose the Exponential Voter Model, which depends on a single parameter only, the inverse "temperature" β. The proposed method of stochastic analysis is not restricted to cellular automata and can be applied to more general discrete stochastic systems.
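For illustration, a minimal generic sketch of the iterative proportional fitting step referred to above: rescaling a joint table so that it matches given low-order marginals, which is the maximum-entropy-closest adjustment of the starting table (a toy routine, not the NLVM-specific implementation):

```python
import numpy as np

def ipf(joint, marginals, axes, n_iter=200, tol=1e-10):
    """Iterative proportional fitting: rescale `joint` so that its marginals
    over each axis tuple in `axes` (given in increasing order) match the
    corresponding target arrays in `marginals`."""
    p = joint.copy()
    for _ in range(n_iter):
        max_err = 0.0
        for target, ax in zip(marginals, axes):
            other = tuple(i for i in range(p.ndim) if i not in ax)
            current = p.sum(axis=other)                       # current marginal
            max_err = max(max_err, np.abs(current - target).max())
            ratio = np.divide(target, current,
                              out=np.zeros_like(current), where=current > 0)
            p *= np.expand_dims(ratio, other)                 # rescale the table
        if max_err < tol:
            break
    return p

# Example: fit a uniform 2x2x2 table to two consistent pairwise marginals.
p0 = np.full((2, 2, 2), 1 / 8)
m01 = np.array([[0.3, 0.2], [0.1, 0.4]])      # target marginal over axes (0, 1)
m12 = np.array([[0.25, 0.15], [0.25, 0.35]])  # target marginal over axes (1, 2)
p = ipf(p0, [m01, m12], [(0, 1), (1, 2)])
```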
We propose a novel screening method targeting genotype interactions associated with disease risks. The proposed method extends the maximum entropy conditional probability model to address disease occurrences over time. Continuous occurrence times are grouped into intervals. The model estimates the conditional distribution over the disease occurrence intervals given individual genotypes by maximizing the corresponding entropy subject to constraints linking genotype interactions to time intervals. The EM algorithm is employed to handle observations with uncertainty, for which the disease occurrence is censored. A stepwise greedy search is proposed to screen a large number of candidate constraints, and the minimum description length is employed to select the optimal set of constraints. Extensive simulations show that five or so quantile-dependent intervals are sufficient to categorize disease outcomes into different risk groups. Performance depends on sample size, number of genotypes, and minor allele frequencies. The proposed method outperforms the likelihood ratio test, the Lasso, and a previous maximum entropy method with only binary (disease occurrence, non-occurrence) outcomes. Finally, a genome-wide association study (GWAS) of type 1 diabetes patients is used to illustrate our method. Novel one-genotype and two-genotype interactions associated with neuropathy are identified.
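In generic form (a sketch of the model class, not the paper's exact parametrization), a maximum entropy conditional probability model over $K$ occurrence intervals given a genotype vector $g$ takes the familiar log-linear shape

```latex
\begin{equation}
  P(T \in I_k \mid g) \;=\;
  \frac{\exp\!\Big(\sum_j \lambda_j\, f_j(g, I_k)\Big)}
       {\sum_{k'=1}^{K}\exp\!\Big(\sum_j \lambda_j\, f_j(g, I_{k'})\Big)},
\end{equation}
```

where each feature $f_j$ pairs a one- or two-genotype interaction with a time interval and the multipliers $\lambda_j$ are fitted subject to the corresponding constraints, with EM handling the censored occurrence times.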
A neutrosophic set (NS), a part of neutrosophy theory, studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. The neutrosophic set is a powerful general formal framework that has been proposed recently. However, the neutrosophic set needs to be specified from a technical point of view. We apply the neutrosophic set to the image domain and define some concepts and operations for image thresholding.
The image G is transformed into the NS domain, which is described using three subsets T, I and F. The entropy of the neutrosophic set is defined and employed to evaluate the indeterminacy. A new λ-mean operation is proposed to reduce the set's indeterminacy. Finally, the proposed method is employed to perform image thresholding. We have conducted experiments on a variety of images. The experimental results demonstrate that the proposed approach can select the thresholds automatically and effectively. In particular, it can handle "clean" images as well as images corrupted by different, and even multiple, kinds of noise without knowledge of the noise type, which is the most difficult case for image thresholding.
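A compact sketch of such a pipeline, assuming the commonly used definitions of T, I and F in the neutrosophic image literature (local-mean-based truth, absolute-deviation-based indeterminacy); the paper's exact operators and entropy criterion may differ:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neutrosophic_threshold(g, window=5, lam=0.1):
    """Transform image g into (T, I, F), reduce indeterminacy with a
    lambda-mean operation, then threshold T. Illustrative sketch only."""
    g = g.astype(float)
    g_mean = uniform_filter(g, size=window)                      # local mean
    T = (g_mean - g_mean.min()) / (g_mean.max() - g_mean.min() + 1e-12)
    delta = np.abs(g - g_mean)
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
    F = 1.0 - T

    # lambda-mean operation: where indeterminacy is high, replace T and F by
    # their local means, which reduces the set's indeterminacy.
    T = np.where(I >= lam, uniform_filter(T, size=window), T)
    F = np.where(I >= lam, uniform_filter(F, size=window), F)

    # Simple threshold on T at its mean; an entropy-based criterion could be
    # substituted here to follow the abstract more closely.
    return (T >= T.mean()).astype(np.uint8)
```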
Due to the widespread use of 3D models in video games and virtual environments, there is a growing interest in 3D scene generation, scene understanding and 3D model retrieval. In this paper, we introduce a data-driven 3D scene generation approach from a Maximum Entropy (MaxEnt) model selection perspective. Using this model selection criterion, new scenes can be sampled by matching a set of contextual constraints that are extracted from training and synthesized scenes. Starting from a set of randomly synthesized configurations of objects in 3D, the MaxEnt distribution is iteratively sampled and updated until the constraint statistics of the training and synthesized scenes match, indicating the generation of plausible synthesized 3D scenes. To illustrate the proposed methodology, we use 3D training desk scenes composed of seven predefined objects with different position, scale and orientation arrangements. After applying the MaxEnt framework, the synthesized scenes show that the proposed strategy can generate scenes reasonably similar to the training examples without any human supervision during sampling.