This paper shows that a "principle of complete ignorance" plays a central role in decisions based on Dempster belief functions. Such belief functions occur when, in a first stage, a random message is received and then, in a second stage, a true state of nature obtains. The uncertainty about the random message in the first stage is assumed to be probabilized, in agreement with the Bayesian principles. For the uncertainty in the second stage no probabilities are given. The Bayesian and belief function approaches part ways in the processing of the uncertainty in the second stage. The Bayesian approach requires that this uncertainty also be probabilized, which may require a resort to subjective information. Belief functions follow the principle of complete ignorance in the second stage, which permits strict adherence to objective inputs.
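The two-stage structure described above can be illustrated with a small, purely hypothetical Python sketch (the message probabilities and compatibility sets below are assumptions for the example): the first stage is probabilized, each message merely restricts the possible states, and belief and plausibility are obtained without probabilizing the second stage.

# Minimal sketch of the two-stage setup behind a Dempster belief function:
# a probabilized random message and a multivalued mapping to states of
# nature, with complete ignorance within each compatibility set.

# Hypothetical messages with objective probabilities ...
messages = {"m1": 0.5, "m2": 0.3, "m3": 0.2}
# ... each message only restricts the true state to a subset.
compatible_states = {
    "m1": {"s1"},
    "m2": {"s1", "s2"},
    "m3": {"s2", "s3"},
}

def belief(event):
    """Probability that the received message forces the true state into `event`."""
    return sum(p for m, p in messages.items() if compatible_states[m] <= event)

def plausibility(event):
    """Probability that the received message is compatible with `event`."""
    return sum(p for m, p in messages.items() if compatible_states[m] & event)

print(belief({"s1"}), plausibility({"s1"}))              # 0.5, 0.8
print(belief({"s2", "s3"}), plausibility({"s2", "s3"}))  # 0.2, 0.5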
A hierarchical clustering approach is proposed for reducing the number of focal elements in a crisp or fuzzy belief function, yielding strong inner and outer approximations. At each step of the proposed algorithm, two focal elements are merged, and the mass is transferred to their intersection or their union. The resulting approximations allow the calculation of lower and upper bounds on the belief and plausibility degrees induced by the conjunctive or disjunctive sum of any number of belief structures. Numerical experiments demonstrate the effectiveness of this approach.
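The following hedged Python sketch shows a single merging step of the kind described above; the criterion for selecting which pair of focal elements to merge is omitted, and the mass function and helper name are assumptions for the example.

# One merging step: the pooled mass of two focal elements goes either to
# their union (outer approximation) or to their intersection (inner one).

def merge_step(mass, a, b, mode="outer"):
    """Merge focal elements a and b of a mass function.

    mass : dict mapping frozenset focal elements to their mass.
    mode : 'outer' sends the pooled mass to a | b, 'inner' to a & b.
    """
    target = (a | b) if mode == "outer" else (a & b)
    pooled = mass.pop(a) + mass.pop(b)
    mass[target] = mass.get(target, 0.0) + pooled
    return mass

m = {frozenset({1, 2}): 0.4, frozenset({2, 3}): 0.4, frozenset({1, 2, 3}): 0.2}
print(merge_step(dict(m), frozenset({1, 2}), frozenset({2, 3}), mode="outer"))
# the pooled mass 0.8 joins the union {1, 2, 3}
print(merge_step(dict(m), frozenset({1, 2}), frozenset({2, 3}), mode="inner"))
# the pooled mass 0.8 moves to the intersection {2}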
This paper addresses the approximation of belief functions by probability functions, where the approximation is based on minimizing the Euclidean distance. First, we simplify this optimization problem so that it becomes equivalent to a standard problem in linear algebra. For the simplified optimization problem, we provide the analytic solution. Furthermore, we show that for Dempster-Shafer belief the simplified optimization problem is equivalent to the original one.
In terms of semantics, we compare the approximation of belief functions to various alternative approaches, e.g. the pignistic transformation for Dempster-Shafer belief and the Shapley value for fuzzy belief functions. For the latter, we give an example where the approximation method has some obvious statistical advantages.
Additionally, for the approximation of additive belief functions, we can provide a semantical justification.
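A purely numerical sketch of the underlying optimization (not the analytic solution derived in the paper) can be written with scipy: both the belief function and the candidate probability functions are viewed as vectors indexed by the non-empty subsets of S, and the Euclidean distance between them is minimized over the probability simplex. The example mass function is an assumption.

from itertools import combinations
import numpy as np
from scipy.optimize import minimize

S = ["a", "b", "c"]
subsets = [frozenset(c) for r in range(1, len(S) + 1)
           for c in combinations(S, r)]

# Example mass function and its belief function, as a vector over subsets.
m = {frozenset({"a"}): 0.3, frozenset({"a", "b"}): 0.5, frozenset(S): 0.2}
bel = np.array([sum(v for k, v in m.items() if k <= A) for A in subsets])

def prob_vector(p):
    """Set function A -> sum of p(x) for x in A, as a vector over `subsets`."""
    px = dict(zip(S, p))
    return np.array([sum(px[x] for x in A) for A in subsets])

def distance(p):
    return np.sum((prob_vector(p) - bel) ** 2)

# Minimize the squared Euclidean distance over the probability simplex.
cons = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
res = minimize(distance, x0=np.full(len(S), 1 / len(S)),
               bounds=[(0, 1)] * len(S), constraints=cons)
print(dict(zip(S, np.round(res.x, 3))))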
Belief functions can only be combined by Dempster's rule when they are based on independent items of evidence. This paper proposes a method for handling the case where there is some probabilistic dependence among the items of evidence. The method relies on compact representations of joint probability distributions on the assumption variables associated with the belief functions. These distributions are then used to compute degrees of support of hypotheses of interest. It is shown that the theory of hints is the appropriate general framework for this method.
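A minimal, hypothetical sketch of this idea in Python: each item of evidence is a multivalued mapping from its assumption variable to subsets of states, the assumptions carry a joint (possibly non-product) distribution, and the degree of support of a hypothesis is the conditional probability that the non-contradictory assumption pairs force the state into it. All names and numbers below are illustrative assumptions.

# Focal sets of each hint, indexed by its assumption variable.
gamma1 = {"u1": {"s1", "s2"}, "u2": {"s2", "s3"}}
gamma2 = {"v1": {"s2"}, "v2": {"s1", "s3"}}

# Joint distribution on the assumptions, encoding their dependence
# (a product distribution would recover Dempster's rule).
joint = {("u1", "v1"): 0.4, ("u1", "v2"): 0.1,
         ("u2", "v1"): 0.2, ("u2", "v2"): 0.3}

def support(hypothesis):
    """Probability that the combined assumptions force the state into
    `hypothesis`, conditioned on the non-contradictory assumption pairs."""
    num, norm = 0.0, 0.0
    for (u, v), p in joint.items():
        focal = gamma1[u] & gamma2[v]
        if focal:                      # discard contradictory pairs
            norm += p
            if focal <= hypothesis:
                num += p
    return num / norm

print(support({"s2"}))          # degree of support for "the state is s2"
print(support({"s1", "s3"}))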
Evidence theory has been acknowledged as an important approach to dealing with uncertain, incomplete and imperfect information. In this framework, different formal techniques have been developed in order to address information aggregation and conflict handling. The variety of proposed models clearly demonstrates the range of possible underlying assumptions in combination rules. In this paper we present a review of some of the most important methods of combination and conflict handling in order to introduce a more generic rule for the aggregation of uncertain evidence. We claim that models based on mass multiplication can address problem domains where randomness and stochastic independence are the dominant characteristics of the information sources, although these assumptions are not always satisfied in practical cases. The combination rule proposed here is not only capable of retrieving other classical models, but also enables us to define new families of aggregation rules with more flexibility regarding dependency and normalization assumptions.
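For reference, the classical conjunctive rule based on mass multiplication (Dempster's rule with normalization), which the more generic rules discussed above aim to relax, can be sketched as follows; the two mass functions are illustrative assumptions.

def dempster_combine(m1, m2):
    """Combine two mass functions given as dicts {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb   # mass committed to contradiction
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are not combinable")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

m1 = {frozenset({"x"}): 0.6, frozenset({"x", "y"}): 0.4}
m2 = {frozenset({"y"}): 0.5, frozenset({"x", "y"}): 0.5}
print(dempster_combine(m1, m2))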
The belief structure resulting from the combination of consonant and independent marginal random sets is not, in general, consonant. Also, the complexity of such a structure grows exponentially with the number of combined random sets, making it quickly intractable for computations. In this paper, we propose a simple guaranteed consonant outer approximation of this structure. The complexity of this outer approximation does not increase with the number of marginal random sets (i.e., of dimensions), making it easier to handle in uncertainty propagation. Features and advantages of this outer approximation are then discussed, with the help of some illustrative examples.
In this paper we study some properties of the polytope of belief functions on a finite referential. These properties can be used in the problem of identification of a belief function from sample data. More concretely, we study the set of isometries, the set of invariant measures and the adjacency structure. From these results, we prove that the polytope of belief functions is not an order polytope if the referential has more than two elements. Similar results are obtained for plausibility functions.
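The polytope in question can be illustrated with standard facts (not the paper's results): the categorical belief functions play the role of vertices, and any belief function is their convex combination weighted by its Möbius inverse, i.e. its mass function. The referential and mass function below are assumptions for the example.

from itertools import combinations

X = ["a", "b", "c"]
subsets = [frozenset(c) for r in range(1, len(X) + 1)
           for c in combinations(X, r)]

def bel_from_mass(m):
    return {A: sum(v for B, v in m.items() if B <= A) for A in subsets}

# Vertices of the polytope: categorical mass functions m(B) = 1.
vertices = {B: bel_from_mass({B: 1.0}) for B in subsets}

# An arbitrary mass function ...
m = {frozenset({"a"}): 0.2, frozenset({"a", "b"}): 0.3, frozenset(X): 0.5}
bel = bel_from_mass(m)

# ... is recovered as the convex combination of vertices weighted by m.
mix = {A: sum(m.get(B, 0.0) * vertices[B][A] for B in subsets) for A in subsets}
assert all(abs(mix[A] - bel[A]) < 1e-12 for A in subsets)
print("the belief function lies in the convex hull of the categorical vertices")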
This paper introduces two new fusion rules for combining quantitative basic belief assignments. These rules, although very simple, have not been proposed in the literature so far and could serve as useful alternatives because of their low computational cost with respect to the recent advanced Proportional Conflict Redistribution rules developed in the DSmT framework.
Re-identification and record linkage are tools used to measure disclosure risk in data privacy. Given two data files, record linkage establishes links between those records that correspond to the same individual. These links are often expressed in terms of probability distributions.
This paper presents a review of a formalization of re-identification in terms of compatible belief functions. This formalization makes it possible to define the set of re-identification methods that are relevant for the estimation of disclosure risk in privacy protection. Any re-identification method that does not fulfill the criteria for being in this set may be discarded in a theoretical disclosure risk analysis.
The focus in this paper is on providing examples of how this formalization can be applied in a few different scenarios in data privacy.
We investigate the role of some game solutions, such as the Shapley and Banzhaf values, as probability transformations. The first one coincides with the pignistic transformation proposed in the Transferable Belief Model; the second one is not efficient in general, leading us to consider its normalized version. We study a number of particular models of lower probabilities: minitive measures, coherent lower probabilities, as well as the lower probabilities induced by comparative or distortion models. For them, we provide some alternative expressions of the Shapley and Banzhaf values and study under which conditions they belong to the core of the lower probability.
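For belief functions the first of these transformations admits a very compact sketch: the Shapley value of the associated game reduces to the pignistic transformation, which shares each focal mass equally among the elements of its focal set. The example mass function below is an assumption.

def pignistic(m):
    """Pignistic/Shapley probability of a mass function {frozenset: mass}."""
    betp = {}
    for focal, mass in m.items():
        share = mass / len(focal)     # equal share for each element of the focal set
        for x in focal:
            betp[x] = betp.get(x, 0.0) + share
    return betp

m = {frozenset({"a"}): 0.5, frozenset({"a", "b", "c"}): 0.3, frozenset({"b"}): 0.2}
print(pignistic(m))   # approximately {'a': 0.6, 'b': 0.3, 'c': 0.1}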
Under an epistemic interpretation, an upper probability can be regarded as equivalent to the set of probability measures it dominates, sometimes referred to as its core. In this paper, we study the properties of the number of extreme points of the core of a possibility measure, and investigate in detail those associated with (uni- and bi-)variate p-boxes, that model the imprecise information about a cumulative distribution function.
A common procedure for selecting a particular density from a given class of densities is to choose one with maximum entropy. The problem addressed here is this. Let S be a finite set and let B be a belief function on 2^S. Then B induces a density on 2^S, which in turn induces a host of densities on S. Provide an algorithm for choosing from this host of densities one with maximum entropy.
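A purely numerical sketch of the problem (not the algorithm provided in the article): each focal mass is distributed over the elements of its focal set, and the allocation that maximizes the entropy of the induced density on S is found with a generic solver. The example belief function is an assumption.

import numpy as np
from scipy.optimize import minimize

S = ["a", "b", "c"]
m = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3, frozenset(S): 0.2}

focals = list(m)
# One allocation variable per (focal set, element of that focal set) pair.
pairs = [(A, x) for A in focals for x in A]

def density(w):
    p = {x: 0.0 for x in S}
    for (A, x), wi in zip(pairs, w):
        p[x] += wi
    return np.array([p[x] for x in S])

def neg_entropy(w):
    p = density(w)
    return float(np.sum(p * np.log(np.clip(p, 1e-12, None))))

# Each focal mass must be fully distributed over its own elements.
cons = [{"type": "eq",
         "fun": (lambda w, A=A: sum(wi for (B, _), wi in zip(pairs, w) if B == A)
                 - m[A])}
        for A in focals]

x0 = [m[A] / len(A) for A, _ in pairs]          # start from the even split
res = minimize(neg_entropy, x0, bounds=[(0, 1)] * len(pairs), constraints=cons)
print(dict(zip(S, np.round(density(res.x), 3))))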
This article is an extension of the results of two earlier articles. In [J. Schubert, “On nonspecific evidence”, Int. J. Intell. Syst. 8 (1993) 711–725] we established within Dempster-Shafer theory a criterion function called the metaconflict function. With this criterion we can partition into subsets a set of several pieces of evidence with propositions that are weakly specified in the sense that it may be uncertain to which event a proposition is referring. In a second article [J. Schubert, “Specifying nonspecific evidence”, in “Cluster-based specification techniques in Dempster-Shafer theory for an evidential intelligence analysis of multiple target tracks”, Ph.D. Thesis, TRITA-NA-9410, Royal Institute of Technology, Stockholm, 1994, ISBN 91–7170–801–4] we not only found the most plausible subset for each piece of evidence, but also found, for every subset, the plausibility that the piece of evidence belongs to that subset. In this article we aim to find a posterior probability distribution regarding the number of subsets. We use the idea that each piece of evidence in a subset supports the existence of that subset to the degree that this piece of evidence supports anything at all. From this we can derive a bpa that is concerned with the question of how many subsets we have. That bpa can then be combined with a given prior domain probability distribution in order to obtain the sought-after posterior domain distribution.
We use techniques from fuzzy mathematics to develop metrics for measuring how well the US is achieving its overarching national security goal: to protect itself, its allies and its friends from both nuclear attack and coercive pressures by states possessing nuclear weapons. The metrics are linear equations assigning weights to each of the six components of the overarching goal. These weights are based on expert opinions. We determine the degree to which the experts consider certain goals the most important. We conclude by examining the degree of agreement among the experts.