Cluster validation is an essential technique in all clustering applications. Several validation methods measure the accuracy of a cluster structure; typical methods are geometric, where only distance and membership form the core of validation. Yao's decision theory is a novel approach to cluster validation, which introduces loss calculations and a probability-based measure for determining cluster quality. Conventional rough set algorithms have utilized this validity measure. This paper proposes decision theory as a new validation scheme for Rough-Fuzzy clustering, resolving the loss and probability calculations needed to predict a risk measure for clustering techniques. Experiments with synthetic and UCI datasets show that the proposed index identifies the optimal number of clusters and overcomes the drawbacks of traditional validation frameworks. The index can also be applied to other clustering algorithms, which extends its usefulness to business-oriented data mining.
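As a rough illustration of how such a risk-based index can be computed, the sketch below converts k-means distances into soft memberships and scores each candidate number of clusters by a three-way (accept / defer / reject) Bayesian risk. The loss values and the soft-membership construction are assumptions made for the example, not the paper's formulation.

```python
# Illustrative decision-theoretic risk index for soft clusterings.
# Loss values and the soft-membership construction are assumptions,
# not the paper's exact formulation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three-way decision losses: (cost if object belongs, cost if it does not)
LOSS = {"accept": (0.0, 1.0),    # lambda_PP, lambda_PN
        "defer":  (0.25, 0.4),   # lambda_BP, lambda_BN
        "reject": (1.0, 0.0)}    # lambda_NP, lambda_NN

def decision_risk(U):
    """Total Bayesian risk of the cheapest action per object, given memberships U (n x k)."""
    p = U.max(axis=1)                      # probability of the most likely cluster
    risks = np.stack([lp * p + ln * (1.0 - p) for lp, ln in LOSS.values()])
    return risks.min(axis=0).sum()

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
for k in range(2, 7):
    d = KMeans(n_clusters=k, n_init=10, random_state=0).fit_transform(X)  # distances to centers
    U = np.exp(-d**2 / d.var())            # crude soft memberships
    U /= U.sum(axis=1, keepdims=True)
    print(k, round(decision_risk(U), 2))   # smaller risk suggests better cluster structure
```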
The purpose of this work is to provide theoretical foundations of, as well as some computational aspects of, a theory for analysing decisions under risk when the available information is vague and imprecise. Many approaches to modelling imprecise information, e.g., by using interval methods, have prevailed. However, such representation models are unnecessarily restrictive since they do not admit discrimination between beliefs in different values, i.e., the epistemologically possible values have equal weights. In many situations, for instance when the underlying information results from learning techniques based on variance analyses of statistical data, the expressibility must be extended for a more perceptive treatment of the decision situation. Our contribution is an approach that enables a refinement of the representation model, allowing for a more elaborate discrimination of possible values by using belief distributions with weak restrictions. We show how to derive admissible classes of local distributions from sets of global distributions and introduce measures expressing to what extent explicit local distributions can be used for modelling decision situations. As will turn out, the result is a theory with very attractive features from a computational viewpoint.
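A small numerical contrast may help fix ideas: a plain interval only yields worst- and best-case expected utilities, while a belief distribution over the same interval discriminates between values and induces a distribution over expected utility. The numbers and the triangular belief shape below are illustrative assumptions, not taken from the paper.

```python
# Interval representation vs. a belief-distribution refinement (toy numbers).
import numpy as np

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.3, 0.2])                        # known outcome probabilities
lo, hi = np.array([10, 0, -5]), np.array([30, 15, 5])    # interval-valued utilities

# Interval representation: every value in [lo, hi] is equally "possible",
# so only worst/best expected utilities can be reported.
print("EU bounds:", probs @ lo, probs @ hi)

# Belief-distribution refinement: triangular beliefs peaked at assumed modes
# discriminate between values and yield a distribution over expected utility.
mode = np.array([25, 5, 0])
samples = rng.triangular(lo, mode, hi, size=(100_000, 3))
eu = samples @ probs
print("EU 5th/50th/95th percentile:", np.percentile(eu, [5, 50, 95]).round(2))
```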
We introduce a method of analyzing entanglement-enhanced quantum games on regular lattices of agents. Our method is valid for setups with periodic and non-periodic boundary conditions. To demonstrate our approach we study two different types of games, namely the Prisoner's Dilemma game and a cooperative Parrondo's game. In both cases we obtain results showing that entanglement is a crucial resource necessary for the agents to achieve a positive capital gain.
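For readers unfamiliar with entanglement-enhanced games, the standard single-pair Eisert-Wilkens-Lewenstein (EWL) Prisoner's Dilemma conveys the role of entanglement: with maximal entanglement, defecting against the "quantum" strategy Q no longer pays. The sketch below is that textbook two-player construction, not the lattice model of the paper.

```python
# Two-player EWL-style quantum Prisoner's Dilemma (a standard single-pair
# illustration, not the paper's lattice setup).
import numpy as np

I2 = np.eye(2)
C = I2                                   # cooperate
D = np.array([[0, 1], [-1, 0]], float)   # defect
Q = np.array([[1j, 0], [0, -1j]])        # the "quantum" strategy i*sigma_z
PAYOFF_A = np.array([3, 0, 5, 1])        # Alice's payoffs for CC, CD, DC, DD

def payoff(gamma, UA, UB):
    """Alice's expected payoff for strategies UA, UB at entanglement gamma."""
    DD = np.kron(D, D)
    J = np.cos(gamma / 2) * np.eye(4) + 1j * np.sin(gamma / 2) * DD  # entangling gate
    psi0 = np.zeros(4); psi0[0] = 1.0                                # |CC>
    psi = J.conj().T @ np.kron(UA, UB) @ J @ psi0
    return float(PAYOFF_A @ np.abs(psi) ** 2)

for g in (0.0, np.pi / 2):                             # no vs. maximal entanglement
    print(f"gamma={g:.2f}:",
          f"(D,D)={payoff(g, D, D):.1f}",
          f"(Q,Q)={payoff(g, Q, Q):.1f}",
          f"(D vs Q)={payoff(g, D, Q):.1f}")
```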
There is much empirical evidence that human decision-making under risk does not coincide with expected value maximization, and much effort has been invested in the development of descriptive theories of human decision-making involving risk (e.g., Prospect Theory). An open question is how behavior corresponding to these descriptive models could have been learned or arisen evolutionarily, given that the described behavior differs from expected value maximization. We believe that the answer to this question lies, at least in part, in the interplay between risk-taking, sequentiality of choice, and population dynamics in evolutionary environments. In this paper, we provide the results of several evolutionary game simulations designed to study the risk behavior of agents in evolutionary environments. These include several evolutionary lottery games where sequential decisions are made between risky and safe choices, and an evolutionary version of the well-known stag hunt game. Our results show how agents that are sometimes risk-prone and sometimes risk-averse can outperform agents that make decisions solely based on the maximization of the local expected values of the outcomes, and how this can facilitate the evolution of cooperation in situations where cooperation entails risk.
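The flavor of such lottery simulations can be sketched with a toy model in which wealth evolves multiplicatively and agents choose each round between a safe multiplier and a risky one with higher expected value. All parameters and strategy rules below are invented for illustration; they are not the paper's experimental setup.

```python
# Toy sequential lottery simulation with multiplicative wealth dynamics
# (parameters and strategy rules are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
SAFE, RISKY_UP, RISKY_DOWN, P_UP = 1.05, 2.1, 0.5, 0.5   # risky EV = 1.3 > safe 1.05

def run(choose_risky, agents=1000, rounds=60):
    """Each round, every agent picks the safe or risky multiplier; wealth compounds."""
    wealth = np.ones(agents)
    for _ in range(rounds):
        risky = choose_risky(wealth, rng)
        mult = np.where(risky,
                        np.where(rng.random(agents) < P_UP, RISKY_UP, RISKY_DOWN),
                        SAFE)
        wealth *= mult
    return wealth

strategies = {
    "EV-maximizer (always risky)":   lambda w, r: np.ones_like(w, bool),
    "risk-averse (always safe)":     lambda w, r: np.zeros_like(w, bool),
    "mixed (risky while wealth < 2)": lambda w, r: w < 2.0,
}
for name, s in strategies.items():
    w = run(s)
    print(f"{name:32s} median wealth {np.median(w):8.2f}  ruined(<0.1) {np.mean(w < 0.1):.2f}")
```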
One of the most complex systems is the human brain, whose formalized functioning is characterized by decision theory. We present a "Quantum Decision Theory" of decision-making, based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intentions, which allows us to explain a variety of interesting fallacies and anomalies that have been reported to characterize the decision-making of real human beings. The theory describes entangled decision-making, non-commutativity of subsequent decisions, and intention interference of composite prospects. We demonstrate how the violation of Savage's sure-thing principle (the disjunction effect) can be explained as a result of the interference of intentions when making decisions under uncertainty. The conjunction fallacy is also explained by the presence of the interference terms. We demonstrate that all known anomalies and paradoxes documented in the context of classical decision theory are reducible to just a few mathematical archetypes, all of which admit straightforward explanations within the framework of the developed quantum approach.
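In simplified notation (the full operator formalism is in the paper), the interference mechanism can be summarized by the decomposition of a prospect's choice probability into a classical utility factor and an interference term:

```latex
% Prospect probability = utility factor + interference (attraction) term;
% interference terms sum to zero over the set of prospects.
p(\pi_j) = f(\pi_j) + q(\pi_j), \qquad
\sum_j f(\pi_j) = 1, \qquad \sum_j q(\pi_j) = 0, \qquad 0 \le p(\pi_j) \le 1 .
```

A negative interference term for an uncertain composite prospect lowers its choice probability below the classical value, which is how the disjunction effect and the conjunction fallacy are accommodated.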
The idea is advanced that self-organization in complex systems can be treated as decision making (as it is performed by humans) and, vice versa, that decision making is nothing but a kind of self-organization in the decision maker's nervous system. A mathematical formulation is suggested, based on the definition of probabilities of system states, whose particular cases characterize the probabilities of structures, patterns, scenarios, or prospects. In this general framework, it is shown that the mathematical structures of self-organization and of decision making are identical. This makes it clear how self-organization can be seen as an endogenous decision-making process and, reciprocally, how decision making occurs via an endogenous self-organization. The approach is illustrated by phase transitions in large statistical systems, crossovers in small statistical systems, evolutions and revolutions in social and biological systems, structural self-organization in dynamical systems, and the probabilistic formulation of classical and behavioral decision theories. In all these cases, self-organization is described as the process of evaluating the probabilities of macroscopic states or prospects in the search for a state with the largest probability. The general way of deriving the probability measure for classical systems is the principle of minimal information, that is, the maximization of conditional entropy under given constraints. Behavioral biases of decision makers can be characterized as analogous to quantum fluctuations in natural systems.
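The "principle of minimal information" invoked above is the standard constrained entropy maximization; in its simplest textbook form (not necessarily the authors' exact notation) it yields a Gibbs-type probability measure over states:

```latex
% Maximize the entropy subject to normalization and an average constraint;
% the Lagrangian stationarity condition gives a Gibbs-type measure.
\max_{\{p_n\}} \; S = -\sum_n p_n \ln p_n
\quad \text{s.t.} \quad \sum_n p_n = 1, \qquad \sum_n p_n E_n = \langle E \rangle ,
\qquad \Longrightarrow \qquad
p_n = \frac{e^{-\beta E_n}}{\sum_m e^{-\beta E_m}} .
```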
Using established techniques from operations research (linear programming) and well-known measures of probability and value, we present a computational representation and evaluation of imperfect, imprecise user statements in decision analysis. The presentation starts with the structure of a decision problem, and a model of the situation is discussed. The courses of action are represented by consequence sets in the decision structure. Statements may take an interval form to reflect the translation of the imperfect input data.
Statistical decision theory is utilized to discriminate between alternatives. Dealing with imprecise statements means encountering decision situations where different alternatives are preferred in different parts of the solution space consistent with the constraints. Consequently, selection rules based on traditional admissibility are not sufficient to indicate preferred choices. New admissibility concepts and a procedure for analyzing such situations are discussed. An algorithm for optimizing bilinear functions that does not even require linear programming is presented.
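A minimal sketch of the kind of linear-programming evaluation described above: expected-value bounds for a single alternative whose outcome probabilities are only stated as intervals. The numbers are invented for illustration, and this simple bound computation is only one building block of the admissibility analysis.

```python
# Expected-value bounds for an alternative whose outcome probabilities are
# constrained to intervals and must sum to one (illustrative numbers).
import numpy as np
from scipy.optimize import linprog

values = np.array([100.0, 20.0, -50.0])          # consequence values
bounds = [(0.4, 0.7), (0.1, 0.4), (0.0, 0.3)]    # interval statements on p1..p3
A_eq, b_eq = np.ones((1, 3)), [1.0]              # probabilities sum to one

lo = linprog(c=values,  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
hi = linprog(c=-values, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("expected value lies in [", round(lo.fun, 2), ",", round(-hi.fun, 2), "]")
```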
There is an extensive literature on decision making under uncertainty. Unfortunately, to date there are no fully satisfactory decision principles. Experimental evidence has repeatedly shown that the widely used principle of maximization of expected utility has serious shortcomings. The utility functions and nonadditive measures used in nonexpected utility models are mainly considered as real-valued functions, whereas in reality decision-relevant information is imprecise and therefore described in natural language. This applies, in particular, to imprecise probabilities expressed by terms such as likely, unlikely, probable, etc. The principal objective of the paper is the development of computationally effective methods of decision making with imprecise probabilities. We present representation theorems for a nonexpected fuzzy utility function under imprecise probabilities. We develop an effective decision theory for environments in which fuzzy events, fuzzy states, fuzzy relations and fuzzy constraints are characterized by imprecise probabilities. The suggested methodology is applied to a real-life decision-making problem.
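For orientation, nonexpected utility models with a nonadditive measure are typically built on the discrete Choquet integral; the fuzzy-utility, imprecise-probability treatment in the paper generalizes this crisp baseline. With outcomes ordered so that the utilities are decreasing:

```latex
% Discrete Choquet expected utility with a nonadditive measure (capacity) \nu,
% outcomes ordered so that u(x_1) \ge u(x_2) \ge \dots \ge u(x_n).
\mathrm{CEU}(f) \;=\; \sum_{i=1}^{n} u(x_i)\,\bigl[\nu(A_i) - \nu(A_{i-1})\bigr],
\qquad A_i = \{x_1,\dots,x_i\},\quad A_0 = \varnothing .
```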
The influence of additional information on the decision making of agents who are interacting members of a society is analyzed within a mathematical framework based on the use of quantum probabilities. The introduction of social interactions, which influence the decisions of individual agents, leads to a generalization of the quantum decision theory (QDT) developed earlier by the authors for separate individuals. The generalized approach is free of the standard paradoxes of classical decision theory. It also explains the attenuation of these paradoxes observed when decision makers who are members of a society consult with each other, thereby increasing the available mutual information. A precise correspondence between QDT and classical utility theory is formulated via the introduction of an intermediate probabilistic version of utility theory of a novel form, which obeys the requirement that zero-utility prospects have zero probability weights.
Cognitive experiments indicate the presence of discontinuities in brain dynamics during high-level cognitive processing. The nonlinear theory of brain dynamics pioneered by Freeman explains the experimental findings through the theory of metastability and edge-of-criticality in cognitive systems, which are key properties associated with robust operation and fast, reliable decision making. Recently, neuropercolation has been proposed to model such critical behavior. Neuropercolation is a family of probabilistic models based on the mathematical theory of bootstrap percolation on lattices and random graphs, motivated by structural and dynamical properties of neural populations in the cortex. Neuropercolation exhibits phase transitions and provides a novel mathematical tool for studying the spatio-temporal dynamics of multi-stable systems. The present work reviews the theory of cognitive phase transitions based on neuropercolation models and outlines the implications for decision making in brains and in artificial designs.
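The underlying bootstrap-percolation mechanism is easy to demonstrate: on a lattice, a site activates once enough of its neighbours are active, and the final activity shows a sharp dependence on the initial seed density. The sketch below is a bare-bones 2D version with periodic boundaries and invented parameters, not the neuropercolation models reviewed in the paper.

```python
# Minimal bootstrap percolation on a 2D lattice with periodic boundaries:
# a site activates once at least `THETA` of its 4 neighbours are active.
import numpy as np

rng = np.random.default_rng(42)
N, THETA = 100, 2                                # lattice size, activation threshold

def bootstrap(p0):
    active = rng.random((N, N)) < p0             # random initial seeds
    while True:
        nbrs = (np.roll(active, 1, 0).astype(int) + np.roll(active, -1, 0)
                + np.roll(active, 1, 1) + np.roll(active, -1, 1))
        new = active | (nbrs >= THETA)
        if np.array_equal(new, active):          # fixed point reached
            return new
        active = new

for p0 in (0.03, 0.05, 0.08, 0.12):
    print(f"seed density {p0:.2f} -> final active fraction {bootstrap(p0).mean():.2f}")
```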
Clues from psychology indicate that human cognition is not based solely on classical probability theory, as described by Kolmogorov's axioms, but also on quantum probability. Quantum probabilities lead to the conclusion that our brain adapted to the Everett many-worlds reality through the evolutionary process. The Everett many-worlds theory views reality as a many-branched tree in which every possible quantum outcome is realized. In this context, one of the cognitive functions of the brain is to provide a causally consistent explanation of events in order to maintain self-identity over time. Causality is related to a meaningful explanation. For impossible explanations, causality does not exist, and the identity of the self breaks down. Only in meaningful causal worlds may personal identities exist.
Cloud computing has been growing rapidly over the past few years, although it still represents a small part of organizations' overall Information Technology (IT) spending. Both the private and public sectors are embracing cloud computing because it offers an innovative computing model, simplifying IT resource management and offering cost savings and flexible scaling. The question is no longer whether to adopt cloud computing, but what should be adopted and how. Transaction cost economics offers a rationale for adoption, and decision-making theory helps construct stages for adopting and operating cloud computing to provide effective and optimal IT solutions for organizations. This paper offers decision makers an overview of cloud computing, especially with regard to utilizing the values it offers and selecting the resources or operations that can be migrated to the cloud.
This paper describes the use of syntactical data fusion for computing possibilistic qualitative decisions. More precisely, qualitative possibilistic decisions can be viewed as a data fusion problem over two particular possibility distributions: the first representing the beliefs of an agent and the second representing the qualitative utility. We show that the computation of optimal optimistic decisions is equivalent to computing an inconsistency degree of the possibilistic bases representing the fusion of the agent's beliefs and the agent's preferences. An algorithm for computing optimistic decisions based on data fusion techniques is presented. This gives an alternative approach to a recent solution based on ATMS (Assumption-based Truth Maintenance Systems).
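For intuition, the optimistic qualitative criterion that such algorithms compute can be stated semantically as u*(d) = max over states of min(pi_d(state), mu(state)). The sketch below evaluates it directly on a toy example (states, decisions and grades are invented); the paper's contribution is to compute the same quantity syntactically via fusion of possibilistic bases.

```python
# Optimistic possibilistic utility u*(d) = max_s min(pi_d(s), mu(s)) on toy data.
states = ["s1", "s2", "s3"]
mu = {"s1": 0.2, "s2": 0.9, "s3": 0.6}                 # qualitative utility of states

# pi[d][s]: agent's beliefs about the state if decision d is taken
pi = {
    "d1": {"s1": 1.0, "s2": 0.3, "s3": 0.7},
    "d2": {"s1": 0.4, "s2": 1.0, "s3": 0.2},
}

def optimistic_utility(d):
    return max(min(pi[d][s], mu[s]) for s in states)

for d in pi:
    print(d, optimistic_utility(d))
print("optimal optimistic decision:", max(pi, key=optimistic_utility))
```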
The mathematical foundations of statistics as a separate discipline were laid by Fisher, Neyman and Wald during the second quarter of the last century. Subsequent research in statistics and the courses taught in universities are mostly based on the guidelines set by these pioneers. Statistics is used in some form or other in all areas of human endeavor, from scientific research to the optimal use of resources for social welfare, prediction and decision-making. However, there are controversies in statistics, especially in the choice of a model for data, the use of prior probabilities and subject-matter judgments by experts. The same data analyzed by different consulting statisticians may lead to different conclusions.
What is the future of statistics in the present millennium, dominated by information technology encompassing the whole of communications, interaction with intelligent systems, massive databases, and complex information processing networks? The current statistical methodology, based on simple probabilistic models developed for the analysis of small data sets, appears inadequate to meet customers' needs for quick online processing of data and making the information available for practical use. Some methods are being put forward in the name of data mining for such purposes. A broad review of the current state of the art in statistics, its merits and demerits, and possible future developments will be presented.
If pattern generation is viewed as a stochastic process, which usually is appropriate for optical character recognition, speech recognition, and similar tasks, pattern classification can be based on the idea of function approximation. Starting from decision theory, it is shown in this chapter that minimum error decisions are obtained by the classifier if it approximates the a posteriori probabilities ruling the pattern source (Bayesian approach). Since for practical problems the properties of the stochastic pattern source are only implicitly given by the learning set, this set should truly represent the data population underlying the recognition task. Furthermore, the approximating functions should be taken from a family of functions which have — at least in principle — the power of approximating any desirable function (universal approximator). This contribution discusses in some detail the important paradigms for classification based on function approximation: polynomial classifier (including the linear approach) and multilayer perceptron, which are both global approximation schemes; radial basis functions, which are local approximators similar to nearest neighbor techniques; and finally classifiers which are based on certain distribution assumptions. All these paradigms, including different variants, are introduced based on a common notation, and special characteristics of the different approaches are pointed out using the classification of handwritten digits as a practical example.
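A compact way to see the function-approximation view in action is a minimum-squared-error linear (degree-1 "polynomial") classifier whose outputs approximate the a posteriori probabilities, with the decision taken as the argmax. The sketch below, on scikit-learn's small digits set, is a minimal illustration of that idea, not the chapter's own experiments.

```python
# Least-squares linear classifier approximating a posteriori probabilities on
# handwritten digits; the class decision is the argmax of the approximation.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = np.hstack([np.ones((len(X), 1)), X / 16.0])          # bias term + feature scaling
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

T = np.eye(10)[ytr]                                      # one-hot targets (samples of P(class|x))
W, *_ = np.linalg.lstsq(Xtr, T, rcond=None)              # minimum-squared-error solution
post = Xte @ W                                           # approximate posterior probabilities
pred = post.argmax(axis=1)                               # minimum-error (Bayes-style) decision
print("test accuracy:", round((pred == yte).mean(), 3))
```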