This study proposes an agent-based quantum-like model to investigate individual selection among three or more lotteries, incorporating risk and uncertainty in decision making. We extend classical expected utility functions with quantum probabilities and construct a compound belief state to compare the belief state of one specific lottery against the others. The decision-making process involved is represented formally by a comparison operator, which can be decomposed into a few subprocesses. We give an example of individual selection from three lotteries to illustrate the model. Finally, we propose ways to select from more than three lotteries.
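A minimal sketch of the quantum-like ingredient, under stated assumptions: the belief state over three lotteries is a normalized complex amplitude vector, selection probabilities follow the Born rule, and the comparison of one lottery against the rest is modeled as a projective measurement. The amplitudes and interference phase below are illustrative values, not the paper's construction.

```python
import numpy as np

# Belief state over three lotteries as complex amplitudes (illustrative values).
# A relative phase encodes interference between beliefs, the quantum-like
# ingredient that classical probability lacks.
amplitudes = np.array([0.6, 0.5 * np.exp(1j * np.pi / 4), 0.62], dtype=complex)
psi = amplitudes / np.linalg.norm(amplitudes)      # normalize the belief state

# Born rule: probability of selecting lottery i is |<i|psi>|^2.
selection_probs = np.abs(psi) ** 2
print(selection_probs, selection_probs.sum())      # probabilities sum to 1

# Comparing lottery 0 against the rest, modeled as a projective measurement
# onto the subspace spanned by |0>.
P0 = np.zeros((3, 3)); P0[0, 0] = 1.0
prob_prefer_0 = np.real(np.conj(psi) @ P0 @ psi)
print(prob_prefer_0)
```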
This paper provides a literature review addressing the use of soft sets in medical diagnosis. Distinguishing itself from the existing literature, the study offers a comprehensive analysis of how fuzzy soft sets can be integrated into diagnostic processes, highlighting a novel fusion of fuzzy and soft sets in medical applications. Any soft set on a countably infinite universe can be regarded as a fuzzy set. Recognizing the limitations of traditional diagnostic tools in dealing with vague and incomplete information, our research aims to exploit the flexibility and comprehensiveness of fuzzy soft sets to enhance decision-making accuracy in medical scenarios. The primary objective of this research is to present a thorough and critical analysis of fuzzy soft set theory in medical diagnosis, aiming to establish it as a fundamental approach in the field. By combining these results, we can build a comprehensive picture of the connections between the many theories that account for fuzziness and imprecision, which helps to fill the gaps left by recent surveys. The review focuses on identifying the main research trends in this field: the primary research topics that soft set theory addresses in medical applications, the challenges the community currently faces, and the major theoretical concepts used to study these topics. It also aims to assist in identifying emerging research directions. Some of these trends are promising and will help shape a new role for soft set theory in the area of medical applications. The fusion of fuzzy and soft sets in medical applications represents a crucial and necessary stage in research and diagnosis procedures. Key results include the development of algorithms and models that outperform traditional methods in accuracy and reliability.
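As a concrete illustration of the kind of fuzzy-set diagnosis such reviews cover, here is a minimal sketch of a Sanchez-style max-min composition between a patient's fuzzy symptom vector and a symptom-disease membership matrix; the symptoms, diseases, and membership values are made up for illustration.

```python
import numpy as np

# Membership of each symptom under each disease (rows: symptoms,
# cols: diseases). Values are illustrative, not clinical.
symptoms = ["fever", "cough", "headache"]
diseases = ["flu", "cold"]
R = np.array([[0.9, 0.4],
              [0.7, 0.8],
              [0.6, 0.3]])

# Patient's fuzzy symptom vector: degree to which each symptom is present.
Q = np.array([0.8, 0.6, 0.9])

# Sanchez-style max-min composition T = Q o R: for each disease, take the
# max over symptoms of min(patient degree, symptom-disease membership).
T = np.max(np.minimum(Q[:, None], R), axis=0)
diagnosis = diseases[int(np.argmax(T))]
print(dict(zip(diseases, T)), "->", diagnosis)
```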
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike-count and spike-latency codes. The multi-valued and continuous-valued features of the postsynaptic code allow binary decision making to be generalized to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for discrete classification and for continuous regression tasks. The suggested learning rules also become faster with increasing population size, in contrast to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to classical weight or node perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning compared to exploration in the neuron or weight space.
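A toy sketch of action perturbation for continuous action selection, under strong simplifying assumptions: the population readout is a plain linear map, the reward is a hypothetical quadratic function with a made-up target, and the update is a reward-baseline-weighted step toward rewarded perturbations. It is not the paper's spike-based plasticity rule, only the exploration idea.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)                       # readout weights of the population
x = rng.normal(size=4)                # fixed population activity (e.g., spike counts)
target = 1.5                          # unknown optimal action (illustrative)

def reward(a):
    return -(a - target) ** 2

baseline, eta, sigma = reward(w @ x), 0.1, 0.3
for _ in range(200):
    a = w @ x                         # nominal continuous action
    xi = rng.normal(0, sigma)         # perturb the ACTION, not weights or nodes
    r = reward(a + xi)
    # shift weights so the nominal action moves toward rewarded perturbations
    w += eta * (r - baseline) * xi * x / (x @ x)
    baseline += 0.05 * (r - baseline) # running reward baseline
print(w @ x)                          # approaches the target action
```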
Tramway transit has an important place within the public transportation system of Belgrade. However, due to the very unfavorable age structure, the bad condition of tramway tracks and infrastructure, and a maintenance system that requires significant advancement, Belgrade tramways are in a very poor state of repair, so transport requirements are not properly met. The principal task of the analysis presented in this paper is to recognize and estimate the justifiability of investment in various solutions for revitalizing the Belgrade tramway rolling stock. We have chosen an approach to decision making that differs somewhat from the usual one, applying a combination of cost-benefit, life-cycle cost and multi-criteria analyses.
Fluid contamination is one of the main reasons for wear failure and the related downtime in a hydraulic power system. Filters play an important role in controlling contamination effectively, increasing the reliability of the system, and maintaining the system economically. Due to the uncertainties of system parameters, the complicated relationships among components, and the lack of an effective approach, managing filters is becoming one of the biggest challenges for engineers and decision makers. In this study, a robust interval-based minimax-regret analysis (RIMA) method is developed for filter management in a fluid power system (FPS) under uncertainty. The RIMA method can handle the uncertainties that exist in contaminant ingression into the system and in the contaminant-holding capacity of filters without making assumptions about probability distributions for random variables. By analyzing the system cost of all possible filter-management alternatives, an interval-element regret matrix can be obtained, which enables decision makers to identify the optimal filter-management strategy under uncertainty. The results of a case study indicate that the solutions generated can help decision makers understand the consequences of short-term and long-term decisions and identify optimal strategies for filter allocation and selection with minimized system-maintenance cost and system-failure risk.
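A simplified sketch of interval-based minimax regret, not the full RIMA formulation: each alternative's cost per scenario is an interval, the worst-case regret compares an alternative's upper-bound cost with the best lower-bound cost in that scenario, and the chosen alternative minimizes the maximum regret. All numbers are illustrative.

```python
import numpy as np

# Interval costs for three filter-management alternatives under two scenarios:
# lo/hi hold the lower/upper bounds of total system cost (illustrative).
lo = np.array([[10, 14], [12, 11], [15,  9]], dtype=float)
hi = np.array([[13, 18], [16, 13], [19, 12]], dtype=float)

# Worst-case regret of alternative i in scenario s: its upper-bound cost
# minus the best (lowest) lower-bound cost achievable in that scenario.
best_lo = lo.min(axis=0)
regret = hi - best_lo                 # interval-element regret matrix (upper end)
minimax_choice = int(regret.max(axis=1).argmin())
print(regret, "choose alternative", minimax_choice)
```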
When decisions are made under uncertainty (DMUU), the decision maker either has at their disposal an interval of possible profits for each alternative (interval DMUU) or a discrete set of payoffs for each decision, where the profit associated with a given alternative depends on the state of nature (scenario DMUU). Existing methods used to rank decisions in the second setting take into consideration, to varying extents, how the particular profits assigned to alternatives are ordered in the payoff matrix and what the position of a given outcome is in comparison with other outcomes for the same state of nature. The author proposes and describes several alternative procedures that connect the structure of the payoff matrix with the selected decision. These methods are tailored to the purpose and the nature of the decision maker. They refer to Savage's approach, to the maximin joy criterion, to the normalization technique and to some elements used in expected utility maximization and prospect theory.
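For readers unfamiliar with scenario DMUU rankings, here is a minimal sketch of two classical criteria the paper builds on, Wald's maximin and Savage's minimax regret, applied to an illustrative payoff matrix.

```python
import numpy as np

# Payoff matrix: rows = alternatives, columns = states of nature (illustrative).
payoff = np.array([[ 7,  3,  9],
                   [ 5,  6,  4],
                   [10,  1,  2]], dtype=float)

# Wald's maximin: pick the alternative with the best worst-case payoff.
maximin = payoff.min(axis=1).argmax()

# Savage: regret is the column-wise shortfall from the best payoff in each
# state; pick the alternative minimizing the maximum regret.
regret = payoff.max(axis=0) - payoff
savage = regret.max(axis=1).argmin()
print("maximin ->", int(maximin), "| Savage ->", int(savage))
```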
Balanced colorings of networks classify robust synchrony patterns — those that are defined by subspaces that are flow-invariant for all admissible ODEs. In symmetric networks, the obvious balanced colorings are orbit colorings, where colors correspond to orbits of a subgroup of the symmetry group. All other balanced colorings are said to be exotic. We analyze balanced colorings for two closely related types of network encountered in applications: trained Wilson networks, which occur in models of binocular rivalry, and opinion networks, which occur in models of decision making. We give two examples of exotic colorings which apply to both types of network, and prove that Wilson networks with at most two learned patterns have no exotic colorings. We discuss in general terms how exotic colorings affect the existence and stability of branches for local bifurcations of the corresponding model ODEs, both to equilibria and to periodic states.
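A minimal sketch of the balance condition for the simplest setting, a directed network with a single edge type: a coloring is balanced when nodes of the same color receive the same multiset of colors over their in-edges. The example graph and coloring are illustrative; general admissible networks with multiple node and edge types need a finer check.

```python
from collections import Counter

# (source, target) edges and a two-color candidate coloring (illustrative).
edges = [(1, 2), (3, 2), (2, 1), (2, 3)]
coloring = {1: "a", 2: "b", 3: "a"}

def is_balanced(edges, coloring):
    # Tally the colors arriving on each node's in-edges.
    in_colors = {v: Counter() for v in coloring}
    for s, t in edges:
        in_colors[t][coloring[s]] += 1
    # Nodes of the same color must see identical input color multisets.
    reference = {}
    for v, c in coloring.items():
        if c in reference and reference[c] != in_colors[v]:
            return False
        reference.setdefault(c, in_colors[v])
    return True

print(is_balanced(edges, coloring))   # -> True: nodes 1 and 3 match
```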
The process of engineering software-intensive systems that comply with their Certification and Accreditation (C&A) requirements involves many critical decision-making activities for the stakeholders involved. Given the exhaustive nature of C&A activities and the complexity of software-intensive systems, effective decision making relies heavily on ways to understand and structure the problem-domain concepts concerning decision points for the interpretation, applicability, scope, evaluation, and impact of the enforced C&A requirements. These decision points are further complicated by natural-language specifications of inherently non-functional C&A requirements scattered across multiple regulatory documents, with complex interdependencies at different levels of abstraction in the organizational hierarchy, which often result in subjective interpretations and non-standard implementations of the C&A process. To address these issues, we define a systematic methodology using novel techniques from software Requirements Engineering (RE) and knowledge engineering for understanding and structuring the problem-domain concepts based on a uniform representation format that promotes common understanding among stakeholders. Specifically, we use advanced ontological-engineering techniques driven by theoretical RE foundations to systematically elicit, model, understand, and analyze problem-domain concepts concerning significant and difficult decision points throughout the C&A process. We demonstrate the appropriateness of our methodology by creating a decision-support problem-domain ontology, using several examples derived from our experience in automating the Department of Defense Information Technology Security C&A Process (DITSCAP).
Coronavirus Disease 2019 (COVID-19) is a zoonotic illness that has spread rapidly and widely since December 2019 and was declared a global pandemic by the World Health Organization. The pandemic to date has been characterized by ongoing cluster community transmission. Quarantine interventions to prevent and control transmission are expected to have a substantial impact on delaying the growth and mitigating the size of the epidemic. To the best of our knowledge, our study is among the initial efforts to analyze the interplay between transmission dynamics and quarantine intervention in a COVID-19 outbreak within a cluster community. In this paper, we propose a novel Transmission-Quarantine epidemiological model based on a system of nonlinear ordinary differential equations. Using detailed epidemiologic data from the cruise ship "Diamond Princess", we design a Transmission-Quarantine workflow to determine the optimal case-specific parameters, and validate the proposed model by comparing the simulated curve with the real data. First, we apply a general SEIR-type epidemic model to study the transmission dynamics of COVID-19 without quarantine intervention, and present analytic and simulation results for epidemiological quantities such as the basic reproduction number, the maximal number of infectious cases, the instantaneous number of recovered cases, the prevalence level and the final size of the epidemic. Second, we adopt the proposed Transmission-Quarantine interplay model to predict the trend of COVID-19 under quarantine intervention, and compare the transmission dynamics with and without quarantine to illustrate the effectiveness of the quarantine measure: with quarantine intervention, the number of infectious cases within 7 days decreases by about 60% compared with the no-intervention scenario. Finally, we conduct a sensitivity analysis to simulate the impacts of different parameters and different quarantine measures, and identify the optimal quarantine strategy that decision makers can use to achieve maximal protection of the population with minimal interruption of economic and social development.
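A minimal sketch of an SEIR model extended with a simple quarantine compartment, in the spirit of (but not identical to) the paper's Transmission-Quarantine model: a fraction q of infectious individuals is isolated per day and no longer transmits. The parameter values are illustrative placeholders, not the fitted Diamond Princess parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rates: transmission, incubation (1/5.2 d), recovery (1/10 d),
# and daily quarantine fraction q.
beta, sigma, gamma, q = 0.6, 1 / 5.2, 1 / 10, 0.3
N = 3700                                   # roughly Diamond Princess scale

def seirq(t, y):
    S, E, I, Q, R = y
    dS = -beta * S * I / N                 # quarantined cases do not transmit
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I - q * I
    dQ = q * I - gamma * Q
    dR = gamma * (I + Q)
    return [dS, dE, dI, dQ, dR]

sol = solve_ivp(seirq, (0, 60), [N - 1, 0, 1, 0, 0],
                t_eval=np.linspace(0, 60, 61))
print(f"peak infectious (with quarantine): {sol.y[2].max():.0f}")
```

Setting q = 0 recovers the plain SEIR baseline, so the with/without-quarantine comparison in the abstract amounts to running the same system twice.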
As the framework of probabilistic graphical models becomes increasingly popular for knowledge representation and inference, the need for efficient supporting tools is growing. The Hugin Tool is a general-purpose tool for the construction, maintenance, and deployment of Bayesian networks and influence diagrams. This paper surveys the key functionality of the Hugin Tool and reports on new advances in the tool. Furthermore, an empirical analysis reports on the efficiency of the Hugin Tool on common inference and learning tasks.
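To make the inference task concrete without assuming the Hugin API, here is a hand-rolled sketch of exact inference by enumeration on a tiny discrete Bayesian network; production tools such as the Hugin Tool instead use junction-tree propagation. The network and probabilities are illustrative.

```python
from itertools import product

# Tiny network: Rain -> WetGrass <- Sprinkler (illustrative probabilities).
P_rain = {True: 0.2, False: 0.8}
P_sprk = {True: 0.1, False: 0.9}
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}   # P(wet | rain, sprinkler)

def joint(r, s, w):
    # Chain rule for this structure: P(r) P(s) P(w | r, s).
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprk[s] * (pw if w else 1 - pw)

# Query: P(Rain = true | WetGrass = true), by summing out Sprinkler.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)
```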
Collaborative decision making is a core organizational activity that comprises a series of knowledge representation and processing tasks. Moreover, it is often carried out through argumentative discourses between the stakeholders involved. This paper exploits and elaborates on the synergy that occurs between the decision making and knowledge management processes in such contexts. The proposed multidisciplinary approach is supported by a web-based software tool. Being based on a well-defined ontology model, our approach facilitates decision makers in achieving a common understanding, while also enhancing collaboration and exploitation of organizational knowledge resources. Strategy development is the particular knowledge domain considered in this paper to demonstrate the applicability of the proposed tool.
Decision making in uncertain and dynamic domains is still a challenging research area. This paper explores a solution to such complex decision making based on a combined logic system. We explain our reasoning system with a focus on the algorithms and their implementations. The reasoning system is based on a multi-valued temporal propositional logic, which we use as the foundation for implementing simulation/prediction and query-answering tools. The system lets users represent knowledge, refine their knowledge bases to debug them, and try different problem-solving strategies. We provide examples to illustrate how the system can be used, including a problem based on a real smart environment.
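As a flavor of multi-valued propositional evaluation (not the paper's specific logic), here is a minimal sketch of strong Kleene three-valued connectives over {True, False, None}, with None standing for "unknown", and a query over a partially known smart-environment state.

```python
# Strong Kleene three-valued logic: None means "unknown".
def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))     # via De Morgan

# Query answering over a partial state: "door_open and not alarm".
state = {"door_open": True, "alarm": None}
print(k_and(state["door_open"], k_not(state["alarm"])))   # -> None (unknown)
```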
Diffusion geometry offers a fresh perspective on multi-scale information analysis, which is critical to multiagent systems that need to process massive data sets. A recent study has shown that when the "diffusion distance" concept is applied to human decision experiences, its performance on solution synthesis can be significantly better than that of Euclidean distance. However, as a data set grows over time, it can quickly exceed the processing capacity of a single agent. In this paper, we propose a multi-agent diffusion approach in which a massive data set is split into several subsets and each diffusion agent works with only one subset in the diffusion computation. We conducted experiments with different splitting strategies applied to a set of decision experiences. The results indicate that the multi-agent diffusion approach is beneficial, and that it is even possible to benefit from a larger group of diffusion agents if their subsets have common and pairwise-shared experiences. Our study also shows that system performance can be affected significantly by the splitting granularity (the size of each splitting unit). This study paves the way for applying the multi-agent diffusion approach to massive data analysis.
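A minimal single-agent sketch of the diffusion distance itself, on a small illustrative similarity graph: build the random-walk matrix, run it t steps, and compare the transition profiles of two nodes, weighted by the stationary distribution. The multi-agent splitting is omitted.

```python
import numpy as np

# Adjacency of a small similarity graph over decision experiences (illustrative).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)            # row-stochastic random walk

t = 3                                           # diffusion time scale
Pt = np.linalg.matrix_power(P, t)
pi = A.sum(axis=1) / A.sum()                    # stationary distribution

def diffusion_distance(i, j):
    # L2 distance between t-step transition profiles, weighted by 1/pi.
    return np.sqrt(np.sum((Pt[i] - Pt[j]) ** 2 / pi))

print(diffusion_distance(0, 1), diffusion_distance(0, 3))
```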
Medical reasoning describes a form of qualitative inquiry that examines the cognitive (thought) processes involved in making medical decisions. In this field, the goal of diagnostic reasoning is to assess the causes of observed conditions in order to make informed choices about treatment. To design a diagnostic reasoning method, we merge ideas from the hypothetic-deductive method and the Domino model, introducing the so-called Hypothetic-Deductive-Domino (HD-D) algorithm. In addition, a multi-agent approach is presented that takes advantage of the HD-D algorithm to illuminate different standpoints in a diagnostic reasoning and assessment process and to reach a well-founded conclusion. This multi-agent approach is based on so-called Observer and Validating agents. The Observer agents are supported by a deductive inference process, and the Validating agents by an abductive inference process. The knowledge bases of these agents are captured by a class of possibilistic logic programs, so the agents are able to deal with qualitative information. The approach is illustrated by a real scenario from the diagnosis of dementia diseases.
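A toy sketch of one hypothetic-deductive step, not the HD-D algorithm or its possibilistic logic programs: hypotheses deduce their expected findings, and an abductive scoring keeps the hypotheses whose predictions best cover (and least contradict) the observations. The knowledge base and findings are hypothetical placeholders.

```python
# Hypothetical disease -> expected-findings knowledge base (illustrative).
KB = {
    "alzheimer":  {"memory_loss", "disorientation"},
    "vascular":   {"memory_loss", "gait_problems"},
    "depression": {"low_mood", "memory_loss"},
}
observed = {"memory_loss", "disorientation"}

def score(hypothesis):
    predicted = KB[hypothesis]
    covered = len(predicted & observed)   # observations the hypothesis explains
    missed = len(predicted - observed)    # predicted findings not observed
    return covered - missed

ranked = sorted(KB, key=score, reverse=True)
print(ranked)   # 'alzheimer' covers both findings with nothing missed
```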
Although a Finite State Machine (FSM) makes it easy to implement the behaviors of Non-Player Characters (NPCs) in computer games, the behaviors become difficult to maintain and control as the number of states increases. Alternatively, the Behavior Tree (BT), a tree of hierarchical nodes that controls the flow of decision making, is widely used in computer games to address these scalability issues. This paper reviews the structure and semantics of BTs in computer games. Different techniques to automatically learn and build BTs, as well as the strengths and weaknesses of these techniques, are discussed. The paper provides a taxonomy of BT features and shows to what extent these features are taken into account in computer games. Finally, it shows how BTs are used in practice in the gaming industry.
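A minimal sketch of the standard BT core semantics the review describes: a Sequence succeeds only if all children succeed, a Selector succeeds on the first child that does, and leaves return one of three statuses. The NPC behavior at the bottom is a stub for illustration.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:          # fail/keep-running on first non-success
            status = child.tick(bb)
            if status != SUCCESS:
                return status
        return SUCCESS

class Selector:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:          # succeed/keep-running on first non-failure
            status = child.tick(bb)
            if status != FAILURE:
                return status
        return FAILURE

class Condition:
    def __init__(self, fn): self.fn = fn
    def tick(self, bb): return SUCCESS if self.fn(bb) else FAILURE

class Action:
    def __init__(self, fn): self.fn = fn
    def tick(self, bb): return self.fn(bb)

# NPC: attack if an enemy is visible, otherwise patrol (stub actions).
tree = Selector(
    Sequence(Condition(lambda bb: bb["enemy_visible"]),
             Action(lambda bb: SUCCESS)),    # attack
    Action(lambda bb: RUNNING),              # patrol
)
print(tree.tick({"enemy_visible": False}))   # -> running (patrolling)
```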
To address the class-imbalance problem in the operation monitoring data of wind turbine (WT) pitch connecting bolts, an improved Borderline-SMOTE oversampling method based on a "two-step decision" with adaptive selection of synthetic instances (TSDAS-SMOTE) is proposed. TSDAS-SMOTE is then combined with XGBoost to construct a WT pitch connection bolt fault detection model. TSDAS-SMOTE generates new samples by "two-step decision making" to avoid the class-boundary blurring that Borderline-SMOTE tends to cause when oversampling. In the first decision step, each fault-class sample examines the characteristics of its nearest-neighbor samples; if its characteristics differ from those of all its nearest neighbors, the sample is identified as interference and filtered out. The fault-class samples in the boundary zone are then used as synthetic instances to generate new samples adaptively, and the normal-class samples in the boundary zone are used to detect, based on minimum Euclidean distance, unqualified newly generated samples, which are eliminated. In the second decision step, since the first step removes some of the newly generated samples, the remaining fault-class samples, free of interference and boundary-zone samples, serve as synthetic instances to continue adaptively generating new samples. The result is a balanced data set with a clear class boundary zone, which is then used to train a WT pitch connection bolt fault detection model based on the XGBoost algorithm. The experimental results show that, compared with six popular oversampling methods such as Borderline-SMOTE, Cluster-SMOTE and K-means-SMOTE, the fault detection model constructed with the proposed oversampling method performs better in terms of missed alarm rate (MAR) and false alarm rate (FAR). It can therefore effectively detect faults in large WT pitch connection bolts.
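TSDAS-SMOTE itself is not publicly packaged, so as a sketch of the surrounding pipeline only, here is the plain Borderline-SMOTE baseline from imbalanced-learn combined with XGBoost on synthetic stand-in data; TSDAS-SMOTE would add its two-step filtering of interference samples and unqualified synthetic samples at the oversampling stage.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import BorderlineSMOTE
from xgboost import XGBClassifier

# Imbalanced stand-in for pitch-bolt monitoring data (~5% fault class).
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split to avoid leaking synthetic samples
# into evaluation.
X_bal, y_bal = BorderlineSMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = XGBClassifier(n_estimators=200, eval_metric="logloss")
model.fit(X_bal, y_bal)
print("test accuracy:", model.score(X_te, y_te))
```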
A fuzzy preference relation is a popular model for representing both individual and group preferences. However, what is often sought is a subset of alternatives that constitutes an ultimate solution to a decision problem. To arrive at such a final solution, individual and/or group choice rules may be employed. There is a wealth of such rules devised in the context of classical, crisp preference relations. Originally, most of the popular group decision-making rules were conceived for classical (crisp) preference relations (orderings) and were later extended to traditional fuzzy preference relations. Moreover, they often differ in their assumptions about the properties of the preference relations to be processed. In this paper we pursue the path towards a universal representation of such rules that provides an effective generalization of the classical rules for the fuzzy case. Moreover, it leads to a meaningful extension to linguistic preferences, in the spirit of the computing-with-words paradigm.
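One classical choice rule for fuzzy preference relations, shown as a minimal sketch: Orlovsky's non-dominance rule, which selects the alternatives least strictly dominated by any other. The relation values are illustrative; this is one instance of the family of rules the paper generalizes, not the paper's universal representation.

```python
import numpy as np

# Fuzzy preference relation: r[i, j] = degree to which alternative i is
# preferred to alternative j (illustrative values).
r = np.array([[0.0, 0.7, 0.6],
              [0.3, 0.0, 0.8],
              [0.4, 0.2, 0.0]])

# Strict preference of j over i: max(r[j, i] - r[i, j], 0).
S = np.maximum(r - r.T, 0.0)          # S[j, i]: degree j strictly dominates i

# Non-dominance degree ND(i) = 1 - max_j S[j, i]; choose the maximally
# non-dominated alternative(s).
nd = 1.0 - S.max(axis=0)
print(nd, "choose:", int(nd.argmax()))
```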
We focus on the problem of constructing decision functions to aid in the valuation of alternatives in uncertain decision making. We discuss different types of scales available for representing the payoffs associated with the alternatives, and consider the case in which the basic scale is ordinal. We augment this ordinal scale with an additional notion: one special element on the scale, which we call the denoted element. We name such a scale a Denoted Ordinal Scale (DOS). A DOS allows a binary partitioning of the basic ordinal scale that can be associated with differing semantics and used for various purposes. Here we focus on a binary partitioning whose semantics classifies payoffs as acceptable or not. This allows us to express information such as "A is preferred to B, but both are acceptable". We show how a DOS with this semantics can be used to construct sophisticated decision functions.
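A minimal sketch of the DOS idea under the acceptability semantics: an ordered list of grades plus one denoted element that splits the scale, so a comparison can report both the ordinal preference and the acceptability of each payoff. The grade names and threshold are illustrative, not the paper's decision functions.

```python
# Ordered grades plus a denoted element that splits the scale.
GRADES = ["very bad", "bad", "fair", "good", "very good"]
DENOTED = "fair"                     # every grade >= this one is acceptable

def acceptable(grade):
    return GRADES.index(grade) >= GRADES.index(DENOTED)

def compare(a, b):
    """Ordinal comparison plus the extra denoted-element information."""
    better = a if GRADES.index(a) >= GRADES.index(b) else b
    return better, acceptable(a), acceptable(b)

# "good is preferred to fair, but both are acceptable":
print(compare("good", "fair"))       # ('good', True, True)
```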
Assessing a set of alternatives under given evaluation criteria is difficult when prioritizing these alternatives, especially with a lack of precise information in an uncertain environment. Fuzzy numbers are usually applied to represent imprecise numerical measurements of different alternatives. In this study, statistical data are used to derive level (1-α,1-β) interval-valued fuzzy numbers to represent unknown alternative effectiveness scores; then, by using the compositional rule of inference and the signed distance to transform the fuzzy decision-making problem into a crisp one, one can conveniently obtain the order of the alternatives and hence the best alternative. The approach presented is computationally efficient, and its underlying concepts are simple and comprehensible. Using this extended generalized method, two cases of an organizational rapid-transit-system selection problem are presented as examples to illustrate the applicability of interval-valued fuzzy numbers and the ranking system for decision making. The key contribution of the method is the seamless integration of statistical data, interval-valued fuzzy numbers and the signed distance to analyze multicriteria decision-making problems. The innovation introduced in the model concerns the interval-valued fuzzy number, which is recognized as a determinant of the effectiveness score in the fuzzy relation matrix.
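A minimal sketch of signed-distance ranking for the simpler triangular case, where the signed distance of a triangular fuzzy number (a, b, c) from zero is (a + 2b + c)/4; the paper's level (1-α,1-β) interval-valued fuzzy numbers refine this. The scores are illustrative.

```python
# Signed distance of a triangular fuzzy number (a, b, c) from 0.
def signed_distance(a, b, c):
    return (a + 2 * b + c) / 4.0

# Fuzzy effectiveness scores of three transit alternatives (illustrative).
scores = {"alt1": (0.5, 0.7, 0.9),
          "alt2": (0.4, 0.8, 0.85),
          "alt3": (0.6, 0.65, 0.7)}

# Rank alternatives by crisp signed distance, best first.
ranking = sorted(scores, key=lambda k: signed_distance(*scores[k]), reverse=True)
print(ranking)
```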
Before implementing the design of a large engineering system, different design proposals are evaluated and ranked according to different criteria, such as safety, cost and technical performance. The experts' knowledge about these criteria is usually vague and/or incomplete, and the criteria may be quantitative or qualitative in nature. Therefore, preference modelling for the criteria may require different types of information, such as numerical and/or linguistic values (a non-homogeneous framework). However, in most evaluation processes the experts are forced to provide their scores in the same expression domain and on the same scale. The aim of this paper is to propose an evaluation model, based on multi-criteria decision analysis, that offers experts the possibility of expressing their knowledge in a non-homogeneous evaluation framework, so that they can provide their assessments in different domains and on different scales according to their knowledge and the nature of the criteria. To do so, we propose the use of fuzzy logic and the fuzzy linguistic approach to manage the uncertainty in the information provided by the experts.
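A minimal sketch of unifying a non-homogeneous framework: linguistic labels are mapped to triangular fuzzy numbers on [0, 1], numeric scores become crisp (degenerate) triangles, and everything is aggregated on the common domain via centroids. The term set, its fuzzy numbers, and the assessments are illustrative, not the paper's model.

```python
# Linguistic term set mapped to triangular fuzzy numbers on [0, 1] (illustrative).
TERMS = {"low": (0.0, 0.0, 0.5),
         "medium": (0.25, 0.5, 0.75),
         "high": (0.5, 1.0, 1.0)}

def to_fuzzy(assessment):
    if isinstance(assessment, str):      # linguistic label
        return TERMS[assessment]
    x = float(assessment)                # numeric score, as a crisp triangle
    return (x, x, x)

def centroid(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0

# One proposal scored on safety (linguistic), cost (numeric), performance:
assessments = ["high", 0.6, "medium"]
overall = sum(centroid(to_fuzzy(a)) for a in assessments) / len(assessments)
print(round(overall, 3))
```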