The inconsistency of judgments in the fuzzy Analytic Hierarchy Process (AHP) is a crucial issue. To make an appropriate decision, the inconsistency in the decision maker's (DM) judgments needs to be eliminated or reduced. This paper proposes two mathematical models to deal with inconsistency in fuzzy AHP. In the first model, the DM's judgments are modified while the preference order of the judgments remains unchanged. The second model allows the preference orders of judgments to be reversed. The proposed models aim to eliminate or reduce the inconsistency of fuzzy AHP by changing judgments, making fewer changes to highly certain judgments. Two examples solved by the proposed models are included for illustration.
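As background for the kind of inconsistency these models target, Saaty's classical consistency ratio for a crisp pairwise comparison matrix (the baseline notion that fuzzy AHP generalizes) can be sketched as follows; the judgment matrix here is illustrative only and this is not the paper's models:

```python
import numpy as np

# Saaty's random consistency index RI, indexed by matrix size n.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A):
    """CR = CI / RI[n], with CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lambda_max = np.linalg.eigvals(A).real.max()
    ci = (lambda_max - n) / (n - 1)
    return ci / RI[n]

# A perfectly consistent 3x3 judgment matrix (a_ij = w_i / w_j for w = (4, 2, 1)).
A = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(consistency_ratio(A))  # ~ 0.0; CR < 0.1 is conventionally acceptable
```

Both proposed models can be read as searching for minimal changes to the judgments that drive such an inconsistency measure down to an acceptable level.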
Gower plots provide a powerful graphical tool for detecting cardinal and ordinal inconsistencies in a pairwise preference matrix. However, there is no systematic way to help a decision maker reduce these inconsistencies. This paper develops a model that can assist in making a consistent decision. Gower plots are used to detect major inconsistencies, and a multi-objective program is then formulated to adjust both the cardinal inconsistencies and the preference changes, subject to the constraints of ordinal consistency.
Today, knowledge graphs (KGs) grow through enrichment and refinement methods, which can be achieved by correcting and completing the KG. The literature on KG completion is rich, but less attention has been paid to methods for KG error correction. Correction methods are divided into embedding and non-embedding methods; embedding correction methods, in which a KG is embedded into a vector space, have been introduced recently. Existing correction approaches focus on recognizing three types of errors: outliers, inconsistencies, and erroneous relations. One challenge is that most outlier correction methods can recognize only numeric outlier entities, and they do so by non-embedding methods. Inconsistency errors, on the other hand, are recognized during the knowledge extraction step, and existing methods in this field do not address recognizing them as a post-correction step using embedding methods. Likewise, new embedding techniques have not been used to correct erroneous relations. Since the errors of a KG are varied and no single method covers all of them, this paper proposes a new general correction method, called the correction tower, in which these three error types are corrected in three trays. A new configuration is suggested for the correction tower to solve the above challenges, and a new embedding method is proposed for each tray. Finally, the evaluation results show that the proposed correction tower can improve on existing KG error correction methods and that the proposed configuration outperforms previous results.
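For intuition about the embedding side, many KG embedding methods follow the translational scoring idea of TransE, where a triple (h, r, t) is plausible when the head vector plus the relation vector lands near the tail vector, so high-distance triples become candidates for the erroneous-relation tray. The toy sketch below is not the paper's correction tower, and the entity and relation vectors are made up by hand for illustration:

```python
import numpy as np

# Hand-picked toy embeddings, for illustration only.
E = {'Paris':  np.array([1.0, 0.0]),
     'France': np.array([1.0, 1.0]),
     'Berlin': np.array([0.0, 0.0])}
R = {'capital_of': np.array([0.0, 1.0])}

def transe_score(h, r, t):
    # Smaller ||vec(h) + vec(r) - vec(t)|| means a more believable triple.
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

print(transe_score('Paris', 'capital_of', 'France'))   # 0.0 -> plausible
print(transe_score('Berlin', 'capital_of', 'France'))  # 1.0 -> suspicious
```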
One of the main needs when dealing with multi-perspective specifications is to have at our disposal, at intermediate stages of the development process, a merged view that properly reflects the knowledge of each participant in the elicitation tasks (and over which we can reason, even in the presence of disagreement and incompleteness).
We show in this paper that there can be many merged models, each of them with a useful application. There is thus no unique operator that can be qualified as the best; rather, the suitable merging operator depends on the goal of the merging process. More concretely, we propose a set of four composition operators: ∐max, ∐min, ∐maj and ∐maj+inc. They are evaluated against a list of desirable algebraic properties, proposed by researchers on merging, that an ideal merging operator should satisfy. This analysis helps us compare the different operators, revealing the key features of each and identifying weaknesses that may require further research. The conclusion drawn from this analysis is that these properties are not sufficient to adequately characterize a merging operator. Therefore, new properties are provided to complete the previous list and better define the behavior of the different merging operators.
When a Constraint Satisfaction Problem (CSP) admits no solution, it can be useful to pinpoint which constraints actually contradict one another and make the problem infeasible. In this paper, a recent heuristic-based approach to computing infeasible minimal subparts of discrete CSPs, also called Minimally Unsatisfiable Cores (MUCs), is improved. The approach heuristically exploits the number of times each constraint has been falsified during previous failed search steps, and it appears to enhance the performance of the initial technique, which was the most efficient one until now.
Uncertainty and inconsistency pervade human knowledge. Possibilistic logic, where propositional logic formulas are associated with lower bounds of a necessity measure, handles uncertainty in the setting of possibility theory. Moreover, central in standard possibilistic logic is the notion of inconsistency level of a possibilistic logic base, closely related to the notion of consistency degree of two fuzzy sets introduced by L. A. Zadeh. Formulas whose weight is strictly above this inconsistency level constitute a sub-base free of any inconsistency. However, several extensions, allowing for a paraconsistent form of reasoning, or associating possibilistic logic formulas with information sources or subsets of agents, or extensions involving other possibility theory measures, provide other forms of inconsistency, while enlarging the representation capabilities of possibilistic logic. The paper offers a structured overview of the various forms of inconsistency that can be accommodated in possibilistic logic. This overview echoes the rich representation power of the possibility theory framework.
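The standard notion of inconsistency level can be illustrated with a small sketch (the propositional formulas and weights below are illustrative, and satisfiability is checked by brute force over the atoms):

```python
from itertools import product

def satisfiable(formulas, atoms):
    """Brute-force propositional satisfiability over boolean atoms."""
    return any(all(f(dict(zip(atoms, vals))) for f in formulas)
               for vals in product([False, True], repeat=len(atoms)))

def inconsistency_level(base, atoms):
    """Largest weight a such that the sub-base {f : weight >= a} is
    inconsistent; 0 if the whole base is consistent."""
    for a in sorted({w for _, w in base}, reverse=True):
        cut = [f for f, w in base if w >= a]  # the a-cut of the base
        if not satisfiable(cut, atoms):
            return a
    return 0.0

# (p, 0.8), (p -> q, 0.6), (not q, 0.3): jointly inconsistent, but only
# at the level of the weakest contributing formula.
base = [(lambda m: m['p'], 0.8),
        (lambda m: (not m['p']) or m['q'], 0.6),
        (lambda m: not m['q'], 0.3)]
print(inconsistency_level(base, ['p', 'q']))  # -> 0.3
```

Consistent with the description above, the formulas weighted strictly above the returned level (here, 0.8 and 0.6) form a sub-base free of any inconsistency.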
Multiple official languages within a country, along with languages shared with other countries, demand content consistency in both shared and unshared languages during information sharing. However, inconsistency arises when shared content conflicts or when content updates are not propagated across the languages used in different countries. To address this inconsistency, this research qualitatively studied how global brands share information among countries, as depicted by the content of their country-specific websites. First, inconsistency in shared content is illustrated among websites, highlighting the problem of information sharing among countries. Second, content propagation among countries is shown to vary in scale and coupling across specific content categories. The scales suggest that corporate and customer-support related information tends to be shared globally and locally, respectively, while product related information is suitable for both local and regional sharing. Higher occurrences of propagation when sharing corporate related information also indicate high coupling between websites, suggesting the suitability of a rigid consistency policy compared to other categories. The study also proposes a simple pattern-of-sharing approach to enable consistent information sharing.
Various formulations of smoothed particle hydrodynamics (SPH) have been presented to overcome inherent numerical difficulties, including instabilities and inconsistencies. Particle inconsistency in SPH and other meshfree methods can result in low approximation accuracy. In this study, centroidal Voronoi tessellation (CVT) topology optimization is used to rearrange particles so that the inconsistency due to irregular particle arrangement can be corrected. Using the CVT topology optimization method, SPH particles generated randomly inside a predetermined domain are moved to the centroids, i.e., the centers of mass, of the corresponding Voronoi cells based on Lloyd's algorithm; the volume associated with each particle is determined by its Voronoi cell. On the other hand, it has been shown that particle methods with stress-point integration are more stable than those using nodal integration, and conventional SPH approximations, which use only SPH particles, suffer from the so-called tensile instability. In this paper, a new approach using stress points is introduced to assist SPH approximations and stabilize the SPH method.
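The particle-regularization step based on Lloyd's algorithm can be sketched as follows: a minimal illustration on the unit square, with each Voronoi cell approximated by the domain samples nearest to its particle (the coupling to topology optimization is specific to the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def lloyd(particles, n_samples=20000, iters=30):
    samples = rng.random((n_samples, 2))  # dense sample of the unit square
    for _ in range(iters):
        # Assign every sample to its nearest particle (its Voronoi cell).
        d = np.linalg.norm(samples[:, None, :] - particles[None, :, :], axis=2)
        owner = d.argmin(axis=1)
        # Move each particle to the center of mass of its cell.
        particles = np.array([samples[owner == k].mean(axis=0)
                              if (owner == k).any() else particles[k]
                              for k in range(len(particles))])
    return particles

pts = lloyd(rng.random((16, 2)))
# After iteration, particle spacing is far more uniform than the random seed.
```

The discrete volume associated with each particle then follows from the (sampled) size of its Voronoi cell, as described above.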
The focus of this introduction to this special issue is to draw a picture as comprehensive as possible about various dimensions of inconsistency. In particular, we consider: (1) levels of knowledge at which inconsistency occurs; (2) categories and morphologies of inconsistency; (3) causes of inconsistency; (4) circumstances of inconsistency; (5) persistency of inconsistency; (6) consequences of inconsistency; (7) metrics for inconsistency; (8) theories for handling inconsistency; (9) dependencies among occurrences of inconsistency; and (10) problem domains where inconsistency has been studied. The take-home message is that inconsistency is ubiquitous and handling inconsistency is consequential in our endeavors. How to manage and reason in the presence of inconsistency presents a very important issue in semantic computing, cloud computing, social computing, and many other data-rich or knowledge-rich computing systems.
In this paper, we survey recent work on the use of abduction as a knowledge-based reasoning technique for analyzing software specifications. We present a general overview of logical abduction and describe two abductive reasoning techniques, developed in the logic and expert-system communities. We then focus on two applications of abduction in software engineering, namely the analysis and revision of specifications. Specifically, we discuss and illustrate with examples how these two abductive reasoning techniques can be deployed to reason about specifications, detect errors such as logical inconsistencies, provide diagnostic information about these errors, and identify possible changes to revise incorrect specifications. We conclude with a discussion of open research issues.
This paper explores the relationship between object-level intuitionistic fuzzy sets and predicate-based intuitionistic fuzzy sets. Mass assignment uses a process called semantic unification to evaluate the degree to which one set supports another. Intuitionistic fuzzy sets are mapped onto a mass assignment framework, and the mass assignment semantic unification operator is generalised to support both mass assignments and intuitionistic fuzzy sets. The transfer of inconsistent and contradictory evidence is also dealt with. As a consequence, by conjoining the mutual semantic unifications of two sets, a similarity measure emerges.