Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules, as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem also generates spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, we attempt here to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunctive normal form and in a sparse generalized algebraic normal form, respectively, of the variables from the observation data, and we evaluate their performance numerically.
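As a rough illustration of this kind of greedy synthesis (a sketch under simplifying assumptions, not the authors' algorithms; all names such as greedy_dnf are hypothetical), the following Python fragment covers the positive observations of a Boolean response with short conjunctive terms that match no negative observation, yielding a sparse disjunctive normal form:

```python
from itertools import combinations, product

def term_matches(term, row):
    # A term is a conjunction of literals, stored as {variable_index: required_value}.
    return all(row[i] == v for i, v in term.items())

def greedy_dnf(X, y, max_literals=2):
    """Greedily cover the positive rows of (X, y) with conjunctive terms that
    match no negative row; the collected terms form a sparse DNF for y."""
    pos = [r for r, t in zip(X, y) if t == 1]
    neg = [r for r, t in zip(X, y) if t == 0]
    n = len(X[0])
    # Candidate terms: all conjunctions of up to max_literals literals...
    candidates = [dict(zip(idxs, vals))
                  for k in range(1, max_literals + 1)
                  for idxs in combinations(range(n), k)
                  for vals in product([0, 1], repeat=k)]
    # ...restricted to those consistent with every negative observation.
    candidates = [t for t in candidates
                  if not any(term_matches(t, r) for r in neg)]
    dnf, uncovered = [], list(pos)
    while uncovered and candidates:
        best = max(candidates, key=lambda t: sum(term_matches(t, r) for r in uncovered))
        if not any(term_matches(best, r) for r in uncovered):
            break  # no remaining term covers any uncovered positive row
        dnf.append(best)
        uncovered = [r for r in uncovered if not term_matches(best, r)]
    return dnf

# Toy usage: the response below is explained by the single-literal term "x1 = 1".
X = [(0, 1, 1), (1, 1, 0), (0, 0, 1), (1, 0, 0)]
y = [1, 1, 0, 0]
print(greedy_dnf(X, y))  # e.g. [{1: 1}]
```

On purely random Boolean data, any terms such a procedure returns can only be spurious, which is the sense in which random data serves as a null model here.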
In this paper we investigate the logical decidability and undecidability properties of relativity theories. If we include the whole theory of the reals in our theory, then relativity theory can still be decidable. However, if we actually assume the structure of the quantities in our models to be the reals, or at least to be Archimedean, then we obtain possible predictions in the language of relativity theory which are independent of ZF set theory.
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, how they are oriented, or how much color variation there is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 × 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
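To see why uniqueness fails even in tiny discrete scenes, the following toy Python enumeration (an illustration only, not the paper's first-order formulation; the silhouette-style "images" are a deliberate simplification) counts every binary occupancy assignment of a small voxel grid that is consistent with two 1-bit projections:

```python
from itertools import product

def consistent_scenes(image_rows, image_cols, n=3):
    """Enumerate all n x n binary voxel scenes whose row and column
    projections (silhouettes) match the two given 1-bit 'images'."""
    solutions = []
    for bits in product([0, 1], repeat=n * n):
        scene = [bits[i * n:(i + 1) * n] for i in range(n)]
        rows = [1 if any(r) else 0 for r in scene]                                 # view along one axis
        cols = [1 if any(scene[i][j] for i in range(n)) else 0 for j in range(n)]  # orthogonal view
        if rows == image_rows and cols == image_cols:
            solutions.append(scene)
    return solutions

# Two silhouettes already admit several scenes, illustrating non-uniqueness.
print(len(consistent_scenes([1, 1, 0], [1, 0, 1])), "consistent scenes")  # prints 7
```

Because the space of assignments is finite and discrete, the number of consistent scenes can be counted exactly, which mirrors the paper's point that the formulation makes the solution-space size easy to calculate.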
Quantum-dot Cellular Automata (QCA) presents a new model at the nano-scale as a possible substitute for conventional Complementary Metal–Oxide–Semiconductor (CMOS) technology. An Arithmetic Logic Unit (ALU), in turn, is a digital electronic circuit which performs arithmetic and bitwise logical operations on integer binary numbers, so a QCA-based ALU is an important building block for developing a full-capability processor. Although QCA has become very important, there has been no comprehensive and systematic work studying and analyzing its techniques in the field of ALU design. This paper provides a comprehensive, systematic and detailed survey of the state-of-the-art techniques and mechanisms in the field of QCA-based ALU design. The reviewed work falls into three categories: ALU, logic unit (LU) and arithmetic unit (AU), and the important studies in each category are presented. In addition, this paper reviews the major developments in these three categories, outlines new challenges, and identifies open issues and guidelines for future research. A Systematic Literature Review (SLR) on QCA-based ALU, LU and AU is also discussed: we identified 1,960 papers, which were reduced to 26 primary studies through our paper selection process. According to the results obtained for 2001 to 2015, the number of published articles is highest in 2014 and lowest in 2005 and 2009. This survey also discusses the considered mechanisms in terms of ALU, LU and AU attributes, as well as directions for future research.
The memristor is a novel circuit element which is capable of maintaining an activity-dependent nonvolatile resistance and is therefore a candidate for use in next-generation storage and logic circuits. In this article, we present a model of the PEO-PANI memristor for use in the SPICE circuit simulation program which is especially suited to analog logic applications. Two variants are presented herein; each is accompanied by a short description that explains the design decisions made and notes the preferred simulation settings. It is shown that the model accurately replicates corresponding experimental results found in the literature. Simple simulations are used to show the suitability of each variant to specific experimental usage. Appendices contain verbatim implementations of the SPICE models.
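For readers who want a feel for how a state-dependent resistance behaves under simulation, the sketch below numerically integrates the generic linear-drift memristor model; this is explicitly not the PEO-PANI SPICE model described in the article, and all parameter values are illustrative assumptions:

```python
import math

def simulate_memristor(v_of_t, dt=1e-4, steps=20000,
                       r_on=100.0, r_off=16e3, d=10e-9, mu_v=1e-14):
    """Integrate the generic linear-drift memristor model:
    R(w) = R_on*(w/D) + R_off*(1 - w/D),  dw/dt = mu_v*R_on/D * i(t).
    v_of_t is a callable giving the applied voltage at time t."""
    w = 0.5 * d                                              # internal state (doped-region width)
    v_hist, i_hist = [], []
    for k in range(steps):
        v = v_of_t(k * dt)
        r = r_on * (w / d) + r_off * (1.0 - w / d)           # current memristance
        i = v / r
        w = min(max(w + mu_v * r_on / d * i * dt, 0.0), d)   # state drift, clamped to [0, D]
        v_hist.append(v)
        i_hist.append(i)
    return v_hist, i_hist

# A sinusoidal drive traces the characteristic pinched hysteresis loop in the (v, i) plane.
v, i = simulate_memristor(lambda t: math.sin(2 * math.pi * 5 * t))
```

A behavioural state equation of this kind is typically what a SPICE memristor subcircuit encapsulates, with the state variable integrated by the simulator rather than by an explicit loop.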
Software composition for timely and affordable software development and evolution is one of the oldest pursuits of software engineering. Among current software composition techniques, Component-Based Software Development (CBSD) and Aspect-Oriented Software Development (AOSD) have attracted academic and industrial attention. Black-box composition, as used in CBSD, provides simple and safe modularization through its strong information hiding, which is, however, the main obstacle to the later evolution of a black-box composite. This implies that an application developed through black-box composition cannot take advantage of the Aspect-Oriented Programming (AOP) used in AOSD. Conversely, AOP enhances maintainability and comprehensibility by modularizing concerns that crosscut multiple components, but it lacks support for the hierarchical and external composition of aspects themselves and compromises important software engineering principles such as encapsulation, which is almost perfectly supported in black-box composition. The role and role model have been recognized to have many similarities with CBSD and AOP, but also significant differences from those composition techniques. Although each composition paradigm has its own advantages and disadvantages, there is no substantial support for realizing the synergy of these composition paradigms: black-box composition, AOP, and the role model. In this paper, a new composition technique based on a representational abstraction of the relationship between component instances is introduced. The model supports the simple, elegant, and dynamic composition of components with its declarative form and provides the hooks through which an aspect can evolve and an aspect developed in parallel can be merged at the instance level.
In this paper, we present a logical framework to help users assess a software system in terms of the required survivability features. Survivability evaluation is essential when linking foreign software components to an existing system or obtaining software systems from external sources: it is important to make sure that any foreign components or systems will not compromise the current system's survivability properties. Given the increasingly large scope and complexity of modern software systems, there is a need for an evaluation framework that accommodates uncertain, vague, or even ill-known knowledge for a robust evaluation based on multi-dimensional criteria. Our framework incorporates user-defined constraints on survivability requirements. Necessity-based possibilistic uncertainty and user survivability requirement constraints are effectively linked to logic reasoning. A proof-of-concept system has been developed to validate the proposed approach. To the best of our knowledge, our work is the first attempt to incorporate vague, imprecise information into software system survivability evaluation.
Mode analysis in logic programs has been used mainly for code optimization. The mode analysis in this paper instead supports the program construction process and is applied to partially complete logic programs. The program construction process is based on schema refinements and refinements by data type operations, the latter coming at the end of the refinement process. This mode analysis supports the proper application of refinements by data type operations. In addition, it checks that the declared modes, as defined by the Data Type (DT) operations, are consistent with the inferred runtime modes. We have implemented an algorithm for mode analysis based on minimal function graphs. An overview of our logic program development method and the denotational semantics of the analysis framework are presented in this paper.
There have been a large number of systems that integrate logic and objects (frames or classes) for knowledge representation and reasoning. Most of those systems give pre-eminence to logic, and their objects lack the structure of frames. These choices imply a number of disadvantages, such as the inability to represent exceptions and perform default reasoning, and a reduction in the naturalness of representation. In this paper, aspects of knowledge representation and reasoning in SILO, a system integrating logic in objects, are presented. SILO gives pre-eminence to objects. A SILO object comprises elements from both frames and classes. A kind of many-sorted logic is used to express object-internal knowledge. Message passing, alongside inheritance, plays a significant role in the reasoning process. Control knowledge, concerning both deduction and inheritance, is separately and explicitly represented via definitions of certain functions, called meta-functions.
The classical notions of continuity and mechanical causality are abandoned in order to reformulate quantum theory starting from two principles: (I) the intrinsic randomness of quantum processes at the microphysical level, and (II) the projective representations of the symmetries of the system. The second principle determines the geometry and thereby a new logic for describing the history of events (Feynman's paths) that modifies the rules of classical probabilistic calculus. The notion of a classical trajectory is replaced by a history of spontaneous, random and discontinuous events. The theory is thus reduced to determining the probability distribution for such histories in accordance with the symmetries of the system. The representation of the logic in terms of amplitudes leads to the Feynman rules and, alternatively, its representation in terms of projectors yields the Schwinger trace formula.
We first describe a metric for uncertain probabilities, called opinion, and subsequently a set of logical operators that can be used for logical reasoning with uncertain propositions. This framework, which is called subjective logic, uses elements from the Dempster-Shafer belief theory, and we show that it is compatible with binary logic and probability calculus.
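A minimal sketch of the opinion representation (assuming the usual belief/disbelief/uncertainty decomposition with a base rate; the class and method names are hypothetical) shows how an opinion collapses to an ordinary probability when its uncertainty is zero:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A binomial opinion: belief + disbelief + uncertainty = 1, with base rate a."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def projected_probability(self):
        # Collapse the opinion to a point probability: P = b + a*u.
        return self.belief + self.base_rate * self.uncertainty

# A dogmatic opinion (u = 0) behaves like an ordinary probability, which is the
# sense in which the framework is compatible with probability calculus.
print(Opinion(0.7, 0.3, 0.0).projected_probability())  # 0.7
print(Opinion(0.2, 0.1, 0.7).projected_probability())  # 0.55
```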
This paper focuses on the decomposition problem of fuzzy relations using the concepts of multiuniverse fuzzy propositional logic. Given two fuzzy propositions in different universes, it is always possible to construct a fuzzy relation in the common universe through a prescribed combination. However, the converse is not so obvious, if possible at all. In other words, given a fuzzy relation, how would we know if it really represents a certain relationship between some fuzzy propositions? It is important to recognize whether the given fuzzy relation is a meaningful representation of information according to certain criteria applicable to some fuzzy propositions that constitute the fuzzy relation itself. Two basic structures of decomposition are investigated. Necessary and sufficient conditions for decomposition of multiuniverse fuzzy truth functions in terms of one-universe truth functions are presented. An algorithm for decomposition is proposed.
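As a small illustration of one decomposition structure (a sketch for min-composition only, with hypothetical function names; the paper's criteria are more general), the fragment below rebuilds a relation from its projections and accepts it as decomposable exactly when the recombination reproduces the original values:

```python
def compose_min(A, B):
    # Fuzzy relation on X x Y built from fuzzy sets A, B via R(x, y) = min(A(x), B(y)).
    return {(x, y): min(a, b) for x, a in A.items() for y, b in B.items()}

def projections(R):
    # Sup-projections of a fuzzy relation onto its two universes.
    A, B = {}, {}
    for (x, y), v in R.items():
        A[x] = max(A.get(x, 0.0), v)
        B[y] = max(B.get(y, 0.0), v)
    return A, B

def is_min_decomposable(R, tol=1e-9):
    # R decomposes as a min of two fuzzy sets iff recombining its own projections gives R back.
    A, B = projections(R)
    return all(abs(min(A[x], B[y]) - v) <= tol for (x, y), v in R.items())

R = compose_min({"x1": 0.3, "x2": 0.9}, {"y1": 0.5, "y2": 1.0})
print(is_min_decomposable(R))                      # True
print(is_min_decomposable({("x1", "y1"): 0.3,      # a relation that is not
                           ("x1", "y2"): 0.9,      # a min-combination of
                           ("x2", "y1"): 0.8,      # two fuzzy propositions
                           ("x2", "y2"): 0.1}))    # False
```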
Many papers have addressed the task of proposing a set of convenient axioms that a good rule interestingness measure should fulfil. We provide a new study of the principles proposed until now by means of the logic model proposed by Hájek et al. [14]. In this model, association rules can be viewed as general relations of two itemsets quantified by means of a convenient quantifier [28]. Moreover, we propose and justify the addition of two new principles to the three proposed by Piatetsky-Shapiro [27]. We also use the logic approach for studying the relation between the different classes of quantifiers and these axioms. We define new classes of quantifiers according to the notions of strong and very strong rules, and we present a quantifier based on the certainty factor measure [3, 17], studying its most salient features.
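For concreteness, here is a hedged sketch of the certainty factor as a rule interestingness measure (a common formulation; the function and variable names are hypothetical, and this is not tied to the quantifier-based treatment in the paper):

```python
def certainty_factor(transactions, antecedent, consequent):
    """Confidence and certainty factor of the rule antecedent -> consequent over a
    list of transactions (sets of items). The CF normalises how much the rule
    raises (or lowers) the consequent's baseline frequency."""
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)
    n_b = sum(1 for t in transactions if consequent <= t)
    n_ab = sum(1 for t in transactions if (antecedent | consequent) <= t)
    conf = n_ab / n_a if n_a else 0.0
    p_b = n_b / n
    if conf > p_b:
        cf = (conf - p_b) / (1.0 - p_b) if p_b < 1.0 else 0.0
    elif conf < p_b:
        cf = (conf - p_b) / p_b if p_b > 0.0 else 0.0
    else:
        cf = 0.0
    return conf, cf

# Tiny example: the rule {bread} -> {butter} over four market baskets.
baskets = [{"bread", "butter"}, {"bread", "butter"}, {"bread"}, {"milk"}]
print(certainty_factor(baskets, {"bread"}, {"butter"}))  # (0.666..., 0.333...)
```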
By linking together two directional entropy disequilibria, NOR functionality can be obtained. The entropy NOR gate presented here is constructed from discrete observations and is therefore very small, emerging at the earliest stages of complexity. The gate is based on the axiom that an observer increases in entropy as it receives information from what it is observing.
The sharing economy could be said to disrupt who does what in exchanges. This paper categorises the roles played by users, providers, and platforms in different interpretations of the sharing economy. It asks: What different roles do the users, providers, and platforms play in the sharing economy? And: How do the roles differ in various interpretations of the sharing economy? The paper classifies the different interpretations based on their market/non-market logic and concludes that the roles of users and providers are more extensive in non-market logic interpretations, while market logic suggests that the platform takes on more roles. The user is, despite the peer-to-peer connotation of the sharing economy, often quite passive. Contributions are made to the emerging literature on the sharing economy by highlighting its many different interpretations, which the roles help to systematise. The paper furthermore contributes to the literature on roles by highlighting them as transitory and as expanding beyond expectations related to digitalisation. Practically, the systematisation of roles helps practitioners to navigate among various business model designs and to make informed decisions when launching platforms in the sharing economy. Additionally, the focus on roles raises important questions on risk sharing, resource provision, and the creation of value for each participating party.
Existential Ω-entailment is a paraconsistent entailment relation designed to show the consequences of data which is inconsistent with a set of integrity constraints Ω. In this paper, we prove semantic properties of existential Ω-entailment and give an algorithm for computing it.
Representing knowledge in a rule-based system takes place by means of "if…then…" statements. These are called production rules because new information is produced when a rule fires. The logic attached to rule-based systems is taken to be classical inasmuch as "if…then…" is encoded by material implication. However, it appears that the notion of triggering "if…then…" amounts to different logical definitions. The paper investigates the matter, with an emphasis upon consistency, because reading "if…then…" statements as rules calls for a notion of rule consistency that does not conform to consistency in the classical sense. Natural deduction is used to explore entailment and equivalence among various formulations and properties.
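A minimal sketch of the "rule fires, new information is produced" reading (naive forward chaining over propositional facts; the names are hypothetical and the example is illustrative, not from the paper):

```python
def forward_chain(facts, rules):
    """Repeatedly fire every (premises, conclusion) rule whose premises are all
    established facts, until no new fact is produced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule "fires" and produces new information
                changed = True
    return facts

rules = [({"bird", "healthy"}, "can_fly"),
         ({"can_fly"}, "can_reach_roof")]
print(forward_chain({"bird", "healthy"}, rules))
# {'bird', 'healthy', 'can_fly', 'can_reach_roof'}
```

Note that this triggering behaviour only applies modus ponens in the forward direction and never contraposes, which is one concrete way in which reading "if…then…" as a rule departs from material implication.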
Intelligence can be understood as a form of rationality, in the sense that an intelligent system does its best when its knowledge and resources are insufficient with respect to the problems to be solved. The traditional models of rationality typically assume some form of sufficiency of knowledge and resources, and so cannot solve many theoretical and practical problems in Artificial Intelligence (AI). New models based on the Assumption of Insufficient Knowledge and Resources (AIKR) cannot be obtained by minor revisions or extensions of the traditional models; they have to be established fully according to the restrictions and freedoms provided by AIKR. The practice of NARS, an AI project, shows that such new models are feasible and promising in providing a new theoretical foundation for the study of rationality, intelligence, consciousness, and mind.
As the complexity of software systems is ever increasing, so is the need for practical tools for formal verification. Among these are automatic theorem provers, capable of solving various reasoning problems automatically, and proof assistants, capable of deriving more complex results when guided by a mathematician/programmer. In this paper we consider using the latter to build the former. In the proof assistant Isabelle/HOL we combine functional programming and logical program verification to build a theorem prover for propositional logic. We also consider how such a prover can be used to solve a reasoning task without much mental labor. The development is extended with a formalized proof system for writing machine-checked sequent calculus proofs. We consider how this can be used to teach computer science students about logic, automated reasoning and proof assistants.
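As a flavour of the kind of decision procedure involved (a toy sketch in Python rather than Isabelle/HOL, and not the verified prover from the paper; the formula encoding is an assumption made for the example), the fragment below decides propositional validity by exhausting truth assignments:

```python
from itertools import product

def atoms(f):
    # Formulas are nested tuples: ('atom', p) | ('not', f) | ('and'|'or'|'imp', f, g).
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def eval_formula(f, env):
    op = f[0]
    if op == 'atom':
        return env[f[1]]
    if op == 'not':
        return not eval_formula(f[1], env)
    a, b = eval_formula(f[1], env), eval_formula(f[2], env)
    return {'and': a and b, 'or': a or b, 'imp': (not a) or b}[op]

def is_tautology(f):
    # Decide validity by checking every truth assignment to the atoms of f.
    vs = sorted(atoms(f))
    return all(eval_formula(f, dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))

# Peirce's law ((p -> q) -> p) -> p is classically valid.
p, q = ('atom', 'p'), ('atom', 'q')
print(is_tautology(('imp', ('imp', ('imp', p, q), p), p)))  # True
```

Proving in a proof assistant that such a checker is sound and complete is the general spirit of the exercise described above, there carried out with a sequent-calculus presentation and machine-checked proofs.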
Guidance ability is one of the typical features of the novel contradiction separation based automated deduction, which extends the canonical resolution rule to a dynamic, flexible multi-clause deduction framework. In order to take better advantage of this guidance ability during the deduction process, we propose in this paper a clause reusing framework for contradiction separation based automated deduction. This framework is able to generate more decision literals, on which the guidance ability of contradiction separation based automated deduction relies. Technical analysis, along with some examples, is provided to illustrate the feasibility of the proposed framework.