The purpose of this paper is to give a brief demonstration of how deontic logic can support the design of robots capable of choice, equipped with artificial intelligence, by providing a framework that helps maintain ethically sound behavior. We begin by presenting an overview of the potential applications of robots and the expansion of their use in various areas of society, as well as the ethical concerns that artificial intelligence and robots raise. Then, we give a quick introduction to deontic logic, highlighting its key concepts and explaining what it offers to the field of ethics. In the third part of the paper, we present our own approach to deontic logic, based on common-sense reasoning. The fourth part includes three short applications of our common-sense deontic logic approach in the field of artificial intelligence and robotics. These applications illustrate how deontic logic can be used to guide robots in making morally sound decisions, using examples from the health sector. The final section presents our conclusions, along with the limitations of our work and plans for future research.
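To make the key concepts concrete, the sketch below illustrates standard deontic logic (SDL), in which permission is the dual of obligation, P(p) ≡ ¬O(¬p), and prohibition is F(p) ≡ O(¬p). This is only a minimal illustration, not the authors' common-sense variant; the care-robot norms and proposition names (patient_fell, alert_staff, and so on) are hypothetical and introduced solely for the example.

```python
# Illustrative sketch of standard deontic logic (SDL) modalities applied to a
# hypothetical care-robot scenario. The norm base and proposition names are
# invented for illustration; they are not taken from the paper.

OBLIGATORY, PERMITTED, FORBIDDEN = "O", "P", "F"

# Hypothetical conditional norms of the form (condition, modality, action),
# e.g. "if the patient has fallen, the robot is obliged to alert the staff".
NORMS = [
    ({"patient_fell"}, OBLIGATORY, "alert_staff"),
    ({"patient_fell"}, FORBIDDEN, "leave_room"),
    (set(), PERMITTED, "offer_water"),
]

def active_norms(situation):
    """Return the (modality, action) pairs whose conditions hold in the situation."""
    return [(mod, act) for cond, mod, act in NORMS if cond <= situation]

def evaluate(situation, action):
    """Classify an action as obligatory, forbidden, or permitted by default.

    In SDL, P(a) holds iff O(not a) does not. Treating anything not explicitly
    obliged or forbidden as permitted is a simplifying assumption of this
    sketch, not a feature of the paper's framework.
    """
    for mod, act in active_norms(situation):
        if act == action and mod in (OBLIGATORY, FORBIDDEN):
            return mod
    return PERMITTED

if __name__ == "__main__":
    situation = {"patient_fell"}
    print(evaluate(situation, "alert_staff"))  # O -> the robot must alert staff
    print(evaluate(situation, "leave_room"))   # F -> the robot must not leave
    print(evaluate(situation, "offer_water"))  # P -> permitted by default
```

A common-sense treatment such as the one the paper proposes would presumably go beyond this default-permission rule, for instance by handling conflicting obligations; the sketch only shows the basic modalities.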
Globally, and in parallel to their core research activities, researchers and scientists face increased pressure to successfully lead or participate in fundraising activities. The field has been experiencing fierce competition, with proposal success rates falling dramatically, while the complexity of the funding instruments and the need to acquire a broad understanding of impact-related issues and of research priorities in connection with wider national and transnational (e.g. EU-wide) policy aspects increase discomfort levels for individual researchers and scientists. In this paper, we suggest the use of transdisciplinary AI tools to support (semi-)automation of several steps of the application and proposal preparation processes.
What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly, special classes of moral decision making will require attributes of consciousness such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most, if not all, moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, will help illustrate how consciousness can be understood to serve a very broad role in the making of all decisions, including moral decisions.
In this paper, the notion of super-intelligence (or "AI++", as Chalmers has termed it) is considered in the context of machine consciousness (MC) research. Suppose AI++ were to come about: would real MC then also have arrived, "for free"? (I call this the "drop-out question".) Does the idea tempt you, as an MC investigator? What are the various positions that might be adopted on the issue of whether an AI++ would necessarily (or with strong likelihood) be a conscious AI++? Would a conscious super-intelligence also be a super-consciousness? (Indeed, what meaning might be attached to the notion of "super-consciousness"?) What ethical and social consequences might be drawn from the idea of conscious super-AIs, or from that of artificial super-consciousness? And what implications does this issue have for technical progress on MC in a pre-AI++ world? These and other questions are considered.
Artificial intelligence, the "science and engineering of intelligent machines", has yet to create even a simple "Advice Taker" [McCarthy, 1959]. We have previously argued [Waser, 2011] that this is because researchers are focused on problem-solving or the rigorous analysis of intelligence (or arguments about consciousness) rather than the creation of a "self" that can "learn" to be intelligent. Therefore, following expert advice on the nature of self [Llinas, 2001; Hofstadter, 2007; Damasio, 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that, we decided to follow up on Richard Dawkins' [1976] speculation that "perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself" by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and "free will" that continue to pave the way towards the creation of safe/moral autopoiesis.