The purpose of this paper is to give a brief demonstration of how deontic logic can be used to support the design of robots capable of choice, equipped with artificial intelligence, by providing a framework that helps maintain ethically sound behavior. We begin with an overview of the potential applications of robots and the expansion of their use across society, as well as the ethical concerns that artificial intelligence and robots raise. We then give a quick introduction to deontic logic, highlighting its key concepts and explaining what it offers to the field of ethics. In the third part of the paper, we present our own approach to deontic logic, based on common-sense reasoning. The fourth part includes three short applications of our common-sense deontic logic approach in the field of artificial intelligence and robotics. These applications illustrate how deontic logic can guide robots in making morally sound decisions, using examples from the health sector. The final section presents our conclusions, the limitations of our work, and our plans for future research.
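For orientation, the following is a minimal sketch of the operators of standard deontic logic (SDL) together with a hypothetical health-sector rule; the notation is textbook SDL and the consent example is an invented illustration, not the authors' common-sense formulation:

\[
\begin{aligned}
O(\varphi) &: \text{it is obligatory that } \varphi \\
P(\varphi) &\equiv \lnot O(\lnot \varphi) : \text{it is permitted that } \varphi \\
F(\varphi) &\equiv O(\lnot \varphi) : \text{it is forbidden that } \varphi \\
\lnot \mathit{consent} &\rightarrow O(\lnot \mathit{administer}) : \text{a hypothetical care-robot rule: without patient consent, the robot is obligated not to administer treatment}
\end{aligned}
\]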
New Polio Vaccine Delayed.
How Much One Needs to Know about One's Food?
Singapore: A Thriving Biomedical Hub.
Balancing Science and Ethics.
Latest Developments in Gene Therapy.
The Status and Promise of Cancer Gene Therapy.
Beauty and Beast: The Promises and Concerns of Gene Therapy Vectors.
How Safe is Gene Therapy?
Gene Therapy of Cancer.
This article reports on recent developments in the regulation of human cloning and genetic testing and research in Singapore.
The Human Biomedical Research Act: Overview and International Comparisons.
Tissue Banking in Singapore – An Evolving Enterprise.
After Ebola, Social Justice as a Base for a Biobanking Governance Framework.
Community Engagement for Biobanking Research: Perspectives from Africa.
With advancements in nanotechnology, the interaction between nanotechnology, society, and the environment has increased. Nanotherapeutics and nanopharmaceuticals have enabled earlier and more precise diagnosis, reduced side effects, improved targeted therapies, and increased drug efficacy. Likewise, nanotechnology has helped improve the quality of the environment by addressing air pollution, water remediation, and waste management through nanoproducts such as nanofilters, nanophotocatalysts, nanoadsorbents, and nanosensors. Moreover, nanopesticides, nanofood, antibacterial nanopackaging, nanofertilizers, and many other products have helped the food and agriculture sector grow. There are innumerable nanotechnology-based products on the shelf, affecting almost every sector. However, nanotechnology, like any other technology, can become a source of social, environmental, legal, and ethical concerns if used unchecked and unregulated. Therefore, it is crucial to consider these challenges alongside the promises and opportunities nanotechnology has to offer. This review highlights the importance of nanotechnology by discussing its applications, especially in medicine, environmental sciences, and the food and agriculture sector. Closely studying these aspects will allow us to identify gaps, obstacles, and potential solutions for responsible nanotechnology development and deployment. Understanding these concerns and challenges is also critical for policymakers, researchers, industrialists, and society as a whole in order to promote ethical practices and informed decision-making. This review aims to contribute to the continuing discourse and raise ethical awareness in the field of nanotechnology, thereby minimizing harm while maximizing benefits.
Globally, and in parallel to their core research activities, researchers and scientists face increased pressure to successfully lead or participate in fundraising activities. The field has been experiencing fierce competition, with proposal success rates falling dramatically, while the complexity of funding instruments and the need to acquire a broad understanding of impacts and of research priorities in relation to wider national and transnational (e.g. EU-wide) policy increase discomfort for individual researchers and scientists. In this paper, we suggest the use of transdisciplinary AI tools to support the (semi-)automation of several steps of the application and proposal preparation process.
What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly, special classes of moral decision making will require attributes of consciousness, such as being able to empathize with the pain and suffering of others. But in this article we propose that consciousness also plays a functional role in making most, if not all, moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, helps illustrate how consciousness can be understood to serve a very broad role in the making of all decisions, including moral decisions.
In this paper, the notion of super-intelligence (or "AI++", as Chalmers has termed it) is considered in the context of machine consciousness (MC) research. Suppose AI++ were to come about, would real MC have then also arrived, "for free"? (I call this the "drop-out question".) Does the idea tempt you, as an MC investigator? What are the various positions that might be adopted on the issue of whether an AI++ would necessarily (or with strong likelihood) be a conscious AI++? Would a conscious super-intelligence also be a super-consciousness? (Indeed, what meaning might be attached to the notion of "super-consciousness"?) What ethical and social consequences might be drawn from the idea of conscious super-AIs or from that of artificial super-consciousness? And what implications does this issue have for technical progress on MC in a pre-AI++ world? These and other questions are considered.
Artificial intelligence, the "science and engineering of intelligent machines", has yet to create even a simple "Advice Taker" [McCarthy, 1959]. We have previously argued [Waser, 2011] that this is because researchers are focused on problem-solving or the rigorous analysis of intelligence (or arguments about consciousness) rather than the creation of a "self" that can "learn" to be intelligent. Therefore, following expert advice on the nature of self [Llinas, 2001; Hofstadter, 2007; Damasio, 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that effort, we decided to follow up on Richard Dawkins' [1976] speculation that "perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself" by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and "free will" that continue to pave the way towards the creation of safe/moral autopoiesis.
To deliver on the 2030 Agenda and its seventeen development goals while facing complex health challenges, we need research and education that extend across multiple scientific fields. This will enable researchers from a variety of disciplines to meet, identify research issues, apply for funding, and conduct interdisciplinary research. In addition, student involvement is key to achieving the 2030 Agenda’s global goals – and beyond. Challenges include climate change and child health, non-peaceful societies, gender inequalities, and health.
The Swedish Institute for Global Health Transformation (SIGHT) was founded in 2017 at the Royal Swedish Academy of Sciences with the support of the Bill & Melinda Gates Foundation. SIGHT’s mission is to promote an interdisciplinary approach to research and education in the field of global health. To deliver on the commitment to global health among researchers and students across scientific fields, universities, and colleges in Sweden, SIGHT has established SIGHT Fellows, a mentoring programme for academic researchers. In collaboration with universities, established research institutions, and other stakeholders, the SIGHT Student Network holds dynamic meetings for students from a variety of disciplines and universities to contribute to delivering the UN’s sustainability goals.