We are fascinated by the idea of giving life to the inanimate. The fields of Artificial Life and Artificial Intelligence (AI) pursue this desire through a scientific approach. The first steps in this approach hark back to Turing and his suggestion of an imitation game as an alternative answer to the question "can machines think?" [1]. To test his hypothesis, Turing formulated the Turing test [1] to detect human behavior in computers. But how do humans pass such a test? What would you say if you learned that they do not pass it well? What would it mean for our understanding of human behavior? What would it mean for the design of tests of the success of artificial life? We report below an experiment in which men consistently failed the Turing test.
GPT-4 (Generative Pre-trained Transformer 4) is often heralded as a leading commercial AI offering, sparking debates over its potential as a stepping stone toward Artificial General Intelligence. But does it possess consciousness? This paper investigates that question using the nine qualitative measurements of the Building Blocks theory. GPT-4's design, architecture, and implementation are compared against each building block of consciousness to determine whether it has achieved the requisite milestones to be classified as conscious or, if not, how close to consciousness it is. Our assessment is that, while GPT-4 in its native configuration is not currently conscious, current technological research and development are sufficient to modify GPT-4 so that it possesses all the building blocks of consciousness. Consequently, we argue that the emergence of a conscious AI model is plausible in the near term. The paper concludes with a comprehensive discussion of the ethical implications and societal ramifications of engineering conscious AI entities.