If physicalism is true, everything is physical. In other words, everything supervenes on, or is necessitated by, the physical. Accordingly, if there are logical/mathematical facts, they must be necessitated by the physical facts of the world. The aim of this paper is to clarify what logical/mathematical facts actually are and how these facts can be accommodated in a purely physical world.
The issue of integration in neural networks is intimately connected with that of consciousness. In this paper, integration as an effective level of physical organization is contrasted with a methodological integrative approach. Understanding how consciousness arises out of neural processes requires a model of integration in purely causal, physical terms. Based on a set of feasible criteria (physical grounding, causal efficacy, non-circularity, and scaling), a causal account of physical integration for consciousness, centered on joint causation, is outlined.
Previous work [Chrisley & Sloman, 2016, 2017] has argued that a capacity for certain kinds of meta-knowledge is central to modeling consciousness, especially the recalcitrant aspects of qualia, in computational architectures. After a quick review of that work, this paper presents a novel objection to Frank Jackson’s Knowledge Argument (KA) against physicalism, an objection in which such meta-knowledge also plays a central role. It is first shown that the KA’s supposition of a person, Mary, who is physically omniscient, and yet who has not experienced seeing red, is logically inconsistent, due to the existence of epistemic blindspots for Mary. It is then shown that even if one makes the KA consistent by supposing a more limited physical omniscience for Mary, this revised argument is invalid. This demonstration is achieved via the construction of a physical fact (a recursive conditional epistemic blindspot) that Mary cannot know before she experiences seeing red for the first time, but which she can know afterward. After considering and refuting some counter-arguments, the paper closes with a discussion of the implications of this argument for machine consciousness, and vice versa.
This paper explores a philosophical problem at the intersection of neuroscience and artificial intelligence (AI), and the potential impact of these novel AI “mind-reading” technologies on various forms of mind–body dualism, including substance, interaction, property, predicate, and emergent dualisms. It critically examines how AI’s ability to interpret and predict mental states from neural patterns challenges traditional dualistic theories, which have historically posited distinct relationships between the mind and body. The paper analyzes each dualistic theory in the context of AI advancements. Substance and interaction dualisms are scrutinized for their claims of mind–body independence and causal interaction, respectively, in light of AI’s capabilities to correlate mental and physical states. Property dualism’s assertion of unique mental properties emerging from physical processes is tested against AI’s potential to map mental phenomena to brain activity. Predicate dualism’s linguistic and conceptual distinction between mental and physical realms is challenged by AI’s ability to bridge these domains. Similarly, emergent dualism, which views mental states as novel phenomena, confronts the possibility of their reduction to physical brain processes. Despite these challenges, the paper argues for the adaptability of dualistic theories to integrate AI insights, suggesting a re-evaluation rather than a negation of dualism. It highlights the enduring relevance of philosophical inquiry into the nature of consciousness and mind–body relationships in the age of AI, suggesting that such technological advancements invigorate rather than terminate the philosophical debate.