We report on progress in understanding how to build machines which adaptively acquire the language for their task. The generic mechanism in our research has been an information-theoretic connectionist network embedded in a feedback control system. In this paper, we investigate the capability of such a network to learn associations between messages and meaningful responses to them as a task increases in size and complexity. Specifically, we consider how one might reflect task structure in a network architecture in order to provide improved generalization capability in language acquisition. We propose a method for constructing networks from component subnetworks, namely a product network, which provides improved generalization by factoring the associations between words and actions through an intermediate layer of semantic primitives. A two-dimensional product network was evaluated in a 1000-action data retrieval system, the object of which is to answer questions about 20 attributes of the 50 states of the USA. The system was tested by 13 subjects over a two-week period, during which over 1000 natural language dialogues were recorded. The experiment was conducted using typed input with unconstrained vocabulary and syntax. During the course of performing its task, the system acquired over 500 words and retained 92% of what it learned. We provide a description of the system and details on the experimental results.
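The abstract does not spell out the factoring mechanism, so the sketch below is only an illustration of the product-network idea under stated assumptions: two single-layer subnetworks map a bag-of-words message to the semantic primitives (attribute, state), and their outer product scores the 20 x 50 = 1000 actions. The class name, vocabulary size, and the simple associative update are hypothetical stand-ins for the paper's information-theoretic weights.

```python
import numpy as np

# Illustrative sizes taken from the abstract: 20 attributes x 50 states = 1000 actions.
N_ATTR, N_STATE, VOCAB = 20, 50, 600   # vocabulary size is an assumption

class ProductNetwork:
    """Two component subnetworks map words to semantic primitives
    (attribute, state); their outer product scores the 1000 actions."""

    def __init__(self, vocab_size=VOCAB):
        # Word-to-primitive association weights (hypothetical initialization).
        self.W_attr = np.zeros((N_ATTR, vocab_size))
        self.W_state = np.zeros((N_STATE, vocab_size))

    def respond(self, word_ids):
        # Bag-of-words vector for the typed message.
        x = np.zeros(self.W_attr.shape[1])
        x[word_ids] = 1.0
        attr_act = self.W_attr @ x                # evidence for each attribute
        state_act = self.W_state @ x              # evidence for each state
        scores = np.outer(attr_act, state_act)    # 20 x 50 grid of actions
        return np.unravel_index(np.argmax(scores), scores.shape)

    def learn(self, word_ids, attr, state, lr=0.1):
        # Simple associative update standing in for the paper's
        # information-theoretic weight estimation.
        for w in word_ids:
            self.W_attr[attr, w] += lr
            self.W_state[state, w] += lr
```

Because the two subnetworks are learned over 20 + 50 primitives rather than 1000 actions, a word seen with a new attribute-state combination can still contribute evidence along each axis, which is the generalization benefit the abstract attributes to factoring.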
Historically, cognition was understood as the result of processes occurring solely in the brain. Recently, however, cognitive scientists and philosophers studying "embodied" or "situated" cognition have begun emphasizing the role of the body and the environment in which brains are situated, i.e., they view the brain as an "open system". However, these theorists frequently rely on dynamical systems, which are traditionally viewed as closed systems. We address this tension by extending the framework of dynamical systems theory. We show how the structures that appear in the state space of an embodied agent differ from those that appear in closed systems, and how these structures can be used to model representational processes in embodied agents. We focus on neural networks as models of embodied cognition.
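To make the open-system/closed-system contrast concrete, here is a minimal sketch, assuming the agent is a small continuous-time recurrent network coupled to a one-dimensional environment variable through a sensorimotor loop; the parameter values and toy environment dynamics are illustrative, not drawn from the paper.

```python
import numpy as np

def agent_step(y, env, W, tau=1.0, dt=0.01):
    """One Euler step of a continuous-time recurrent network (the agent).
    The env term makes the agent an open system: its flow depends on
    a signal it does not fully control."""
    sensory_input = np.array([env, 0.0, 0.0])        # environment drives one neuron
    dy = (-y + W @ np.tanh(y) + sensory_input) / tau
    return y + dt * dy

def env_step(env, y, dt=0.01):
    # Toy environment: relaxes toward the agent's motor output,
    # closing the sensorimotor loop.
    motor = np.tanh(y[-1])
    return env + dt * (motor - env)

rng = np.random.default_rng(0)
W = rng.normal(scale=1.5, size=(3, 3))   # random recurrent weights
y, env = np.zeros(3), 1.0
trajectory = []
for _ in range(5000):
    y = agent_step(y, env, W)
    env = env_step(env, y)
    trajectory.append(y.copy())
# The trajectory traces structures of the *coupled* agent-environment
# system, not attractors of the network taken in isolation.
```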
Biological systems often offer solutions to difficult problems that are not only original but also efficient. Connectionist models have been inspired by neural systems and successfully applied to formulating algorithms for complex problems such as the travelling salesman problem. In this paper we extend the connectionist metaphor to include an ethological account of how problems similar to the travelling salesman problem are solved by real living systems. A model is presented in which a population of neural networks with simple sensory-motor systems evolves genetically in simulated environments that represent the problem instances to be solved. Preliminary results are discussed, showing how the ethological metaphor makes it possible to overcome some shortcomings of other connectionist models, such as their time and space complexity.
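The abstract describes the architecture only in outline; the following is a hedged sketch of one such scheme: a population of tiny feedforward controllers whose genomes are weight vectors, each evaluated by letting it build a tour over a TSP instance, with tour length as negative fitness. The sensory encoding, network sizes, and mutation-only reproduction are assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((12, 2))                 # one simulated "environment" (TSP instance)
N_HID, GENES = 6, 4 * 6 + 6 * 1              # tiny sensor -> hidden -> score network

def tour_length(genome):
    """Let the agent build a tour by repeatedly scoring unvisited cities
    with its network; shorter tours mean higher fitness."""
    W1 = genome[:4 * N_HID].reshape(N_HID, 4)
    W2 = genome[4 * N_HID:].reshape(1, N_HID)
    pos, unvisited, length = cities[0], list(range(1, len(cities))), 0.0
    while unvisited:
        # Sensory vector per candidate city: relative offset, distance, bias.
        feats = np.array([[*(cities[c] - pos), np.linalg.norm(cities[c] - pos), 1.0]
                          for c in unvisited])
        scores = (W2 @ np.tanh(W1 @ feats.T)).ravel()
        nxt = unvisited.pop(int(np.argmax(scores)))
        length += np.linalg.norm(cities[nxt] - pos)
        pos = cities[nxt]
    return length

pop = rng.normal(size=(30, GENES))           # population of network genomes
for gen in range(50):
    fitness = -np.array([tour_length(g) for g in pop])
    parents = pop[np.argsort(fitness)[-10:]]                     # truncation selection
    pop = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, GENES))  # mutation
```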
This paper introduces a connectionist Agent-Based Model (cABM) that incorporates detailed, micro-level understanding of social influence processes derived from laboratory studies and that aims to contextualize these processes in such a way that it becomes possible to model multidirectional, dynamic influences in extended social networks. At the micro level, agent processes are simulated by recurrent auto-associative networks, an architecture with a proven ability to simulate a variety of individual psychological and memory processes [D. Van Rooy, F. Van Overwalle, T. Vanhoomissen, C. Labiouse and R. French, Psychol. Rev. 110, 536 (2003)]. At the macro level, these individual networks are combined into a "community of networks" so that they can exchange individual information by transmitting information on the same concepts from one net to another. This essentially creates a network structure that reflects a social system in which (collections of) nodes represent individual agents and the links between agents represent the mutual social influences that connect them [B. Hazlehurst and E. Hutchins, Lang. Cogn. Process. 13, 373 (1998)]. The network structure itself is dynamic and is shaped by the interactions between the individual agents through simple processes of social adaptation. Through simulations, the cABM generates a number of novel predictions that broadly address three main issues: (1) the consequences of the interaction between multiple sources and targets of social influence, (2) the dynamic development of social influence over time, and (3) collective and individual opinion trajectories over time. Some of the predictions regarding individual-level processes have been tested and confirmed in laboratory experiments. In an extensive research program, data are currently being collected from real groups, which will make it possible to validate the cABM's predictions regarding aggregate outcomes.
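As a rough illustration of the "community of networks" coupling, the sketch below assumes each agent is a small linear auto-associator trained with a delta rule, and that social influence amounts to a neighbour learning from the pattern an agent expresses on a shared concept. This is a schematic reading of the macro-level mechanism, not the published model of Van Rooy et al.

```python
import numpy as np

N_CONCEPTS, N_AGENTS = 5, 10
rng = np.random.default_rng(2)

class Agent:
    """Linear auto-associator: learns to reconstruct concept activations."""
    def __init__(self):
        self.W = np.zeros((N_CONCEPTS, N_CONCEPTS))

    def learn(self, pattern, lr=0.2):
        # Delta-rule auto-association: reduce reconstruction error.
        err = pattern - self.W @ pattern
        self.W += lr * np.outer(err, pattern)

    def express(self, concept):
        # The agent's current "opinion": its reconstruction from a unit probe.
        probe = np.eye(N_CONCEPTS)[concept]
        return self.W @ probe

agents = [Agent() for _ in range(N_AGENTS)]
# Each agent first encodes a private, noisy view of the concepts.
for a in agents:
    for c in range(N_CONCEPTS):
        a.learn(np.eye(N_CONCEPTS)[c] + rng.normal(0, 0.3, N_CONCEPTS))

# Random social ties; in the cABM the link structure itself adapts over time.
ties = rng.random((N_AGENTS, N_AGENTS)) < 0.3

for step in range(200):
    speaker, concept = rng.integers(N_AGENTS), rng.integers(N_CONCEPTS)
    message = agents[speaker].express(concept)
    for listener in np.flatnonzero(ties[speaker]):
        agents[listener].learn(message)     # social influence as a learning input
```

Under these assumptions, repeated exchange pulls connected agents' reconstructions toward one another, giving a toy analogue of the opinion trajectories the cABM is said to predict.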
In this work we start from the idea that intentionality is the chief characteristic of intelligent behavior, both cognitive and deliberative. Investigating the "originality of intelligent life" from this standpoint means investigating "intentional behavior" in living organisms. We ask the epistemological questions involved in making intentional behavior the object of physical and mathematical inquiry. We show that the subjective component of intentionality, being tied to self-consciousness, can never become an object of scientific inquiry. On the other hand, inquiry into the objective physical and logical components of intentional acts is central to science. Such inquiry concerns logical and semantic questions, such as the reference and truth of logical symbols constituted as such, as well as their relationship to the "complexity" of brain networking. These questions engage cognitive neuroscience and computability theory, and together constitute one of the most intriguing intellectual challenges of our age. This metalogical inquiry indeed suggests some hypotheses about the remarkable "parallelism", "plasticity" and "storing capacity" that mammalian, and even human, brains might exhibit. Such properties, even though neurons are over five orders of magnitude slower than microchips, make biological neural nets much more efficient than artificial ones, even in the execution of simple cognitive and behavioral tasks.