We describe part of the stimulus sentences of a German-language ERP experiment on sentence processing using a context-free grammar, and represent different processing preferences by its unambiguous partitions. The processing is modeled by deterministic pushdown automata. Using a theorem proved by Moore, we map these automata onto discrete-time dynamical systems acting on the unit square, where the processing preferences are represented by a control parameter. The actual states of the automata are rectangles in the unit square that can be interpreted as cylinder sets in the sense of symbolic dynamics. We show that applying the wrong processing preference to a certain input string leads to an unwanted invariant set in the parser's dynamics. Syntactic reanalysis and repair can then be modeled by switching the control parameter, in analogy to phase transitions observed in brain dynamics. We argue that ERP components are indicators of these bifurcations, and we propose an ERP-like measure for the parsing model.
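The Moore-style embedding mentioned above can be illustrated with a minimal sketch. The following is an assumption-laden toy, not the paper's construction: it uses a binary stack alphabet with the top of the stack as the most significant digit, so that a pushdown stack maps to a point in [0, 1), push and pop become affine maps, and the set of encodings sharing a top symbol forms an interval, i.e. a cylinder set.

```python
# Toy sketch (assumptions: binary stack alphabet {0, 1}; top of stack
# encoded as the most significant base-2 digit). Illustrates how stack
# operations act as piecewise-affine maps on the unit interval.

B = 2  # stack alphabet size (assumed binary for simplicity)

def encode(stack):
    """Map a stack (top symbol first) to a point in [0, 1)."""
    x = 0.0
    for s in reversed(stack):
        x = (s + x) / B
    return x

def push(x, symbol):
    """Push is an affine contraction into the symbol's cylinder set."""
    return (symbol + x) / B

def pop(x):
    """Pop is the shift map: return (top symbol, encoding of the rest)."""
    top = int(B * x)
    return top, B * x - top

x = encode([1, 0, 1])      # stack with symbol 1 on top
assert 0.5 <= x < 1.0      # cylinder set of all stacks with top symbol 1
top, rest = pop(x)
assert top == 1
assert abs(rest - encode([0, 1])) < 1e-12
```

In the full construction, a second coordinate encodes the remaining input, so each automaton configuration is a point (and each partial configuration a rectangle) in the unit square.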
Joint attention is a communicative activity that allows social partners to share perceptual experiences by jointly attending to an object in the environment. Unlike the common approach to joint attention in robotics, which is based on a developmental view, here it is conceptualized within a psychophysical paradigm known as cueing. The triadic interaction of joint attention is formalized as the conditional probability of an attentional response given a target candidate derived from object features and a cue derived from a human partner's indication. A robotic system implementing this joint-attention model performed a series of tasks designed to demonstrate the properties of the computational model. The system successfully completed tasks that could not be solved from the information derived from a target object alone; furthermore, it demonstrated how perceptual and selection ambiguity is resolved through joint attentive interaction and converges to a common perceptual state. The results imply that a perceptual common ground is constructed upon the triadic relationship between user, robot, and objects through joint attentive interaction.
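The conditional-probability formulation can be sketched as follows. This is a hedged illustration, not the paper's model: the particular combination rule (a normalized product of feature-based salience and cue strength) and the example values are assumptions made here for concreteness.

```python
# Sketch of attention as a conditional probability over target candidates,
# combining object-feature salience with a cue from the partner's
# indication. The normalized-product rule is an illustrative assumption.

def attention_distribution(salience, cue):
    """P(attend to object i | features, cue) proportional to salience[i] * cue[i]."""
    scores = [s * c for s, c in zip(salience, cue)]
    total = sum(scores)
    return [sc / total for sc in scores]

# Two objects are equally salient, so features alone cannot select a
# target; the partner's cue (e.g. pointing) resolves the ambiguity.
salience = [0.4, 0.4, 0.2]
cue      = [0.1, 0.8, 0.1]
p = attention_distribution(salience, cue)
assert p.index(max(p)) == 1   # the cued object is selected
```

The example shows the point made in the abstract: when the target cannot be specified from object information alone, the cue shifts the posterior so that both partners converge on the same referent.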