CONTROL OF EYE AND ARM MOVEMENTS USING ACTIVE, ATTENTIONAL VISION
The experimental research described here was done while the author was visiting the Vision and Robotics Laboratory at the University of Rochester. This research was supported by NSF grant no. IRI-9010899.
Recent related approaches in vision, motor control and planning attempt to reduce the computational requirements of each process by restricting the class of problems that can be addressed. Active vision, differential kinematics and reactive planning are all characterized by their minimal use of representations, which simplifies both the required computations and the acquisition of models. This paper describes an approach to visually-guided motor control that is based on active vision and differential kinematics, and is compatible with reactive planning. Active vision depends on the ability to select a region of the visual environment for task-specific processing; visual attention provides the mechanism for making that selection. In addition, the attentional mechanism provides the interface between the vision and motor systems by representing visual position information in a 3-D retinocentric coordinate frame. Coordinates in this frame are transformed into eye and arm motor coordinates using kinematic relations expressed differentially, that is, as relations between velocities rather than absolute positions. A real-time implementation of these visuomotor mechanisms has been used to develop a number of visually-guided eye and arm movement behaviors.
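As a rough illustration of the differential-kinematics idea described above (a sketch, not the authors' implementation), the fragment below maps a retinocentric position error of an attended target to joint-velocity commands through a Jacobian pseudoinverse; the Jacobian entries, the gain, and the three-joint layout are hypothetical.

import numpy as np

def joint_velocity_command(target_retino, J, gain=1.0):
    """Map a 3-D retinocentric position error to joint velocities.

    target_retino : (3,) position of the attended target relative to the
                    fixation point, expressed in a retinocentric frame
                    (hypothetical units).
    J             : (3, n) Jacobian relating joint velocities of the
                    eye/arm chain to motion of the attended point in
                    that frame (hypothetical values).
    gain          : proportional gain driving the error toward zero.
    """
    # Differential kinematics: dq = J^+ dx, so only a locally valid
    # Jacobian is needed, not a full inverse-kinematic model.
    dx = gain * np.asarray(target_retino, dtype=float)
    dq = np.linalg.pinv(J) @ dx
    return dq

# Hypothetical 3-joint example: command the joints so the attended
# target moves toward the center of the retinocentric frame.
J = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])
print(joint_velocity_command([0.05, -0.02, 0.10], J, gain=2.0))

Because the relation is purely local, such a controller can be run repeatedly in a closed loop, re-measuring the retinocentric error after each small movement rather than planning a complete trajectory in advance.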