In this work, we show how precomputed reachability information can be used to efficiently solve complex inverse kinematics (IK) problems, such as bimanual grasping or re-grasping, for humanoid robots. We present an integrated approach that generates collision-free IK solutions in cluttered environments while handling multiple potential grasping configurations for an object. To this end, the spatial reachability of the robot's workspace is efficiently encoded in discretized data structures, and sampling-based techniques are used to handle arbitrary kinematic chains. The algorithms are employed for single-handed and bimanual grasping tasks with a fixed robot base position, and methods are developed that efficiently incorporate the search for suitable robot locations. The approach is evaluated in different scenarios with the humanoid robot ARMAR-III.
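To illustrate the kind of discretized encoding the abstract refers to, the following is a minimal sketch of a voxel-based reachability map: the workspace is divided into voxels, and each voxel stores how often sampled IK queries for poses inside it succeeded during offline precomputation. The class and method names are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

class ReachabilityMap:
    """Hypothetical voxel-grid reachability map (illustrative sketch)."""

    def __init__(self, workspace_min, workspace_max, voxel_size):
        self.origin = np.asarray(workspace_min, dtype=float)
        self.voxel_size = float(voxel_size)
        dims = np.ceil((np.asarray(workspace_max, dtype=float) - self.origin)
                       / self.voxel_size).astype(int)
        self.hits = np.zeros(dims, dtype=np.int32)     # successful IK samples
        self.samples = np.zeros(dims, dtype=np.int32)  # total IK samples

    def _index(self, position):
        # No bounds checking for brevity; positions are assumed in-workspace.
        return tuple(((np.asarray(position, dtype=float) - self.origin)
                      / self.voxel_size).astype(int))

    def record_sample(self, position, ik_succeeded):
        """Update the map during offline precomputation."""
        idx = self._index(position)
        self.samples[idx] += 1
        if ik_succeeded:
            self.hits[idx] += 1

    def reachability(self, position):
        """Fraction of successful IK samples in the voxel at `position`."""
        idx = self._index(position)
        if self.samples[idx] == 0:
            return 0.0
        return float(self.hits[idx]) / float(self.samples[idx])
```

At query time, such a map lets an IK solver cheaply reject grasp poses in poorly reachable regions before running an expensive sampling-based search, which is one plausible way precomputed reachability speeds up the problems described above.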
Humanoid robots that have to operate in cluttered and unstructured environments, such as man-made and natural disaster scenarios, require sophisticated sensorimotor capabilities. A crucial prerequisite for the successful execution of whole-body locomotion and manipulation tasks in such environments is the perception of the environment and the extraction of associated environmental affordances, i.e., the action possibilities of the robot in the environment. We believe that such a coupling between perception and action could be key to substantially increasing the flexibility of humanoid robots.
In this paper, we approach the affordance-based generation of whole-body actions for stable locomotion and manipulation. We employ a rule-based system to assign affordance hypotheses to visually perceived environmental primitives in the scene. These hypotheses are then filtered using extended reachability maps that carry stability information in order to identify reachable affordance hypotheses. Finally, we formulate a chosen set of hypotheses as a constrained inverse kinematics problem in order to find whole-body configurations that utilize those hypotheses.
The proposed methods are implemented and tested in simulated environments based on RGB-D scans as well as on a real robotic platform.
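As a rough illustration of the filtering step described above, the sketch below keeps only those affordance hypotheses whose target position lies in a sufficiently reachable and stable region of an extended reachability map. The data structure, the `reachability`/`is_stable` interface, and the thresholds are all assumptions for illustration, not the paper's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class AffordanceHypothesis:
    label: str        # e.g. "graspable", "supportable" (illustrative labels)
    position: tuple   # candidate end-effector position (x, y, z)

def filter_hypotheses(hypotheses, extended_map,
                      min_reachability=0.2, require_stable=True):
    """Return hypotheses passing a reachability/stability check.

    `extended_map` is assumed to expose reachability(position) -> float
    and is_stable(position) -> bool, mirroring an extended reachability
    map that carries stability information.
    """
    reachable = []
    for h in hypotheses:
        if extended_map.reachability(h.position) < min_reachability:
            continue  # discard hypotheses the robot likely cannot reach
        if require_stable and not extended_map.is_stable(h.position):
            continue  # discard hypotheses that would compromise stability
        reachable.append(h)
    return reachable
```

Filtering before solving the constrained IK problem keeps the expensive whole-body optimization focused on hypotheses that are at least plausibly executable.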
Active tactile perception is a powerful mechanism for collecting contact information by touching an unknown object with a robot finger, enabling further interaction with or grasping of the object. The acquired object knowledge can be used to build object shape models from such usually sparse tactile contact information. In this paper, we address the problem of object shape reconstruction from sparse tactile data gained from a robot finger that yields contact information and surface orientation at the contact points. To this end, we present an exploration algorithm that determines the next best touch target in order to maximize the estimated information gain while minimizing the expected cost of exploration actions. We introduce the Information Gain Estimation Function (IGEF), which combines different goals into a measure quantifying the cost-aware information gain during exploration. The IGEF-based exploration strategy is validated in simulation using 48 publicly available object models and compared to state-of-the-art Gaussian-process-based exploration approaches. The results demonstrate the exploration efficiency and cost-awareness of the approach, as well as its suitability for real tactile sensing scenarios.
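In the spirit of the cost-aware selection described above, the following minimal sketch scores each candidate touch target by an estimated information gain penalized by the expected cost of reaching it, and picks the best one. The scoring form, the `cost_weight` parameter, and the callables are placeholder assumptions, not the paper's actual IGEF terms.

```python
import math

def select_next_touch(candidates, estimate_gain, expected_cost,
                      cost_weight=0.5):
    """Pick the candidate maximizing gain - cost_weight * cost.

    candidates     -- iterable of touch targets (e.g. 3D points)
    estimate_gain  -- callable mapping a target to its estimated info gain
    expected_cost  -- callable mapping a target to its expected motion cost
    """
    best_target, best_score = None, -math.inf
    for target in candidates:
        score = estimate_gain(target) - cost_weight * expected_cost(target)
        if score > best_score:
            best_target, best_score = target, score
    return best_target
```

Trading gain against cost in a single scalar score is one common way to keep greedy next-best-touch selection cheap while still discouraging long, uninformative finger motions.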