The immobilization of non-rigid objects is a largely unaddressed subject. We explore the problem by studying the immobilization of a serial chain of polygons connected by rotational joints, or hinges, in a given placement with frictionless point contacts. We show that n + 2 such contacts along edge interiors or at concave vertices suffice to immobilize any serial chain of n ≠ 3 polygons without parallel edges; it remains open whether five contacts can immobilize three hinged polygons. At most n + 3 contacts suffice to immobilize a serial chain of n polygons when the polygons are allowed to have parallel edges. We also study a robust version of immobility, comparable to the classical notion of form closure, which is insensitive to perturbations. The robustness is achieved at the cost of a small increase in the number of frictionless point contacts for a chain of n hinged polygons, both without and with parallel edges.
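A minimal sketch, assuming only the bounds stated above (the helper name and interface are illustrative, not from the paper), that returns the sufficient number of contacts:

    def sufficient_contacts(n, parallel_edges=False):
        """Upper bound on frictionless point contacts stated in the abstract.

        n + 2 contacts suffice for a serial chain of n hinged polygons without
        parallel edges (n != 3; whether five contacts suffice for n == 3 is open),
        and n + 3 contacts suffice when parallel edges are allowed.
        """
        if parallel_edges:
            return n + 3
        if n == 3:
            raise ValueError("open case: it is unknown whether 5 contacts suffice")
        return n + 2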
Vision-based grasping is widely studied in humans and other primates using various techniques and with different goals. This paper reviews the fundamental findings in the area, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, with a comprehensive but accessible view of the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is given from a functional perspective, in order to make the survey especially useful for computational modeling and bio-inspired robotic applications.
Future application areas for humanoid robots range from the household to agriculture, the military, and the exploration of space. Service applications such as these must address a changing, unstructured environment, collaboration with human clients, and the integration of manual dexterity and mobility. Control frameworks for service-oriented humanoid robots must therefore accommodate many independently challenging issues, including: techniques for configuring networks of sensorimotor resources; modeling tasks and constructing behavior in partially observable environments; and integrated control paradigms for mobile manipulators. Our approach advocates actively gathering salient information, modeling the environment, reasoning about solutions to new problems, and coordinating ad hoc interactions between multiple degrees of freedom to do mechanical work. Representations that encode control knowledge are a primary concern. Individual robots must exploit declarative structure for planning and must learn procedural strategies that work in recognizable contexts. We present several pieces of an overall framework in which a robot learns situated control policies that exploit existing control knowledge and extend its scope. Several examples drawn from the research agenda at the Laboratory for Perceptual Robotics illustrate the ideas.
We describe a process in which the segmentation of objects, as well as the extraction of their shape, is realized through active exploration by a robot vision system. The exploration process involves the interaction of two behavioral modules that link robot actions to the visual and haptic perception of objects. First, physical control over potential objects is gained by means of an object-independent grasping mechanism. Once this initial grasp has been evaluated as successful, a second behavior extracts the object shape by means of prediction based on the motion induced by the robot. This also leads to the concept of an "object" as a set of features that change predictably over different frames.
The system is equipped with a certain degree of generic prior knowledge about the world in the form of a sophisticated visual feature extraction process in an early cognitive vision system, knowledge about its own embodiment, and knowledge about geometric relationships such as rigid body motion. This prior knowledge allows the extraction of representations that are semantically richer than those of many other approaches.
A distinct property of robot vision systems is that they are embodied. Visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated.
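A self-contained toy sketch of the idea above, assuming nothing from the authors' implementation: an "object" is taken to be the set of visual features whose frame-to-frame motion matches the rigid motion the robot itself induces, while features that do not follow the prediction (background) are rejected. Names, the 2-D setting, and the threshold are illustrative.

    import numpy as np

    def segment_by_predictability(points_t0, points_t1, R, t, tol=1e-2):
        """Keep features whose observed motion matches the induced rigid motion."""
        predicted = points_t0 @ R.T + t
        errors = np.linalg.norm(predicted - points_t1, axis=1)
        return errors < tol

    rng = np.random.default_rng(0)
    R = np.array([[0.0, -1.0], [1.0, 0.0]])   # known 90-degree rotation of the hand
    t = np.array([0.1, 0.0])                  # known translation of the hand
    obj = rng.random((20, 2))                 # features on the grasped object
    bg = rng.random((10, 2))                  # background clutter
    p0 = np.vstack([obj, bg])
    p1 = np.vstack([obj @ R.T + t, bg + rng.normal(0.0, 0.2, bg.shape)])
    mask = segment_by_predictability(p0, p1, R, t)
    print("features labelled as object:", int(mask.sum()), "of", len(p0))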
In this paper, we study the problem of designing a vision system for object grasping in everyday environments. The vision system serves, first, to interact with the world through the recognition and grasping of objects and, second, as the interface between the reasoning and planning module and the real world. The latter provides the vision system with a task that drives it and defines a specific context, e.g. searching for or identifying a certain object and analyzing it for potential later manipulation. We deal with cases of: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system through the idea of affordances. All three cases are also related to the state of the art and the terminology in the neuroscience literature.
Humans control dozens of muscles in a coordinated manner to form different hand postures. Such coordination is referred to as a postural synergy. Postural synergies have enabled anthropomorphic robotic hands with many actuators to be applied as prosthetic hands controlled by two to three channels of biological signals. Principal component analysis (PCA) of hand postures has become a popular way to extract postural synergies. However, relatively large errors are often produced when hand postures are reconstructed from these PCA-synthesized synergies, due to the linear nature of the method. This paper presents a comparative study in which postural synergies are synthesized using both linear and nonlinear methods. Specifically, the Gaussian process latent variable model (GPLVM) is implemented as a nonlinear dimensionality reduction method to produce nonlinear postural synergies, and hand postures can then be reconstructed from the two-dimensional synergy plane. Computational and experimental verification shows that the posture reconstruction errors are greatly reduced using this nonlinear method. The results suggest that nonlinear postural synergies should be considered when applying a dexterous robotic hand as a prosthesis: versatile hand postures could be formed via only two channels of bio-signals.
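A hedged sketch of the comparison, on synthetic joint-angle data rather than recorded postures; scikit-learn's KernelPCA stands in here for the GPLVM used in the paper, simply to contrast a linear with a nonlinear two-dimensional synergy space:

    import numpy as np
    from sklearn.decomposition import PCA, KernelPCA

    rng = np.random.default_rng(0)
    # stand-in data: 200 "hand postures" with 20 joint angles lying near a
    # curved 2-D manifold (real data would come from a data glove or mocap)
    z = rng.uniform(-1, 1, (200, 2))
    postures = np.column_stack([z, np.sin(3 * z[:, :1]),
                                z[:, :1] * z[:, 1:]]) @ rng.random((4, 20))

    pca = PCA(n_components=2).fit(postures)
    linear_rec = pca.inverse_transform(pca.transform(postures))

    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                     fit_inverse_transform=True).fit(postures)
    nonlinear_rec = kpca.inverse_transform(kpca.transform(postures))

    rmse = lambda a, b: np.sqrt(((a - b) ** 2).mean())
    print("linear synergy RMSE:   ", rmse(postures, linear_rec))
    print("nonlinear synergy RMSE:", rmse(postures, nonlinear_rec))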
Specialized grippers used in industry are often restricted to specific tasks and objects. With the development of dexterous grippers such as humanoid hands, however, in-hand pose estimation becomes crucial for successful manipulation, since objects change their pose during and after the grasping process. In this paper, we present a gripping system and describe a new pose estimation algorithm based on tactile sensory information in combination with haptic rendering models (HRMs). We use a 3-finger manipulator equipped with tactile force-sensing elements. A particle filter processes the tactile measurements from these sensor elements to estimate the grasp pose of an object. The algorithm evaluates grasp-pose hypotheses by comparing tactile measurements with the expected tactile information from CAD-based haptic renderings, in which distance values between the sensor and the 3D model are converted to forces. Our approach compares the force distribution across taxels instead of absolute forces or distance values at each taxel. The haptic rendering models of the objects allow us to estimate the pose of soft, deformable objects. In comparison to mesh-based approaches, our algorithm reduces the computational complexity and recognizes ambiguous and geometrically impossible solutions.
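A minimal sketch of the measurement update such a filter might use, assuming a hypothetical render_forces callable in place of the CAD-based haptic rendering; the key point, as above, is that normalized force patterns rather than absolute values are compared:

    import numpy as np

    def normalize_distribution(forces, eps=1e-9):
        """Compare force *patterns* across taxels rather than absolute magnitudes."""
        f = np.clip(forces, 0.0, None)
        return f / (f.sum() + eps)

    def update_weights(particles, measured_forces, render_forces, sigma=0.05):
        """One particle-filter measurement update.

        particles       : (N, 6) candidate object poses (x, y, z, roll, pitch, yaw)
        measured_forces : (T,)   tactile readings of the T taxels
        render_forces   : callable(pose) -> (T,) expected forces from a haptic
                          rendering model; hypothetical stand-in for the CAD renderer
        """
        meas = normalize_distribution(measured_forces)
        weights = np.empty(len(particles))
        for i, pose in enumerate(particles):
            expected = normalize_distribution(render_forces(pose))
            weights[i] = np.exp(-np.sum((meas - expected) ** 2) / (2 * sigma ** 2))
        return weights / weights.sum()

    # toy usage with a dummy renderer and 12 taxels
    dummy_render = lambda pose: np.abs(np.sin(pose[:3]).repeat(4))[:12]
    particles = np.random.default_rng(1).normal(size=(50, 6))
    weights = update_weights(particles, np.random.rand(12), dummy_render)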
The human hand is a complex, highly articulated system that has been a source of inspiration for the design of humanoid robotic and prosthetic hands. Understanding the functionality of the human hand is crucial for the design, efficient control, and transfer of human versatility and dexterity to such anthropomorphic robotic hands. Although research in this area has made significant advances, the synthesis of grasp configurations based on observed human grasping data is still an unsolved and challenging task. In this work we derive a novel constrained autoencoder model that encodes human grasping data in a compact representation. This representation captures the grasp type in a three-dimensional latent space and the object size as an explicit parameter constraint, allowing the direct synthesis of object-specific grasps. We train the model on 2250 grasps generated by 15 subjects using 35 diverse objects from the KIT and YCB object sets. In the evaluation we show that the synthesized grasp configurations are human-like and have a high probability of success under pose uncertainty.
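A hedged PyTorch sketch of such a size-conditioned autoencoder; the layer sizes, the 24-DoF hand representation, and the way the object size enters the decoder are illustrative assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class SizeConditionedGraspAE(nn.Module):
        """Toy autoencoder: 3-D latent grasp code plus explicit object-size input."""
        def __init__(self, dof=24, latent=3):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dof, 64), nn.ReLU(),
                                         nn.Linear(64, latent))
            # the decoder sees the latent code *and* the object size
            self.decoder = nn.Sequential(nn.Linear(latent + 1, 64), nn.ReLU(),
                                         nn.Linear(64, dof))

        def forward(self, joints, size):
            z = self.encoder(joints)
            return self.decoder(torch.cat([z, size], dim=-1)), z

    model = SizeConditionedGraspAE()
    joints = torch.randn(8, 24)              # batch of hand configurations
    size = torch.rand(8, 1)                  # object size for each grasp
    recon, z = model(joints, size)
    loss = nn.functional.mse_loss(recon, joints)
    loss.backward()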
Dexterous grasping of a novel object given a single view is an open problem. This paper makes several contributions to its solution. First, we present a simulator for generating and testing dexterous grasps. Second, we present a dataset, generated by this simulator, of 2.4 million simulated dexterous grasps of variations of 294 base objects drawn from 20 categories. Third, we combine an existing approach for learning a grasp generation model with three different learned evaluative models employing ResNet-50 or VGG16 as their visual backbone. Fourth, we train and evaluate 17 variants of the resulting generative-evaluative architectures on the simulated dataset, showing an improvement in grasp success rate from 69.53% to 90.49%. Fifth, we present a real robot implementation and evaluate the four most promising variants, executing 196 real robot grasps in total. We show that our best architectural variant achieves a grasp success rate of 87.8% on real novel objects seen from a single view, improving on a baseline of 57.1%. Finally, we explore the inner workings of our best evaluative model and perform an extensive analysis of its results on the simulated dataset.
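A hedged sketch of the generate-then-rank pattern behind such generative-evaluative architectures: candidate grasps (here drawn at random in place of a learned generative model) are scored by an evaluator with a ResNet-50 backbone, and the best-scoring candidate is selected. The grasp parameterization and the head are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class GraspEvaluator(nn.Module):
        """Scores (view, grasp) pairs; ResNet-50 backbone with a small head."""
        def __init__(self, grasp_dim=7):
            super().__init__()
            backbone = resnet50(weights=None)      # untrained stand-in backbone
            backbone.fc = nn.Identity()            # expose the 2048-d features
            self.backbone = backbone
            self.head = nn.Sequential(nn.Linear(2048 + grasp_dim, 256), nn.ReLU(),
                                      nn.Linear(256, 1), nn.Sigmoid())

        def forward(self, image, grasp):
            feat = self.backbone(image)
            return self.head(torch.cat([feat, grasp], dim=-1)).squeeze(-1)

    evaluator = GraspEvaluator().eval()
    view = torch.randn(1, 3, 224, 224).repeat(16, 1, 1, 1)   # the single view
    candidates = torch.randn(16, 7)        # assumed wrist pose + grip parameters
    with torch.no_grad():
        scores = evaluator(view, candidates)
    best_grasp = candidates[scores.argmax()]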
In this paper, an efficient, low-cost, cellphone-commandable mobile manipulation system is described. Aimed at household and elderly care, the system can be commanded over a common cellphone network to grasp objects in household environments, using several low-cost off-the-shelf devices. Unlike visual servoing approaches that rely on expensive, high-quality vision systems, a household-service robot cannot afford such hardware, so low-cost devices are essential. However, achieving precise localization and motion control with such low-cost vision is extremely challenging. To tackle this challenge, we developed a real-time vision system and present a reliable grasping algorithm that combines machine vision, robot kinematics, and motor control. Once the arm camera has captured the target, it keeps tracking the target while the arm stretches until the end effector reaches it. If the arm camera has not yet captured the target, the arm moves under the guidance of the head camera to help the arm camera acquire the target. This algorithm is implemented on two robot systems, one with a fixed base and one with a mobile base. The results demonstrate the feasibility and efficiency of the developed algorithm and system, and the study is relevant to the development of service robots for modern household environments.
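A toy sketch of the described guidance logic (all names, gains, and the simulated state are assumptions, not the paper's implementation): keep the target centered in the arm camera while the arm extends, and fall back to head-camera guidance when the arm camera has not acquired the target.

    import numpy as np

    IMAGE_CENTRE = np.array([0.5, 0.5])   # normalized image coordinates

    def reach_step(arm_detection, head_detection, extension, gain=0.4):
        """One control step; returns (steering correction, new arm extension)."""
        if arm_detection is not None:
            # arm camera sees the target: keep it centred and keep stretching
            return -gain * (arm_detection - IMAGE_CENTRE), extension + 0.02
        # arm camera lost the target: steer toward the head-camera estimate
        return gain * (head_detection - IMAGE_CENTRE), extension

    correction, ext = reach_step(np.array([0.62, 0.47]), np.array([0.55, 0.40]), 0.30)
    print(correction, ext)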
This paper explores grasping in robot-aided upper-extremity rehabilitation, with a special focus on reaching and grasping exercises and the coordination between load force and grasp force. Six healthy subjects and two hemiparetic subjects performed "pick and place" movements with a haptic robot and a virtual environment. These movements were segmented into three phases, grasping, transport, and release, and the correlation between grasp and load force was calculated over the entire movement and within each phase separately. Results show that the subjects employ the same basic mechanism of grasp and load force coordination during a virtual task as in real situations. However, the grasp and load forces are partially decoupled due to the nature of the grasping device and the complexity of the task. Furthermore, the coordination differs between phases and also depends on the level of impairment as well as the level of active support by the rehabilitation robot. The first hemiparetic subject, who can perform reaching movements but cannot open the hand, has a lower correlation between grasp force and load force than healthy subjects only in the release phase, while the second hemiparetic subject, who has little arm mobility, has a lower correlation in all three phases. The current work thus provides basic empirical knowledge that can serve as a basis for future research and for the design of robot-aided reaching and grasping tasks.
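A small numpy sketch of the analysis described above, on synthetic force traces rather than recorded data: the Pearson correlation between grasp and load force is computed over the whole movement and within each of the three phases.

    import numpy as np

    def phase_correlations(grasp_force, load_force, phase_labels):
        """Pearson correlation of grasp vs. load force, overall and per phase."""
        result = {"overall": np.corrcoef(grasp_force, load_force)[0, 1]}
        for phase in np.unique(phase_labels):
            idx = phase_labels == phase
            result[str(phase)] = np.corrcoef(grasp_force[idx], load_force[idx])[0, 1]
        return result

    # synthetic traces: 300 samples split evenly into the three movement phases
    t = np.linspace(0.0, 3.0, 300)
    load = np.abs(np.sin(t))
    grasp = 0.8 * load + 0.05 * np.random.default_rng(0).normal(size=t.size)
    phases = np.repeat(["grasping", "transport", "release"], 100)
    print(phase_correlations(grasp, load, phases))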
Classically, visual attention is assumed to be influenced by the visual properties of objects, e.g. as assessed in visual search tasks. However, recent experimental evidence suggests that visual attention is also guided by action-related properties of objects ("affordances") [1,2]; e.g. the handle of a cup affords grasping the cup, and therefore attention is drawn towards the handle. As a first step towards modelling this interaction between attention and action, we implemented the Selective Attention for Action model (SAAM). The design of SAAM is based on the Selective Attention for Identification model (SAIM) [3]. For instance, we also followed a soft-constraint satisfaction approach in a connectionist framework. However, SAAM's selection process is guided by locations within objects suitable for grasping them, whereas SAIM selects objects based on their visual properties. To implement SAAM's selection mechanism, two sets of constraints were used. The first set of constraints takes into account the anatomy of the hand, e.g. the maximal possible distances between fingers. The second set (geometrical constraints) considers suitable contact points on objects, obtained using simple edge detectors. We demonstrate that SAAM can successfully mimic human behaviour by comparing simulated contact points with experimental data.
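A hedged toy sketch of the soft-constraint idea (not the SAAM network itself): candidate finger contact points are scored by combining a geometric term, here an edge-strength response, with an anatomical penalty on finger spans exceeding an assumed hand limit. All values are illustrative.

    import numpy as np

    def soft_constraint_score(contacts, edge_strength, max_span=0.11, w_anat=5.0):
        """Score candidate contact points: edge support minus anatomical violations.

        contacts      : (k, 2) candidate finger contact coordinates (metres)
        edge_strength : callable(point) -> edge response (geometric constraint)
        max_span      : assumed maximal distance between any two fingers (anatomy)
        """
        geometric = sum(edge_strength(p) for p in contacts)
        spans = [np.linalg.norm(a - b)
                 for i, a in enumerate(contacts) for b in contacts[i + 1:]]
        anatomical = sum(max(0.0, s - max_span) for s in spans)
        return geometric - w_anat * anatomical

    edge = lambda p: float(np.exp(-np.linalg.norm(p - np.array([0.30, 0.40]))))
    print(soft_constraint_score(np.array([[0.30, 0.41], [0.32, 0.44]]), edge))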
The superiority of deformable human fingertips over hard robot gripper fingers for grasping and manipulation has led to a number of investigations of robot hands employing elastomers, or materials such as fluids or powders beneath a membrane, at the fingertips. It is therefore interesting to study the phenomenon of contact interaction with an object and its manipulation. In this paper, bond graph modeling is used to model the contact between a grasped object and two soft fingertips. A detailed bond graph model (BGM) is presented for two soft-finger contacts placed against each other on opposite sides of the grasped object, as is generally the case in a manufacturing environment. The stability of the grasped object is determined, taking into account the friction between the soft fingertip surfaces and the object, and the stiffness of the springs is exploited in achieving stability in soft grasping. The downward weight of the object is supported by the friction between the fingers and the object during the application of contact forces, which is regulated by varying the damping and the stiffness of the soft fingers.
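A hedged lumped-parameter sketch of the contact physics such a bond graph encodes (not a bond graph implementation): each soft fingertip is modeled as a spring-damper producing a normal force, and the grasp holds if the total available Coulomb friction at the two opposing contacts balances the object's weight. All parameter values are illustrative.

    def soft_finger_normal_force(penetration, penetration_rate, k=2000.0, b=30.0):
        """Lumped spring-damper normal force (N) at one soft fingertip."""
        return max(0.0, k * penetration + b * penetration_rate)

    def grasp_holds(mass, mu, penetration, penetration_rate=0.0, g=9.81):
        """Two opposing soft-finger contacts: available friction vs. object weight."""
        normal = soft_finger_normal_force(penetration, penetration_rate)
        return 2.0 * mu * normal >= mass * g

    # 0.4 kg object, friction coefficient 0.6, 2 mm fingertip compression
    print(grasp_holds(mass=0.4, mu=0.6, penetration=0.002))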
This paper describes an approach for real-time preparation of grasping tasks, based on the low-order moments of the target's shape on a stereo pair of images acquired by an active vision head. The objective is to estimate the 3D position and orientation of an object and of the robotic hand, by using computationally fast and independent software components. These measurements are then used for the two phases of a reaching task: (i) an initial phase whereby the robot positions its hand close to the target with an appropriate hand orientation, and (ii) a final phase where a precise hand-to-target positioning is performed using Position-Based Visual Servoing methods.
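A small numpy sketch of the low-order-moments step: the centroid and in-plane orientation of the target blob in one image are computed from its zeroth-, first-, and second-order moments; applying this to both images of the stereo pair and triangulating the centroids would then give the 3-D position used to initialize the servoing. The binary mask and the blob are synthetic.

    import numpy as np

    def blob_pose_2d(mask):
        """Centroid and orientation of a binary blob from its low-order moments."""
        ys, xs = np.nonzero(mask)
        m00 = xs.size                                 # zeroth-order moment (area)
        cx, cy = xs.mean(), ys.mean()                 # first-order moments / m00
        mu20 = ((xs - cx) ** 2).sum() / m00           # central second-order moments
        mu02 = ((ys - cy) ** 2).sum() / m00
        mu11 = ((xs - cx) * (ys - cy)).sum() / m00
        theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
        return (cx, cy), theta

    mask = np.zeros((100, 100), dtype=bool)
    mask[40:60, 20:80] = True                         # elongated horizontal blob
    (cx, cy), theta = blob_pose_2d(mask)
    print(cx, cy, np.degrees(theta))                  # ~49.5, ~49.5, ~0 degrees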
Today's walking robots are capable of walking in a wide range of terrains. One key feature, especially important for the exploration of unknown or extraterrestrial areas, is however rarely seen: the ability to grasp and manipulate objects or to pick up samples. In this paper we present a robust, lightweight, and very versatile gripper, designed specifically to be mounted on a walking robot's leg, enabling LAURON V and other robots to use their legs for manipulation tasks.
This paper presents a general framework for an autonomous climbing robot. The objective of the work is to design a climbing robot using robot arms capable of grasping a surface, e.g. a pole. The robot consists of two arms, each with attached wheels, enabling the robot to climb or descend steep surfaces such as poles or the gap between two walls. The arms are used for grasping the surface, while the wheels are used for moving upward or downward. The wheels provide friction contacts that allow force to be applied to the surface for stable climbing and descending. Fundamental challenges in the development of real robotic systems include the hardware design, the control system, the grasping and manipulation technique, and the analysis of the scansorial robot.