The current investigation focuses on the development of a novel navigational controller for optimized path planning and navigation of humanoid robots. The proposed navigational controller works on the principle of adaptive particle swarm optimization. To improve on the behavior of a standard particle swarm optimization controller, modifications are made to the controlling parameters of the algorithm. The inputs to the controller are sensory readings in the form of obstacle distances, and the output is the turning angle required to safely reach the target position while avoiding the obstacles present in the path. Applying the logic of adaptive particle swarm optimization, humanoid robots are tested in simulation environments. To validate the results, an experimental platform is also developed under laboratory conditions, and a comparison is performed between the simulation and experimental results. To test the proposed controller in both static and dynamic environments, it is implemented in the navigation of single as well as multiple humanoid robots. Finally, to establish the efficacy of the proposed controller, it is compared with several existing navigation techniques.
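The abstract does not specify the exact parameter adaptation used; a common adaptive modification is a linearly decreasing inertia weight. The sketch below is illustrative only (function names, bounds, and parameter values are assumptions, not the paper's), minimizing a cost over candidate turning angles:

```python
import random

def adaptive_pso(cost, lo, hi, n_particles=20, iters=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Minimize `cost` over [lo, hi] with a linearly decreasing inertia weight."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g], pbest_cost[g]
    for t in range(iters):
        # adaptive step: inertia weight decays from w_max to w_min
        w = w_max - (w_max - w_min) * t / iters
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # keep angle in range
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i], c
    return gbest
```

In a navigation setting, `cost` would score a candidate turning angle by target heading error plus an obstacle-proximity penalty derived from the sensed distances.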
Handwriting has always been considered an important human task, and accordingly it has attracted the attention of researchers working in biomechanics, physiology, and related fields. There exist a number of studies in this area. This paper considers the human–machine analogy and relates robots with handwriting. The work is two-fold: it improves our knowledge of the biomechanics of handwriting and introduces some new concepts in robot control. The idea is to find the biomechanical principles humans apply when resolving kinematic redundancy, express the principles by means of appropriate mathematical models, and then implement them in robots. This is a step forward in the generation of human-like motion of robots. Two approaches to redundancy resolution are described: (i) "Distributed Positioning" (DP), which is based on a model representing arm motion in the absence of fatigue, and (ii) the "Robot Fatigue" approach, which generates robot movements similar to the movements of a human arm under muscle fatigue. Both approaches are applied to a redundant anthropomorphic robot arm performing handwriting. The simulation study includes the issues of legibility and inclination of handwriting. The results demonstrate the suitability and effectiveness of both approaches.
This paper addresses the construction of a system that realizes whole-body reaching motion in humanoids. Humanoids have many redundant degrees of freedom for reaching, and even the base can be moved by making the robot step. Therefore, there are infinitely many final-posture solutions for a given reaching goal position, and likewise infinitely many reaching trajectories that realize a given final posture. It is, however, difficult to find an appropriate solution because of the constraint of dynamic balance and the relatively narrow movable range of each joint. We prepared basic postures heuristically, and a final reaching posture is generated by modifying one of them. Heuristics, such as the fact that kneeling down is suitable for reaching near the ground, can be implemented easily using this method. The components of the reaching system are described: basic posture selection, modification of postures to generate final reaching postures, balance compensation, footstep planning to realize the desired foot positions, and generation and execution of whole-body motion to the final reaching postures. Reaching to manually set positions and picking up a bat at various postures using visual information are presented as experiments demonstrating the performance of the system.
This paper elaborates a generalized approach to the modeling of human and humanoid motion. Instead of the usual inductive approach that starts from the analysis of different situations of real motion (like bipedal gait and running; playing tennis, soccer, or volleyball; gymnastics on the floor or using some gymnastic apparatus) and tries to make a generalization, the deductive approach considered begins by formulating a completely general problem and deriving different real situations as special cases. The paper first explains the general methodology. The concept and the software realization are verified by comparing the results with the ones obtained by using "classical" software for one particular well-known problem: biped walk. The applicability and potentials of the proposed method are demonstrated by simulation using a selected example. The simulated motion includes a landing on one foot (after a jump), the impact, a dynamically balanced single-support phase, and overturning (falling down) when the balance is lost. It is shown that the same methodology and the same software can cover all these phases.
A humanoid is a robot that looks like a human (it has human shape, with a trunk, two arms, two legs and a head) and has been specially designed to act like a human being. This paper presents the procedures and results of a series of tests consisting of the reproduction of some target humanoid movements (walking, sitting, standing up, etc.) by a 4-DOF spherical motion base. In this way, an operator seated on the motion base will be able to feel the movements of the humanoid, creating a feeling of teleexistence. This paper contributes to the improvement and development of the teleexistence/telepresence technology applied to humanoid research.
This paper discusses the generation of a running pattern for a humanoid biped and verifies the validity of the proposed method of running pattern generation via experiments. Two running patterns are generated independently in the sagittal plane and in the frontal plane, and the two patterns are then combined. When a running pattern is created with resolved momentum control in the sagittal plane, the angular momentum of the robot about the Center of Mass (COM) is set to zero, as the angular momentum causes the robot to rotate. However, this also induces unnatural motion of the upper body of the robot. To solve this problem, the biped was modeled as a virtual under-actuated robot with a free joint at its support ankle, and a fixed point for the virtual under-actuated system was determined. Following this, a periodic running pattern in the sagittal plane was formulated using the fixed point. The fixed point is easily determined numerically. A running pattern in the frontal plane was generated in the same way. In an experiment, a humanoid biped known as KHR-2 ran forward using the proposed running pattern generation method. Its maximum velocity was 2.88 km/h.
Biped robots possess higher capabilities than other mobile robots for moving in uneven environments. However, due to the natural postural instability of these robots, their motion planning and control become a more important and challenging task. This article presents a Cartesian approach for gait planning and control of biped robots that does not require inverse kinematics or joint-space trajectories; the proposed approach can therefore substantially reduce processing time in both simulation studies and online implementations. It is based on constraining four main points of the robot in Cartesian space: the tips of the right and left feet, the hip joint, and the total center of mass (CM). The approach exploits the concept of Transpose Jacobian control as a virtual spring and damper between each of these points and its corresponding desired trajectory, which overcomes the redundancy problem. Furthermore, when controlling biped robots based on desired trajectories in the task space, the system may track the desired trajectory while the knee is broken. This problem is solved here using a PD controller called the Knee Stopper. Similarly, another PD controller, the Trunk Stopper, is proposed to limit the trunk motion. The obtained simulation results show that the proposed Cartesian approach can successfully track desired trajectories on various surfaces with lower computational effort.
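The Transpose Jacobian idea, a virtual spring and damper pulling a task-space point toward its desired trajectory, can be sketched for a planar two-link limb. Link lengths and gains below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def transpose_jacobian_torque(q, dq, x_des, dx_des, l1=0.5, l2=0.5,
                              kp=200.0, kd=20.0):
    """Joint torques from a virtual Cartesian spring-damper:
    tau = J^T (Kp * e + Kd * de), for a planar 2-link limb."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    # forward kinematics of the limb tip
    x = np.array([l1 * c1 + l2 * c12, l1 * s1 + l2 * s12])
    # geometric Jacobian of the tip position
    J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                  [ l1 * c1 + l2 * c12,  l2 * c12]])
    dx = J @ dq
    # virtual spring-damper force in Cartesian space
    f = kp * (x_des - x) + kd * (dx_des - dx)
    return J.T @ f  # map the force to joint torques, no inverse kinematics needed
```

Note that only the Jacobian transpose appears, so no matrix inversion or joint-space trajectory is required, which is the source of the computational savings claimed above.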
Robust vision in dynamic environments using limited processing power is one of the main challenges in robot vision. This is especially true in the case of biped humanoids that use low-end computers. Techniques such as active vision, context-based vision, and multi-resolution are currently in use to deal with these highly demanding requirements. Thus, motivated by the development of robust, high-performance robot vision systems that can operate in dynamic environments with limited computational resources, we propose a spatiotemporal context integration framework that improves the perceptual capabilities of a given robot vision system. Furthermore, we link the vision, tracking, and self-localization problems using a context filter to improve the performance of these components jointly rather than separately. This framework computes: (i) an estimation of the poses of visible and nonvisible objects using Kalman filters; (ii) the spatial coherence of each current detection with all other simultaneous detections and with all tracked objects; and (iii) the spatial coherence of each tracked object with all current detections. Using a Bayesian approach, we calculate the a posteriori probabilities for each detected and tracked object, which are used in a filtering stage. As a first application of this framework, we choose the detection of static objects in the RoboCup Standard Platform League domain, where Nao humanoid robots are employed. The proposed system is validated in simulations and using real video sequences. In noisy environments, the system is able to greatly decrease the number of false detections and to effectively improve the self-localization of the robot.
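The pose-estimation step (i) is standard Kalman filtering. A minimal sketch for one coordinate under a constant-velocity model (the noise values are illustrative); the measurement update is skipped when the object is not visible, so prediction alone propagates the pose:

```python
import numpy as np

def kalman_step(x, P, z, dt=0.033, q=1e-3, r=0.05):
    """One predict/update cycle for a constant-velocity model tracking
    one coordinate of an object's pose. State x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
    H = np.array([[1.0, 0.0]])             # only position is measured
    Q = q * np.eye(2)                      # process noise covariance
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update (z is None when the object is currently not visible)
    if z is not None:
        y = z - (H @ x)[0]                 # innovation
        S = (H @ P @ H.T)[0, 0] + r        # innovation covariance
        K = (P @ H.T)[:, 0] / S            # Kalman gain
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H[0])) @ P
    return x, P
```

The spatial-coherence scores in steps (ii) and (iii) would then weight the a posteriori probability of each track before the filtering stage.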
An original method to build a visual model for unknown objects by a humanoid robot is proposed. The algorithm ensures successful autonomous realization of this goal by addressing the problem as an active coupling between computer vision and whole-body posture generation. The visual model is built through the repeated execution of two processes. The first considers the current knowledge about the visual aspects and the shape of the object to deduce a preferred viewpoint, with the aim of reducing the uncertainty of the shape and appearance of the object. This is done while considering the constraints related to the embodiment of the vision sensors in the humanoid head. The second process generates a whole-robot posture using the desired head pose while solving additional constraints such as collision avoidance and joint limitations. The main contribution of our approach lies in the use of different optimization algorithms to find an optimal viewpoint by including the humanoid specificities in terms of constraints, an embedded vision sensor, and redundant motion capabilities. This approach differs significantly from traditional works addressing the problem of autonomously building an object model.
We describe the stabilization of a hopping humanoid robot against a disturbance. In the proposed scheme, the method of control is selected according to the size of the disturbance. A posture balance controller is used when the disturbance is small, and the posture balance controller and a foot placement method are activated together when the disturbance is large. A simplified model is used to develop the novel controller for the foot placement method, and a linearized Poincaré map for single hopping is constructed. The control law is designed using the pole placement method. The proposed method is verified through simulation and experiment. In the experiment, HUBO2 hops well against various disturbances.
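Pole placement on a linearized return map can be illustrated generically. The matrices in the test below are invented for illustration (HUBO2's actual linearized Poincaré map is not given in the abstract); the sketch uses Ackermann's formula for a two-state, single-input discrete system:

```python
import numpy as np

def place_poles_2d(A, B, p1, p2):
    """Ackermann's formula for a 2-state single-input discrete system
    x_{k+1} = A x_k + B u_k: returns K so that eig(A - B K) = {p1, p2}."""
    C = np.hstack([B, A @ B])  # controllability matrix [B, AB]
    # desired characteristic polynomial (s - p1)(s - p2) evaluated at A
    phi = A @ A - (p1 + p2) * A + (p1 * p2) * np.eye(2)
    return np.array([[0.0, 1.0]]) @ np.linalg.inv(C) @ phi
```

Placing both poles inside the unit circle makes the hop-to-hop error of the return map decay, which is exactly the stability notion used for periodic hopping.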
This paper describes a technique to compensate for the undesired yaw moment that is inevitably induced about the support foot during single-support phases while a bipedal robot is in motion. The main strategy is to rotate the upper body so as to exert a secondary moment that counteracts the factors creating the undesired moment. In order to compute the yaw moment with all factors considered, we utilized the Eulerian ZMP Resolution, as it is capable of characterizing the robot's rotational inertia, a crucial component of its dynamics. In doing so, intrinsic angular-momentum rate changes are smoothly included in the yaw moment equations. Applying the proposed technique, we conducted several bipedal walking experiments using the actual bipedal robot CoMan. As a result, we obtained a 61% decrease in undesired yaw moment and an 82% reduction in yaw-axis deviation compared to off-the-shelf techniques, satisfactorily verifying the efficiency of the proposed approach.
In this paper, we present Furhat — a back-projected human-like robot head using state-of-the-art facial animation. Three experiments are presented in which we investigate how the head might facilitate human–robot face-to-face interaction. First, we investigate how the animated lips increase the intelligibility of the spoken output, and compare this to an animated agent presented on a flat screen, as well as to a human face. Second, we investigate the accuracy of the perception of Furhat's gaze in a setting typical for situated interaction, where Furhat and a human are sitting around a table. The accuracy of the perception of Furhat's gaze is measured as a function of eye design, head movement, and viewing angle. Third, we investigate the turn-taking accuracy of Furhat in a multi-party interactive setting, as compared to an animated agent on a flat screen. We conclude with some observations from a public setting at a museum, where Furhat interacted with thousands of visitors in multi-party interaction.
Even though many humanoid robots have been developed and have locomotion ability, their balancing ability is not yet sufficient. In the future, humanoid robots will work and act within human environments, where they will be exposed to various disturbances. This paper proposes a balancing strategy for hopping humanoid robots against disturbances of various magnitudes. The proposed strategy consists of two controllers: the posture balance controller and the landing position controller. The posture balance controller is used for small disturbances; its role is to maintain stability by controlling the ankle torque of the robot. If the disturbance is large, the landing position controller, which changes the landing position of the swing foot, works simultaneously with the posture balance controller. In this way, the landing position controller rejects large disturbances, and the posture balance controller handles the remaining ones. The landing position controller is derived from the principle of energy conservation. An experiment conducted with a real humanoid robot, HUBO2, verifies the proposed method: HUBO2 performed stable, continuous hopping with the proposed balancing strategy while overcoming various disturbances placed in its way.
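The abstract does not give the energy-conservation derivation itself; a closely related, well-known result is the capture point of the linear inverted pendulum, obtained by setting the pendulum's orbital energy to zero. The sketch below uses that substitute formula for illustration:

```python
import math

def capture_point_offset(xdot, z0, g=9.81):
    """Linear inverted pendulum capture point: stepping this far ahead of the
    CoM brings the horizontal velocity `xdot` to rest. Derived by setting the
    orbital energy E = 0.5*xdot^2 - (g / (2*z0)) * x^2 to zero."""
    return xdot * math.sqrt(z0 / g)
```

Intuitively, a larger push (larger `xdot`) or a taller pendulum (larger `z0`) demands a longer step, which matches the role of the landing position controller described above.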
Humanoid robots have become more and more popular, as illustrated by the increasing number of available platforms and the large number of high-quality publications in the research areas of navigation and motion planning for humanoids. Recently, a lot of progress has been made in the areas of 3D perception, efficient environment representation, fast collision checking, and motion planning for navigation and manipulation with humanoids, also under uncertainty and real-time constraints. All of these techniques work well in their individual application scenarios; however, no current system combines the individual approaches. Thus, we are still far from the deployment of a humanoid robot in the real world. The goal of this special issue is to identify gaps in the research directions and to discuss which aspects need to be considered when combining the different approaches, so as to enable humanoids to reliably act and navigate in real environments for an extended period of time.
This work presents a method to handle walking on rough terrain using inverse dynamics control and information from a stereo vision system. The ideal trajectories for the center of mass (CoM) and the next position of the feet are given by a pattern generator, which is able to automatically find the footsteps for a given direction. Then, an inverse dynamics control scheme relying on a quadratic programming optimization solver is used to move each foot from its initial to its final position while also controlling the CoM and the waist. A 3D model of the ground is reconstructed through the robot's cameras, located on its head as a stereo vision pair. The model lets the system know the structure of the ground where the swinging foot is going to step. Thus, contact points can be handled to adapt the foot position to the ground conditions.
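The QP-based inverse dynamics scheme can be approximated, for illustration only, by stacking weighted task constraints (feet, CoM, waist) into a single regularized least-squares problem; this is a simplified stand-in for the authors' solver, not their actual formulation:

```python
import numpy as np

def resolve_accelerations(tasks, n_dof, reg=1e-6):
    """Resolve joint accelerations from a list of (weight, J, b) tasks,
    each encoding a linear task constraint J @ ddq = b (e.g. swing foot,
    CoM, waist). Solved as one damped least-squares problem."""
    A = np.vstack([w * J for w, J, _ in tasks])
    rhs = np.concatenate([w * b for w, _, b in tasks])
    # damping term keeps the problem well-posed when tasks conflict
    A = np.vstack([A, reg * np.eye(n_dof)])
    rhs = np.concatenate([rhs, np.zeros(n_dof)])
    ddq, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return ddq
```

A real QP additionally enforces inequality constraints (torque limits, friction cones at the adapted contact points), which a plain least-squares solve cannot express.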
Recent advances in control of humanoid robots have resulted in bipedal gaits that are dynamically stable on moderately rough terrain but are still limited to a small range of slopes. Humanoid robots, like humans, can take advantage of quadruped gaits to greatly extend this range. Cleverly designed gaits can provide robustness to rough terrain without requiring extensive feedback. In this paper, we present a robust crab-walking framework that includes forward and backward crawling patterns, rotation patterns, and sit-down and recovery sequences. The latter are activated autonomously once the robot detects that it has tipped over. The performance and robustness of each locomotion pattern are investigated over a wide range of slopes. Crab-walking is shown to be especially adept at crawling forward on steep downward slopes (up to -54°) and crawling backward on steep upward slopes (up to 18°). Finally, we demonstrate the framework's autonomous capabilities by crossing the rough terrain of DARPA's Virtual Robotics Challenge.
This paper describes a design for a humanoid shoulder complex that replicates human shoulder girdle motion. The goal is to use the minimum number of actuators, keeping the mechanism as light as possible so that the humanoid is not too top-heavy. The human shoulder girdle has two degrees of freedom (DOF), so the minimum number of actuators is also two. The proposed mechanism is a novel parallel platform with two DOF that acts as a pointing mechanism. As the mechanism is articulated, the end-effector moves, resulting in contraction or elongation that mimics the natural motion of the human shoulder girdle. A parallel platform was chosen because of its inherent rigidity and because a large workspace is not necessary. The mechanism presented here was chosen for its simplicity and its ability to track human shoulder girdle motion. Motion studies were conducted to collect data representing human shoulder girdle motion, which were used to optimize the mechanism to track human shoulder girdle motion as closely as possible. A second optimization was performed to ensure that the mechanism avoids singularities throughout its entire range of motion. The results show that this design closely replicates human shoulder girdle motion and is well suited for use as a humanoid shoulder girdle to increase the range of motion of a humanoid arm.
This paper describes an algorithm that enables a humanoid robot to perform an impulsive pedipulation task on a spherical object in the environment; that is, to use its feet to exert an impulsive force capable of driving the object to a 3D goal position while achieving certain motion characteristics. This is done by planning a suitable motion for the legs of the humanoid, capable of exerting the required impact conditions on the spherical object while maintaining the dynamic stability of the robot. As an example implementation, we take the free kick in soccer as a case study. Finally, we provide simulation and experimental results that demonstrate the validity of the algorithm.
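As a simplified illustration of planning impact conditions (the paper's actual impulse model is not given in the abstract), drag-free projectile motion relates the required ball launch speed to the goal distance and launch angle:

```python
import math

def kick_speed(distance, launch_angle_deg, g=9.81):
    """Ball speed (m/s) needed for a projectile to land `distance` meters away
    on level ground when launched at the given angle, ignoring air drag.
    From the range equation R = v^2 * sin(2*theta) / g solved for v."""
    theta = math.radians(launch_angle_deg)
    return math.sqrt(g * distance / math.sin(2.0 * theta))
```

The planned leg motion would then be shaped so that the foot's velocity at impact transfers this launch speed to the ball, subject to the robot's balance constraints.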
Rapid path following is an important component of a layered planning framework for improving motion speed. This paper proposes a method of generating a bipedal footstep sequence that follows a designated path and maintains stability in a planar environment. It adopts a walking style with a fixed step frequency and adjusts consecutive strides, eliminating unreasonable stride changes. An omnidirectional vehicle model and the derived inequalities are introduced to theoretically describe the inter-pace constraints. A modified backtracking search is then implemented to solve the resulting constraint satisfaction problem. Both dynamics simulations and real-robot experiments show that a humanoid robot is capable of tracking various paths with rapid paces. Comparison with several alternatives verifies the superiority of this method in terms of speed.
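The backtracking search over inter-pace constraints can be sketched minimally; the stride options and the bound on consecutive stride change below are illustrative assumptions, and the paper's full inequality set is richer:

```python
def plan_strides(total, options, max_change, seq=None):
    """Backtracking search for a stride sequence (e.g. in cm) summing to
    `total`, with consecutive strides differing by at most `max_change`.
    Returns a feasible sequence or None."""
    seq = seq or []
    if abs(total) < 1e-9:
        return seq  # path length covered exactly
    for s in options:
        # prune: stride must fit the remaining distance and the change bound
        if s <= total + 1e-9 and (not seq or abs(s - seq[-1]) <= max_change):
            result = plan_strides(total - s, options, max_change, seq + [s])
            if result is not None:
                return result
    return None  # backtrack
```

Trying the largest stride first biases the search toward fast paces, in the spirit of the rapidity objective above.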
The aim of this paper is to reduce the energy consumption of a humanoid by analyzing electrical power as input to the robot and mechanical power as output. The analysis considers motor dynamics during standing-up and sitting-down tasks. The motion tasks of the humanoid are described in terms of joint position, joint velocity, joint acceleration, joint torque, center of mass (CoM), and center of pressure (CoP). To reduce the complexity of the analysis, the humanoid is modeled as a planar robot with four links and three joints. The humanoid robot learns to reduce the overall motion torque by applying Q-Learning in a simulated model. The resulting motions are evaluated on a physical NAO humanoid robot during standing-up and sitting-down tasks and then contrasted with NAO's pre-programmed task. The stand-up and sit-down motions are analyzed for individual joint current usage, power demand, torque, angular velocity, acceleration, and CoM and CoP locations. The overall result is an improvement in energy efficiency of 25–30% compared to the pre-programmed NAO stand-up and sit-down motion task.
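The Q-Learning component can be sketched in tabular form; the torque-penalty reward and the toy environment used below are assumptions for illustration, not the paper's actual state/action design:

```python
import random
from collections import defaultdict

def q_learning(step, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-Learning. `step(s, a)` returns (next_state, reward, done);
    the reward can penalize joint torque so the learned motion minimizes
    effort. Returns the learned Q-table."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])
            s2, r, done = step(s, a)
            # Q-Learning update toward the bootstrapped target
            best_next = 0.0 if done else max(Q[(s2, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```

In the stand-up setting, a state would encode a discretized posture along the motion and the reward would be the negative torque cost of the chosen actuation, so the greedy policy traces the low-effort trajectory.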