World Scientific

Path Planning for Mobile Robots Using Transfer Reinforcement Learning

    https://doi.org/10.1142/S0218213024400050

    Path planning enables a mobile robot to perceive its environment through information obtained from sensors and to plan a route that reaches the target. As tasks grow more difficult, the environments that mobile robots face become increasingly complex, and traditional path planning methods can no longer meet the requirements of mobile robot navigation in such environments. Deep reinforcement learning (DRL) has therefore been introduced into robot navigation. However, training a DRL model can be time-consuming when the environment is very complex, and the existing environment may differ from the unknown environment. To handle robot navigation in heterogeneous environments, this paper applies deep transfer reinforcement learning (DTRL) to mobile robot path planning. Unlike DRL, DTRL does not require the distribution of the existing environment to match that of the unknown environment. Additionally, DTRL can transfer the knowledge of an existing model to a new scenario to reduce training time. Simulations show that DTRL achieves a higher success rate than DRL for robot navigation in heterogeneous environments. By using a local policy, DTRL takes less time to train than DRL in a complex environment and also yields shorter navigation time.
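The core mechanism the abstract describes — reusing knowledge learned in one environment to warm-start learning in a different, larger one — can be illustrated with a minimal sketch. This is not the paper's DTRL algorithm (which uses deep networks and a local policy); it is a hypothetical tabular analogue in which a Q-table trained on a short corridor task initializes the agent for a longer corridor, so the overlapping states need less exploration:

```python
import random

def q_learning(n_states, q=None, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: start at state 0, goal at n_states-1.
    Actions: 0 = left, 1 = right. Passing a prior Q-table via `q` transfers
    knowledge from a previously learned task (transfer-style initialization)."""
    rng = random.Random(seed)
    if q is None:
        q = {}
    for s in range(n_states):
        q.setdefault(s, [0.0, 0.0])  # unseen states start with zero value
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else -0.01  # goal reward, small step cost
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def greedy_steps(q, n_states, limit=100):
    """Steps the greedy policy needs to reach the goal (capped at `limit`)."""
    s, steps = 0, 0
    while s != n_states - 1 and steps < limit:
        a = max((0, 1), key=lambda x: q[s][x])
        s = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        steps += 1
    return steps

# Source task: short corridor of 5 states.
q_src = q_learning(5)
# Target task: longer corridor of 8 states, initialized with the source Q-table
# so knowledge about the shared states transfers instead of being relearned.
q_tgt = q_learning(8, q={s: v[:] for s, v in q_src.items()})
print(greedy_steps(q_tgt, 8))  # optimal policy moves right every step: 7
```

The transfer step is just the copied dictionary: the target agent starts from the source task's value estimates rather than zeros, mirroring (in a much simpler setting) how DTRL initializes a model for an unknown environment from one trained on an existing environment.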