Caruso, M.; Regolin, E.; Camerota Verdù, F.J.; Russo, S.A.; Bortolussi, L.; Seriani, S. Robot Navigation in Crowded Environments: A Reinforcement Learning Approach. Machines 2023, 11, 268.
Abstract
For a mobile robot, navigating a densely crowded space can be a challenging and sometimes impossible task, especially with traditional techniques. In this paper, we present a framework to train neural controllers for differential-drive mobile robots that must safely navigate a crowded environment while trying to reach a target location. To learn the robot’s policy, we train a convolutional neural network using two reinforcement learning algorithms, Deep Q-Networks (DQN) and Asynchronous Advantage Actor-Critic (A3C), and develop a training pipeline that allows us to scale the process to several compute nodes. We show that the asynchronous training procedure in A3C can be leveraged to quickly train neural controllers and test them on a real robot in a crowded environment.
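The asynchronous update scheme underlying A3C can be illustrated with a minimal sketch: several worker threads each copy the shared (global) parameters, compute a gradient locally, and apply it back asynchronously. The toy quadratic loss, class names, and learning rate below are illustrative assumptions, not the actual networks or pipeline used in the paper.

```python
import threading
import numpy as np

class SharedParams:
    """Global parameter vector shared by all asynchronous workers."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
        self.lock = threading.Lock()

    def apply_gradient(self, grad, lr):
        # Updates are serialized with a lock so each gradient step is atomic.
        with self.lock:
            self.w -= lr * grad

def worker(shared, target, steps=200, lr=0.05):
    """One asynchronous worker: sync weights, compute a local gradient, push it back."""
    for _ in range(steps):
        w_local = shared.w.copy()          # copy of the global parameters
        grad = 2.0 * (w_local - target)    # gradient of toy loss ||w - target||^2
        shared.apply_gradient(grad, lr)    # asynchronous update of the global params

def train(num_workers=4, dim=3):
    shared = SharedParams(dim)
    target = np.arange(1.0, dim + 1.0)     # stand-in for the "optimal" parameters
    threads = [threading.Thread(target=worker, args=(shared, target))
               for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shared.w, target
```

In a real A3C setup the gradient comes from rollouts of the policy and value networks rather than a fixed quadratic, but the control flow (copy global weights, compute locally, apply asynchronously) is the same.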
Keywords
mobile robotics; neural networks; control systems; reinforcement learning; crowd navigation
Subject
Engineering, Control and Systems Engineering
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.