2024
Authors
Martins, JJ; Amaral, A; Dias, A;
Publication
2024 7TH IBERIAN ROBOTICS CONFERENCE, ROBOT 2024
Abstract
Unmanned Aerial Vehicle (UAV) applications, particularly for indoor tasks such as inventory management, infrastructure inspection, and emergency response, are becoming increasingly complex due to dynamic environments and their varied elements. During operation, the vehicle's response depends on a series of decisions regarding its surroundings and the task goal. Reinforcement Learning techniques can solve this decision problem, helping build more reactive, adaptive, and efficient navigation operations. This paper presents a framework to simulate the navigation of a UAV in an operational environment, training and testing it with reinforcement learning models for subsequent deployment on the real drone. With the support of the 3D simulator Gazebo and the Robot Operating System (ROS) framework, we developed a training environment that can be kept simple and fast or made more complex and dynamic, explicitly modeling the real-world scenario. The multi-environment simulation runs in parallel with the Deep Reinforcement Learning (DRL) algorithm to provide feedback for training. TD3, DDPG, PPO, and PPO+LSTM agents were trained to validate the framework's training, testing, and deployment in an indoor scenario.
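Among the algorithms listed, TD3 is distinguished by its clipped double-Q target, which takes the minimum of two target critics to curb value overestimation. As a minimal NumPy sketch (not the paper's implementation; reward, critic, and done values below are illustrative), the target computation looks like:

```python
import numpy as np

def td3_target(rewards, q1_next, q2_next, dones, gamma=0.99):
    """Clipped double-Q target used by TD3: take the element-wise
    minimum of the two target critics to reduce overestimation bias,
    and zero out the bootstrap term at terminal states."""
    min_q = np.minimum(q1_next, q2_next)
    return rewards + gamma * (1.0 - dones) * min_q

# Toy batch of two transitions; the second one is terminal.
r = np.array([1.0, 0.5])
q1 = np.array([10.0, 4.0])
q2 = np.array([9.0, 5.0])
d = np.array([0.0, 1.0])
print(td3_target(r, q1, q2, d))  # -> [9.91 0.5]
```

In a full agent this target would supervise both critics via a mean-squared-error loss, with the actor updated less frequently (delayed policy updates), as in the standard TD3 recipe.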