Publications

Publications by Gonçalo Leão

2019

Using simulation games for traffic model calibration

Authors
Leão, G; Ferreira, J; Amaro, P; Rossetti, RJF;

Publication
17th International Industrial Simulation Conference 2019, ISC 2019

Abstract
Microscopic simulation requires accurate car-following models so that they can properly emulate real-world traffic. In order to define these models, calibration procedures can be used. The main problem with reliable calibration methods is their high cost, either in terms of the time they need to produce a model or due to high resource requirements. In this paper, we examine a method based on virtual driving simulation to calibrate the Krauß car-following model by coupling the Unity 3D game engine with SUMO. In addition, we present a means based on the fundamental diagrams of traffic flow for validating the instances of the model obtained from the calibration. The results show that our method is capable of producing instances with parameters close to those found in the literature. We conclude that this method is a promising, cost-efficient calibration technique for the Krauß model. Further investigation will be required to define a more general approach to calibrate a broader range of car-following models and to improve their accuracy. © 2019 EUROSIS-ETI.
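The calibration target here is the Krauß car-following model. As an illustration of what is being calibrated, the following is a minimal sketch of one common textbook formulation of the Krauß update; the parameter names and default values are illustrative assumptions, not the paper's calibrated values:

```python
import random

def krauss_step(v, v_leader, gap, dt=1.0,
                a_max=2.6, b_max=4.5, v_max=13.9, tau=1.0, eps=0.5):
    """One Krauss car-following update (a common textbook formulation).

    v        : current follower speed (m/s)
    v_leader : leader speed (m/s)
    gap      : bumper-to-bumper gap to the leader (m)
    Returns the follower's speed for the next time step.
    """
    # Safe speed: the fastest speed that still allows stopping behind the
    # leader, given reaction time tau and maximum deceleration b_max.
    v_safe = v_leader + (gap - v_leader * tau) / ((v + v_leader) / (2 * b_max) + tau)
    # Desired speed is limited by acceleration capability and the speed limit.
    v_des = min(v_safe, v + a_max * dt, v_max)
    # Random imperfection: drivers undershoot the desired speed slightly.
    return max(0.0, v_des - eps * random.random())
```

Calibration then amounts to estimating parameters such as `a_max`, `b_max`, `tau`, and `eps` from observed trajectories, here gathered from the Unity-SUMO driving simulator.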

2023

Using Deep Reinforcement Learning for Navigation in Simulated Hallways

Authors
Leao, G; Almeida, F; Trigo, E; Ferreira, H; Sousa, A; Reis, LP;

Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
Reinforcement Learning (RL) is a well-suited paradigm to train robots since it does not require any previous information or database to train an agent. This paper explores using Deep Reinforcement Learning (DRL) to train a robot to navigate in maps containing different sorts of obstacles and which emulate hallways. Training and testing were performed using the Flatland 2D simulator and a Deep Q-Network (DQN) provided by OpenAI gym. Different sets of maps were used for training and testing. The experiments illustrate how well the robot is able to navigate in maps distinct from the ones used for training by learning new behaviours (namely following walls) and highlight the key challenges when solving this task using DRL, including the appropriate definition of the state space and reward function, as well as of the stopping criteria during training.
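The core of the DQN training loop mentioned above reduces to two ingredients: the bootstrapped regression target y = r + γ·max_a′ Q(s′, a′) and an epsilon-greedy exploration policy. A generic, framework-free sketch (not the paper's code):

```python
import random

def dqn_targets(batch, q_next, gamma=0.99):
    """Compute DQN regression targets y = r + gamma * max_a' Q(s', a')
    for a batch of transitions; terminal transitions bootstrap to 0.

    batch  : list of (reward, done) pairs, one per transition
    q_next : list of next-state Q-value lists, one per transition
    """
    targets = []
    for (r, done), q in zip(batch, q_next):
        bootstrap = 0.0 if done else max(q)
        targets.append(r + gamma * bootstrap)
    return targets

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

The state-space and reward-function design challenges the abstract highlights show up precisely in what goes into `batch` and `q_next`.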

2024

An Educational Kit for Simulated Robot Learning in ROS 2

Authors
Almeida, F; Leao, G; Sousa, A;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Robot Learning is one of the most important areas in Robotics and its relevance has only been increasing. The Robot Operating System (ROS) has been one of the most used architectures in Robotics but learning it is not a simple task. Additionally, ROS 1 is reaching its end-of-life and a lot of users are yet to make the transition to ROS 2. Reinforcement Learning (RL) and Robotics are rarely taught together, creating greater demand for tools to teach all these components. This paper aims to develop a learning kit that can be used to teach Robot Learning to students with different levels of expertise in Robotics. This kit works with the Flatland simulator using open-source free software, namely the OpenAI Gym and Stable-Baselines3 packages, and contains tutorials that introduce the user to the simulation environment as well as how to use RL to train the robot to perform different tasks. User tests were conducted to better understand how the kit performs, showing very positive feedback, with most participants agreeing that the kit provided a productive learning experience.
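Kits of this kind typically expose the simulator through the Gym-style `reset`/`step` environment contract that packages such as Stable-Baselines3 consume. A minimal illustration of that contract with a toy one-dimensional hallway; the class and reward values are hypothetical, not the kit's actual API:

```python
class HallwayEnv:
    """Toy 1-D environment following the classic Gym reset/step contract:
    the robot moves along a hallway of `length` cells and is rewarded
    for reaching the far end."""

    def __init__(self, length=10):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action: 0 = stay, 1 = move forward one cell
        if action == 1:
            self.pos = min(self.pos + 1, self.length - 1)
        done = self.pos == self.length - 1
        reward = 1.0 if done else -0.01  # small step penalty encourages speed
        return self.pos, reward, done, {}  # observation, reward, done, info

# Any agent (e.g. one trained with Stable-Baselines3) interacts like this:
env = HallwayEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done, _ = env.step(1)  # a trivial always-forward policy
```

Swapping the toy dynamics for Flatland's simulated robot, while keeping this interface, is what lets the same RL code train different tasks.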

2024

Multi-Agent Reinforcement Learning for Side-by-Side Navigation of Autonomous Wheelchairs

Authors
Fonseca, T; Leao, G; Ferreira, LL; Sousa, A; Severino, R; Reis, LP;

Publication
2024 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
This paper explores the use of Robotics and decentralized Multi-Agent Reinforcement Learning (MARL) for side-by-side navigation in Intelligent Wheelchairs (IW). Evolving from a previous approach using traditional single-agent methodologies, it adopts a Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm to provide control input and enable a pair of IWs to be deployed as decentralized computing agents in real-world environments, removing the need for communication between them. In this study, the Flatland 2D simulator, in conjunction with the Robot Operating System (ROS), is used as a realistic environment to train and test the navigation algorithm. An overhaul of the reward function is introduced, which now provides individual rewards for each agent and revised reward incentives. Additionally, the logic for identifying side-by-side navigation was improved to encourage dynamic alignment control. The preliminary results outline a promising research direction, with the IWs learning to navigate in various realistic hallway test scenarios. The outcome also suggests that while the MADDPG approach holds potential over single-agent techniques for the decentralized IW robotics application, further investigation is needed for real-world deployment.
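The per-agent reward shaping described, rewarding progress while keeping the pair abreast, can be sketched in a few lines; the weights, target gap, and function name below are illustrative assumptions, not the paper's actual reward function:

```python
import math

def side_by_side_reward(p1, p2, goal, target_gap=1.0,
                        w_goal=1.0, w_align=0.5):
    """Illustrative per-agent reward for agent 1: progress toward the goal
    plus a penalty for deviating from the desired lateral gap to agent 2.

    p1, p2, goal : (x, y) positions of the two wheelchairs and the goal.
    """
    dist_goal = math.dist(p1, goal)
    gap_error = abs(math.dist(p1, p2) - target_gap)
    # Both shaping terms are penalties: being closer to the goal and
    # closer to the target spacing each increase the reward.
    return -w_goal * dist_goal - w_align * gap_error
```

Giving each agent its own copy of such a reward, rather than one shared team reward, is what makes the setup decentralized at execution time.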

2024

Hierarchical Reinforcement Learning and Evolution Strategies for Cooperative Robotic Soccer

Authors
Santos, B; Cardoso, A; Leao, G; Reis, LP; Sousa, A;

Publication
2024 7TH IBERIAN ROBOTICS CONFERENCE, ROBOT 2024

Abstract
Artificial Intelligence (AI) and Machine Learning are frequently used to develop player skills in robotic soccer scenarios. Despite the potential of deep reinforcement learning, its computational demands pose challenges when learning complex behaviors. This work explores less demanding methods, namely Evolution Strategies (ES) and Hierarchical Reinforcement Learning (HRL), for enhancing coordination and cooperation between two agents from the FC Portugal 3D Simulation Soccer Team, in RoboCup. The goal is for two robots to learn a high-level skill that enables a robot to pass the ball to its teammate as quickly as possible. Results show that the trained models under-performed in a traditional robotic soccer two-agent task and scored perfectly in a much simpler one. Therefore, this work highlights that while these alternative methods can learn trivial cooperative behavior, more complex tasks are difficult to learn.
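The appeal of Evolution Strategies as the less compute-hungry alternative named above is their simplicity: no gradients, just perturb-and-select. A minimal (1 + λ) ES sketch in generic Python (the structure of the method, not the team's implementation):

```python
import random

def evolve(fitness, dim, generations=50, pop=20, sigma=0.1, seed=0):
    """Minimal (1 + lambda) evolution strategy: keep the best parameter
    vector found so far and sample Gaussian perturbations around it,
    replacing it whenever a child scores higher."""
    rng = random.Random(seed)
    best = [0.0] * dim
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(pop):
            child = [x + rng.gauss(0.0, sigma) for x in best]
            f = fitness(child)
            if f > best_fit:
                best, best_fit = child, f
    return best, best_fit
```

In the soccer setting, `fitness` would be an episode-level score (e.g. time to complete a pass), which is exactly where complex cooperative tasks become hard: the signal is sparse and noisy.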

2024

Using Deep Learning for 2D Primitive Perception with a Noisy Robotic LiDAR

Authors
Brito, A; Sousa, P; Couto, A; Leao, G; Reis, LP; Sousa, A;

Publication
2024 7TH IBERIAN ROBOTICS CONFERENCE, ROBOT 2024

Abstract
Effective navigation in mobile robotics relies on precise environmental mapping, including the detection of complex objects as geometric primitives. This work introduces a deep learning model that determines the pose, type, and dimensions of 2D primitives using a mobile robot equipped with a noisy LiDAR sensor. Simulated experiments conducted in Webots involved randomly placed primitives, with the robot capturing point clouds which were used to progressively build a map of the environment. Two mapping techniques were considered, a deterministic and probabilistic (Bayesian) mapping, and different levels of noise for the LiDAR were compared. The maps were used as input to a YOLOv5 network that detected the position and type of the primitives. A cropped image of each primitive was then fed to a Convolutional Neural Network (CNN) that determined the dimensions and orientation of a given primitive. Results show that the primitive classification achieved an accuracy of 95% in low noise, dropping to 85% under higher noise conditions, while the prediction of the shapes' dimensions had error rates from 5% to 12%, as the noise increased. The probabilistic mapping approach improved accuracy by 10-15% compared to deterministic methods, showcasing robustness to noise levels up to 0.1. Therefore, these findings highlight the effectiveness of probabilistic mapping in enhancing detection accuracy for mobile robot perception in noisy environments.
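The probabilistic (Bayesian) mapping that proved more robust to noise is, in its standard occupancy-grid form, a per-cell log-odds update: each reading adds its log-odds to the cell's running estimate, so repeated noisy observations accumulate evidence. A minimal sketch of that standard update (not the paper's specific implementation):

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def bayes_update(cell_logodds, p_meas):
    """Fuse one sensor reading into a cell's occupancy estimate using the
    standard log-odds Bayes update for occupancy grids."""
    return cell_logodds + logodds(p_meas)

def to_prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Three noisy 'occupied' readings (each only 70% reliable) push a cell
# from the 0.5 prior firmly toward occupied.
l = 0.0  # log-odds 0 == prior probability 0.5
for _ in range(3):
    l = bayes_update(l, 0.7)
```

This evidence accumulation is what lets the probabilistic map outperform a deterministic one under LiDAR noise: single spurious returns are diluted rather than committed to the map.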
