
Publications by Armando Sousa

2020

Reinforcement Learning in Navigation and Cooperative Mapping

Authors
Cruz, JA; Cardoso, HL; Reis, LP; Sousa, A;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)

Abstract
Reinforcement learning is becoming an increasingly relevant area of research, as it allows robotic agents to learn complex tasks from evaluative feedback. One of the most critical challenges in robotics is the simultaneous localization and mapping problem. We built a reinforcement learning environment in which we trained an agent to control a team of two robots with the task of cooperatively mapping a common area. Our training process takes the robots' sensor data as input and outputs a control action for each robot. We verified that our agent performed well in a small test environment with little training, indicating that our approach could be a good starting point for end-to-end reinforcement learning for cooperative mapping.
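The setup described in the abstract can be sketched as a minimal multi-robot coverage environment in the usual reset/step style. Everything below (class name, grid size, action set, reward shaping) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

class CooperativeMappingEnv:
    """Minimal grid-world sketch: a single agent issues a joint action that
    moves two robots, which cooperatively mark cells of a shared map as
    explored. Reward is the number of newly mapped cells per step."""

    MOVES = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}  # per-robot moves

    def __init__(self, size=8):
        self.size = size
        self.reset()

    def reset(self):
        self.explored = np.zeros((self.size, self.size), dtype=bool)
        # Robots start in opposite corners of the common area
        self.robots = [(0, 0), (self.size - 1, self.size - 1)]
        for r in self.robots:
            self.explored[r] = True
        return self._obs()

    def _obs(self):
        # Observation: robot positions plus a copy of the shared coverage map
        return np.array(self.robots).ravel(), self.explored.copy()

    def step(self, actions):
        """actions: one discrete move per robot (the agent's joint action)."""
        reward = 0.0
        for i, a in enumerate(actions):
            dr, dc = self.MOVES[a]
            r, c = self.robots[i]
            r = min(max(r + dr, 0), self.size - 1)  # clamp to the grid
            c = min(max(c + dc, 0), self.size - 1)
            self.robots[i] = (r, c)
            if not self.explored[r, c]:
                self.explored[r, c] = True
                reward += 1.0  # evaluative feedback: a newly mapped cell
        done = bool(self.explored.all())
        return self._obs(), reward, done
```

In the paper the observation would come from the robots' sensors rather than a ground-truth coverage grid; this sketch only illustrates the joint-action, shared-reward structure.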

2020

Controller for Real and Simulated Wheelchair With a Multimodal Interface Using Gazebo and ROS

Authors
Cruz, AB; Sousa, A; Reis, LP;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)

Abstract
The evolution of intelligent wheelchairs, with new systems to control them and help users become more independent, has been remarkable in recent years. Since these systems have a significant impact on the quality of life of people with disabilities, it is crucial that they are suited to the end user and do not put their life at risk. First, this study proposes a 3D motorised wheelchair model with robotic tools to be used in simulation environments, helping the development and validation of new approaches. This model uses Robot Operating System (ROS) tools to ease the addition of sensors and actuators; with ROS nodes, it is easy to add new features and controllers. The Gazebo framework was used to create the simulation environments. Then, following previous work, a wheelchair controller is proposed that receives commands from a multimodal interface and can control a real and a simulated wheelchair at the same time. This work studies new wheelchair models and their respective controllers in a simulated environment and gradually tests them in the real world, obtaining the final model at low cost and minimising engineering effort.
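As a rough illustration of the kind of mapping such a controller performs, the sketch below translates high-level multimodal-interface commands into linear/angular velocity pairs. In a real ROS node these pairs would be published as geometry_msgs/Twist messages to both the Gazebo model and the real wheelchair; the command names and velocity values here are illustrative assumptions:

```python
# Hypothetical command table for a wheelchair controller node.
# Each entry is (linear m/s, angular rad/s); values are illustrative only.
COMMANDS = {
    "forward":  (0.5, 0.0),
    "backward": (-0.3, 0.0),
    "left":     (0.0, 0.6),
    "right":    (0.0, -0.6),
    "stop":     (0.0, 0.0),
}

def to_twist(command):
    """Map a high-level multimodal-interface command to a velocity pair.
    Unknown or malformed commands stop the chair, since safety is the
    overriding concern for the end user."""
    return COMMANDS.get(command, COMMANDS["stop"])
```

Publishing the same velocity pair to the simulated and the real wheelchair is what allows both to be driven by one controller at the same time, as the abstract describes.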

2020

Vineyard trunk detection using deep learning - An experimental device benchmark

Authors
Pinto de Aguiar, ASP; Neves dos Santos, FBN; Feliz dos Santos, LCF; de Jesus Filipe, VMD; Miranda de Sousa, AJM;

Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE

Abstract
Research and development in mobile robotics are continuously growing. The ability of a human-made machine to navigate safely in a given environment is a challenging task. In agricultural environments, robot navigation can reach high levels of complexity due to the harsh conditions these environments present. Thus, the presence of a reliable map in which the robot can localize itself is crucial, and feature extraction becomes a vital step of the navigation process. In this work, the feature extraction issue in the vineyard context is addressed using Deep Learning to detect high-level features - the vine trunks. An experimental performance benchmark between two devices is performed: NVIDIA's Jetson Nano and Google's USB Accelerator. Several models were retrained and deployed on both devices using a Transfer Learning approach. Specifically, MobileNets, Inception, and a lite version of You Only Look Once are used to detect vine trunks in real time. The models were retrained on an in-house dataset, which is publicly available and contains approximately 1600 annotated vine trunks in 336 different images. Results show that NVIDIA's Jetson Nano is compatible with a wider variety of Deep Learning architectures, while Google's USB Accelerator is limited to a single family of architectures for object detection. On the other hand, the Google device achieved higher overall average precision than the Jetson Nano, with better runtime performance. The best result obtained in this work was an average precision of 52.98% with a runtime of 23.14 ms per image, for MobileNet-V2. Recent experiments showed that the detectors are suitable for use in the localization and mapping context.
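Behind the average-precision figures reported above lies the standard step of matching predicted trunk boxes against annotated ones, usually via an intersection-over-union (IoU) test. A minimal sketch (the box format and any threshold choice are generic assumptions, not details from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). A prediction is typically counted as a true
    positive when its IoU with a ground-truth box exceeds a threshold
    such as 0.5."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Average precision then aggregates these per-box matches over all confidence thresholds, which is how the 52.98% figure for MobileNet-V2 would be obtained.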

2020

Localization and Mapping for Robots in Agriculture and Forestry: A Survey

Authors
Aguiar, AS; dos Santos, FN; Cunha, JB; Sobreira, H; Sousa, AJ;

Publication
ROBOTICS

Abstract
Research and development of autonomous mobile robotic solutions that can perform active agricultural tasks (pruning, harvesting, mowing) have been growing. Robots are now used for a variety of tasks such as planting, harvesting, environmental monitoring, and the supply of water and nutrients. To do so, robots need to be able to perform online localization and, if desired, mapping. The most common approach to localization in agricultural applications is based on standalone Global Navigation Satellite System-based systems. However, in many agricultural and forest environments, satellite signals are unavailable or inaccurate, which leads to the need for advanced solutions that do not depend on these signals. Approaches like simultaneous localization and mapping and visual odometry are the most promising solutions for increasing localization reliability and availability. This work leads to the main conclusion that few methods can simultaneously achieve the desired goals of scalability, availability, and accuracy, due to the challenges imposed by these harsh environments. In the near future, novel contributions to this field are expected to help achieve these goals, with the development of more advanced techniques based on 3D localization and on semantic and topological mapping. In this context, this work proposes an analysis of the current state of the art of localization and mapping approaches in agriculture and forest environments. Additionally, an overview of the available datasets for developing and testing these approaches is given. Finally, a critical analysis of this research field is carried out, characterizing the literature using a variety of metrics.

2020

Navigation Stack for Robots Working in Steep Slope Vineyard

Authors
Santos, LC; de Aguiar, ASP; Santos, FN; Valente, A; Ventura, JB; Sousa, AJ;

Publication
Intelligent Systems and Applications - Proceedings of the 2020 Intelligent Systems Conference, IntelliSys 2020, London, UK, September 3-4, 2020, Volume 1

Abstract
Agricultural robotics is nowadays a complex, challenging, and relevant research topic for the sustainability of our society. Some agricultural environments present harsh conditions for robot operability. In the case of steep-slope vineyards, there are several robotic challenges: terrain irregularities, the characteristics of the illumination, and inaccuracy or unavailability of the Global Navigation Satellite System (GNSS). Under these conditions, robot navigation, mapping, and localization become challenging tasks. To perform these tasks safely and accurately, a reliable and advanced navigation stack for robots working in steep-slope vineyards is required. This paper presents the integration of several robotic components for steep-slope robots: path planning aware of the robot's centre of gravity and terrain slope, occupancy grid map extraction from satellite images, and a localization and mapping procedure based on high-level visual features that remains reliable when GNSS signals are blocked or missing.
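A planner that accounts for the robot's centre of gravity and the terrain slope can be sketched, in its simplest form, as an edge-cost function that penalizes slope and forbids traversal beyond a safe tilt limit. The cost model and the 20-degree limit below are illustrative assumptions, not the paper's planner:

```python
import math

def traversal_cost(distance, slope_deg, max_safe_slope_deg=20.0):
    """Illustrative slope-aware edge cost for a grid or graph planner.
    Cost grows linearly with slope and becomes infinite beyond the
    robot's safe tilt limit, which in practice would be derived from
    its centre of gravity."""
    if slope_deg > max_safe_slope_deg:
        return math.inf  # the planner will never pick this edge
    return distance * (1.0 + slope_deg / max_safe_slope_deg)
```

Feeding such costs to a standard search (e.g. A* over an occupancy grid extracted from satellite images) yields paths that stay on traversable terrain, which is the behaviour the abstract describes.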

2021

Particle filter refinement based on clustering procedures for high-dimensional localization and mapping systems

Authors
Aguiar, AS; dos Santos, FN; Sobreira, H; Cunha, JB; Sousa, AJ;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
Developing safe autonomous robotic applications for outdoor agricultural environments is a research field that still presents many challenges. Simultaneous Localization and Mapping can be crucial to enabling the robot to localize itself accurately and, consequently, to perform tasks such as crop monitoring and harvesting autonomously. In these environments, robotic localization and mapping systems usually benefit from the high density of visual features. When using filter-based solutions to localize the robot, such an environment usually requires a high number of particles for accurate performance. These two facts can lead to computationally expensive localization algorithms that are intended to run in real time. This work proposes a refinement step for a standard high-dimensional filter-based localization solution: the novelty is to downsample the filter using an online clustering algorithm and to apply a scan-match procedure to each cluster. This approach allows scan matchers to be used without high computational cost, even in high-dimensional filters. Experiments using real data in an agricultural environment show that this approach improves the Particle Filter's estimate of the robot pose. Additionally, results show that this approach can build a precise 3D reconstruction of agricultural environments using visual scans, i.e., 3D scans with RGB information.
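The downsampling idea - grouping the particle set online so that a scan matcher runs once per cluster rather than once per particle - can be sketched as follows. The greedy radius-based clustering here is an illustrative stand-in for the paper's actual clustering algorithm:

```python
import numpy as np

def cluster_particles(particles, radius=0.5):
    """Greedy online clustering over particle poses: each particle joins
    the first cluster whose running centre lies within `radius`, otherwise
    it seeds a new cluster. The returned centres form a small downsampled
    set on which an expensive scan-match refinement can then be run."""
    centres, members = [], []
    for p in particles:
        for i, c in enumerate(centres):
            if np.linalg.norm(p - c) <= radius:
                members[i].append(p)
                centres[i] = np.mean(members[i], axis=0)  # update centre
                break
        else:
            centres.append(p.copy())  # no nearby cluster: start a new one
            members.append([p])
    return np.array(centres)
```

With a few clusters instead of thousands of particles, running one scan match per cluster keeps the refinement cost roughly constant regardless of the filter's dimension, which is the computational benefit the abstract claims.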
