About

Armando Sousa received his Ph.D. degree in Robotics from the University of Porto, Portugal, in 2004.
He is currently an Assistant Professor at the same university and an integrated researcher at INESC TEC (Institute for Systems and Computer Engineering, Technology and Science).
He has received several international awards in robotic soccer under the RoboCup Federation (mainly in the Small Size League). He also received the Pedagogical Excellence Award of the University of Porto in 2015.
His main research interests include education, robotics, data fusion and vision systems. He has co-authored over 50 international peer-reviewed publications and participated in over 10 international projects in the areas of education and robotics.

Topics of interest
Details

  • Name

    Armando Sousa
  • Position

    Senior Researcher
  • Since

    01 June 2009
Publications

2024

Inspection of Part Placement Within Containers Using Point Cloud Overlap Analysis for an Automotive Production Line

Authors
Costa C.M.; Dias J.; Nascimento R.; Rocha C.; Veiga G.; Sousa A.; Thomas U.; Rocha L.;

Publication
Lecture Notes in Mechanical Engineering

Abstract
Reliable operation of production lines without unscheduled disruptions is of paramount importance for ensuring the proper operation of automated working cells involving robotic systems. This article addresses the issue of preventing disruptions to an automotive production line that can arise from the incorrect placement of aluminum car parts by a human operator in a feeding container with 4 indexing pins for each part. Detecting misplaced parts is critical for avoiding collisions between the containers and a high-pressure washing machine, as well as collisions between the parts and a robotic arm that feeds parts to an air leakage inspection machine. The proposed inspection system relies on a 3D sensor to scan the parts inside a container, estimates the 6 DoF pose of the container, and then analyzes the overlap percentage between each part's reference point cloud and the 3D sensor data. When the overlap percentage is below a given threshold, the part is considered misplaced and the operator is alerted to fix its placement in the container. The deployment of the inspection system on an automotive production line for 22 weeks showed promising results, avoiding 18 hours of disruptions: it detected 407 containers with misplaced parts in 4524 inspections, of which 12 were false negatives, while no false positives were reported. This eliminated disruptions to the production line at the cost of manual reinspection by the operator of the 0.27% of containers flagged as false negatives.
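The overlap check described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the matching radius and acceptance threshold below are hypothetical placeholders, and a production system would use a KD-tree rather than a brute-force nearest-neighbour search.

```python
import math

def overlap_fraction(reference_pts, sensor_pts, radius=0.005):
    """Fraction of reference-model points that have a sensor point within
    `radius` metres (brute force; a real system would use a KD-tree)."""
    if not reference_pts:
        return 0.0
    matched = sum(
        1 for p in reference_pts
        if any(math.dist(p, q) <= radius for q in sensor_pts)
    )
    return matched / len(reference_pts)

def part_is_misplaced(reference_pts, sensor_pts, radius=0.005, threshold=0.8):
    """Flag a part as misplaced when its overlap falls below the threshold,
    so the operator can be alerted to fix its placement."""
    return overlap_fraction(reference_pts, sensor_pts, radius) < threshold
```

In the deployed system, the container's estimated 6 DoF pose would first be used to bring each part's reference cloud into the sensor frame before this comparison.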

2024

Assessing Soil Ripping Depth for Precision Forestry with a Cost-Effective Contactless Sensing System

Authors
da Silva, DQ; Louro, F; dos Santos, FN; Filipe, V; Sousa, AJ; Cunha, M; Carvalho, JL;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Forest soil ripping is a practice that involves revolving the soil in a forest area to prepare it for planting or sowing operations. Advanced sensing systems may help in this kind of forestry operation to ensure ideal ripping depth and intensity, as these are important aspects with the potential to minimise the environmental impact of forest soil ripping. In this work, a cost-effective contactless system - capable of detecting and mapping soil ripping depth in real time - was developed and tested in the laboratory and in a realistic forest scenario. The proposed system integrates two single-point LiDARs and a GNSS sensor. To evaluate the system, ground-truth data was manually collected in the field during the operation of the machine with a ripping implement. The proposed solution was tested in real conditions, and the results showed that the ripping depth was estimated with minimal error. The accuracy and ripping-depth mapping ability of the low-cost sensor justify its use to support improved soil preparation with machines or robots toward a sustainable forest industry.
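A minimal sketch of the depth estimate, under the assumption (not detailed in the abstract) that one single-point LiDAR ranges the undisturbed soil surface while the other ranges the bottom of the furrow, both mounted at the same height: the ripping depth then reduces to a difference of ranges, georeferenced with the GNSS fix.

```python
def ripping_depth(d_ground_m, d_furrow_m):
    """Estimated ripping depth: the furrow return path is longer than the
    undisturbed-ground return by the depth of the rip (both in metres)."""
    return d_furrow_m - d_ground_m

def map_point(gnss_fix, depth_m):
    """Tag a depth estimate with a GNSS fix (lat, lon) for real-time mapping."""
    lat, lon = gnss_fix
    return {"lat": lat, "lon": lon, "depth_m": depth_m}
```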

2024

An Educational Kit for Simulated Robot Learning in ROS 2

Authors
Almeida, F; Leao, G; Sousa, A;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Robot Learning is one of the most important areas in Robotics and its relevance has only been increasing. The Robot Operating System (ROS) has been one of the most used architectures in Robotics, but learning it is not a simple task. Additionally, ROS 1 is reaching its end-of-life and many users have yet to make the transition to ROS 2. Reinforcement Learning (RL) and Robotics are rarely taught together, creating greater demand for tools to teach all these components. This paper aims to develop a learning kit that can be used to teach Robot Learning to students with different levels of expertise in Robotics. The kit works with the Flatland simulator using free, open-source software, namely the OpenAI Gym and Stable-Baselines3 packages, and contains tutorials that introduce the user to the simulation environment as well as how to use RL to train the robot to perform different tasks. User tests were conducted to better understand how the kit performs, showing very positive feedback, with most participants agreeing that the kit provided a productive learning experience.
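To illustrate the Gym-style reset/step interface that such tutorials build on, here is a self-contained toy: a 1-D track stands in for the Flatland simulator and tabular Q-learning stands in for Stable-Baselines3, so the sketch runs without any of those dependencies. None of this code is from the kit itself.

```python
import random

class LineFollowEnv:
    """Minimal Gym-style environment: the agent moves left (0) or right (1)
    on a 1-D track and is rewarded for reaching the goal cell."""

    def __init__(self, size=5):
        self.size = size
        self.goal = size - 1
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # observation

    def step(self, action):
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.goal
        reward = 1.0 if done else -0.01  # small step penalty, goal bonus
        return self.pos, reward, done, {}  # obs, reward, done, info

def train(env, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration (in place of a
    Stable-Baselines3 agent, purely for illustration)."""
    q = {(s, a): 0.0 for s in range(env.size) for a in (0, 1)}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.choice((0, 1)) if random.random() < eps \
                else max((0, 1), key=lambda a: q[(s, a)])
            s2, r, done, _ = env.step(a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q
```

After training, the greedy policy at the start cell should move toward the goal, mirroring the "train the robot to perform different tasks" loop the tutorials walk through.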

2024

Mission Supervisor for Food Factories Robots

Authors
Moreira, T; Santos, FN; Santos, L; Sarmento, J; Terra, F; Sousa, A;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Climate change, limited natural resources, and the increase in the world's population compel society to produce food more sustainably, with lower energy and water consumption. The use of robots in agriculture is one of the most promising ways to change the paradigm of agricultural practices. Agricultural robots should be seen as a way to make jobs easier and lighter, and also as a way for people without agricultural skills to produce their own food. The PixelCropRobot is a low-cost, open-source robot that can monitor and water plants in small gardens. This work proposes a mission supervisor for the PixelCropRobot and for agricultural robots in general, and presents a prototype user interface for this mission supervision. Communication between the mission supervisor and the other components of the system uses ROS 2 and MQTT, with a standardized mission file. The mission supervisor receives a prescription map with information about the respective mission and decomposes it into simple tasks. An A* algorithm then defines the priority of each mission, which depends on factors such as water requirements and distance travelled. This mission supervisor concept was deployed on the PixelCropRobot and validated in real conditions, showing enormous potential to be extended to other agricultural robots.
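One way to picture the prioritisation step is as a cost function over decomposed tasks served from a priority queue. This is a hedged sketch only: the linear cost and its weights are hypothetical, and a simple heap stands in for the A* search described in the abstract.

```python
import heapq

def mission_priority(water_need, distance_m, w_water=1.0, w_dist=0.1):
    """Lower score = higher priority: thirsty plants nearby come first.
    The weights are illustrative, not the paper's actual cost function."""
    return w_dist * distance_m - w_water * water_need

def schedule(tasks):
    """tasks: iterable of (name, water_need, distance_m) tuples.
    Returns task names ordered from highest to lowest priority."""
    heap = [(mission_priority(w, d), name) for name, w, d in tasks]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

For example, a very dry bed close to the robot would be scheduled before a well-watered bed far away.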

2024

YOLO-Based Tree Trunk Types Multispectral Perception: A Two-Genus Study at Stand-Level for Forestry Inventory Management Purposes

Authors
da Silva, DQ; Dos Santos, FN; Filipe, V; Sousa, AJ; Pires, EJS;

Publication
IEEE ACCESS

Abstract
Stand-level forest tree species perception and identification are needed for monitoring-related operations, and are crucial for better biodiversity and inventory management in forested areas. This paper contributes to this knowledge domain by researching multispectral perception of tree trunk types at stand-level. YOLOv5 and YOLOv8 - Convolutional Neural Networks specialized in object detection and segmentation - were trained to detect and segment two tree trunk genera (pine and eucalyptus) using datasets collected in a forest region in Portugal. The datasets comprise only two categories, corresponding to the two tree genera. They were manually annotated for object detection and segmentation with RGB and RGB-NIR images, and are publicly available. The Small variant of YOLOv8 was the best model at the detection and segmentation tasks, achieving an F1 measure above 87% and 62%, respectively. The findings of this study suggest that the use of extended spectra, including Visible and Near-Infrared, produces superior results. The trained models can be integrated into forest tractors and robots to monitor forest genera across different spectra, assisting forest managers in controlling their forest stands.
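The F1 measure reported for the detectors is conventionally computed by matching predicted boxes to ground-truth boxes via Intersection over Union (IoU). The sketch below is a generic illustration of that metric (greedy matching at a 0.5 IoU threshold), not the authors' evaluation code.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def detection_f1(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of predictions to ground truths:
    matched pairs are true positives, leftovers are FP/FN."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```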

Supervised theses

2023

RGBD-Based Automatic Stem Selection for Selective Thinning Operations in Forest Context

Author
Tiago Ferreira Rodrigues

Institution
UP-FEUP

2023

Ethics education in engineering using active methodologies: a case study in Electrical Engineering

Author
Maria de Fátima Coelho Monteiro

Institution
UP-FEUP

2023

Robotic bin picking of flexible entangled tubes

Author
Gonçalo da Mota Laranjeira Torres Leão

Institution
UP-FEUP

2023

AI-Based, Real-Time Object Detection in the Public Landscape

Author
André Vilhena da Costa

Institution
UP-FEUP

2023

Three-dimensional ball tracking using a low-cost vision system for application in indoor sports

Author
José Carlos Lobinho Gomes

Institution
UP-FEUP