
Publications by Armando Sousa

2020

Development of an AlphaBot2 Simulator for RPi Camera and Infrared Sensors

Authors
Rafael, A; Santos, C; Duque, D; Fernandes, S; Sousa, A; Reis, LP;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
In recent years, robots have been used as tools for teaching, motivating the development of fully virtual environments for combined real/simulated robotics teaching. The AlphaBot2 Raspberry Pi (RPi), a robot used for education, currently has no available simulator. A Gazebo simulator was produced and a ROS framework was implemented for hardware abstraction and control of low-level modules, facilitating students' control of the robot's physical behaviours on the real and simulated robot simultaneously. To demonstrate basic model operation, an algorithm for detecting obstacles and lines with the IR sensors was implemented; however, some discrepancies were detected in a timed line-track test, justifying further work on modelling and performance assessment. Despite that, the implemented ROS structure was verified to be functional for motion control, through the input sensors and camera, in both the simulation and the real AlphaBot2.
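As a rough illustration of the hardware-abstraction idea described in this abstract, the sketch below shows a minimal ROS node that could sit between an IR obstacle topic and the robot's velocity command, working identically against the real or the simulated AlphaBot2. The topic names and message types are assumptions for illustration, not the paper's actual interface.

#!/usr/bin/env python3
# Minimal sketch of a hardware-abstraction node: the same ROS interface
# drives either the real or the simulated AlphaBot2. Topic names below
# are hypothetical, not taken from the paper.
import rospy
from std_msgs.msg import Bool
from geometry_msgs.msg import Twist

class ObstacleStop:
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/alphabot2/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/alphabot2/ir_obstacle", Bool, self.on_ir)

    def on_ir(self, msg):
        cmd = Twist()
        # stop when the IR sensor reports an obstacle, otherwise drive forward
        cmd.linear.x = 0.0 if msg.data else 0.15
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("alphabot2_obstacle_stop")
    ObstacleStop()
    rospy.spin()

Because the node only depends on topics, swapping the Gazebo model for the physical robot requires no code changes, which is the point of the abstraction layer.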

2020

Perception of Entangled Tubes for Automated Bin Picking

Authors
Leao, G; Costa, CM; Sousa, A; Veiga, G;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Bin picking is a challenging problem common to many industries, whose automation will lead to great economic benefits. This paper presents a method for estimating the pose of a set of randomly arranged bent tubes, highly subject to occlusions and entanglement. The approach involves using a depth sensor to obtain a point cloud of the bin. The algorithm begins by filtering the point cloud to remove noise and segmenting it using the surface normals. Tube sections are then modeled as cylinders that are fitted into each segment using RANSAC. Finally, the sections are combined into complete tubes by adopting a greedy heuristic based on the distance between their endpoints. Experimental results with a dataset created with a Zivid sensor show that this method is able to provide estimates with high accuracy for bins with up to ten tubes. Therefore, this solution has the potential of being integrated into fully automated bin picking systems.
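To make the final step concrete, here is a minimal sketch of a greedy endpoint-matching heuristic of the kind the abstract describes: RANSAC-fitted cylinder sections are merged into complete tubes by repeatedly joining the closest pair of free endpoints. The data layout and distance threshold are assumptions, not the authors' implementation.

import numpy as np

def greedy_join(sections, max_gap=0.02):
    """Chain cylinder sections into tubes by repeatedly merging the two
    partial tubes whose free endpoints lie closest together, while the
    gap stays below max_gap (metres). `sections` is a list of
    (start, end) 3D endpoint pairs, one per fitted cylinder."""
    tubes = [{"ids": [i], "ends": [np.asarray(a, float), np.asarray(b, float)]}
             for i, (a, b) in enumerate(sections)]
    while len(tubes) > 1:
        best_d, best = max_gap, None
        for i in range(len(tubes)):
            for j in range(i + 1, len(tubes)):
                for ei in (0, 1):
                    for ej in (0, 1):
                        d = np.linalg.norm(tubes[i]["ends"][ei] - tubes[j]["ends"][ej])
                        if d < best_d:
                            best_d, best = d, (i, j, ei, ej)
        if best is None:  # no endpoints close enough to join
            break
        i, j, ei, ej = best
        a, b = tubes[i], tubes.pop(j)
        a["ids"] += b["ids"]
        # the two joined endpoints vanish; the remaining outer ends survive
        a["ends"] = [a["ends"][1 - ei], b["ends"][1 - ej]]
    return [t["ids"] for t in tubes]  # one list of section indices per tube

Being greedy, the heuristic can mis-join sections when two tube ends happen to lie close together, which is one reason occlusion handling matters in the full pipeline.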

2019

FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware

Authors
Aguiar, A; Santos, F; Sousa, AJ; Santos, L;

Publication
APPLIED SCIENCES-BASEL

Abstract
The main task when developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor environments, namely agricultural ones, this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphics processing unit (GPU). This solution, named FAST-FUSION, offers the scientific community three core contributions. The first contribution is an extension of a state-of-the-art monocular visual odometry system (Libviso2) to work with omnidirectional cameras and a single-axis gyro, increasing system accuracy. The second contribution is an algorithm that uses low-cost LIDAR data to estimate the motion scale, addressing the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses the Raspberry Pi GPU to improve visual odometry runtime performance on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in both time performance and motion estimation accuracy.
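A toy sketch of the two fusion ideas mentioned above, under assumed formulas that are not taken from the paper: the gyro supplies the heading increment (usually less drift-prone than the visual estimate), and the metric displacement observed by the planar LIDAR fixes the unknown scale of the monocular translation.

import numpy as np

def fuse_step(pose, t_vo, dyaw_gyro, lidar_disp):
    """One fused 2D odometry step (illustrative only).
    pose: current (x, y, yaw); t_vo: up-to-scale 2D translation from
    monocular visual odometry; dyaw_gyro: yaw increment from the gyro;
    lidar_disp: metric 2D displacement estimated from the planar LIDAR."""
    t_vo = np.asarray(t_vo, float)
    # recover the metric scale from the LIDAR displacement magnitude
    scale = np.linalg.norm(lidar_disp) / max(np.linalg.norm(t_vo), 1e-9)
    x, y, yaw = pose
    yaw += dyaw_gyro                  # trust the gyro for heading
    c, s = np.cos(yaw), np.sin(yaw)
    dx, dy = scale * t_vo             # metric translation in the body frame
    return (x + c * dx - s * dy, y + s * dx + c * dy, yaw)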

2020

Teaching Mobile Robotics Using the Autonomous Driving Simulator of the Portuguese Robotics Open

Authors
Costa, V; Cebola, P; Tavares, P; Morais, V; Sousa, A;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Teaching mobile robotics adequately is a complex task. Among the strategies found in the literature, the one used in this work relies on a simulator. This simulator reproduces the Autonomous Driving Competition of the Portuguese Robotics Open. Currently, the simulator supports two different robots and all challenges of the autonomous driving competition. It was used in a Robotics course of the Integrated Master's Degree in Informatics and Computing Engineering at the Faculty of Engineering of the University of Porto. To study the influence of the simulator on the students' learning process, a survey was conducted. The results and their corresponding analysis indicate that the simulator is well suited to teaching some mobile robotics challenges spanning several fields of study, including image processing, computer vision, and control.

2020

Detecting and Solving Tube Entanglement in Bin Picking Operations

Authors
Leao, G; Costa, CM; Sousa, A; Veiga, G;

Publication
APPLIED SCIENCES-BASEL

Abstract
Featured Application: The robotic bin picking solution presented in this work serves as a stepping stone towards the development of cost-effective, scalable systems for handling entangled objects. This study and its experiments focused on tube-shaped objects, which have a widespread presence in industry.

Manufacturing and production industries are increasingly turning to robots to carry out repetitive picking operations efficiently. This paper focuses on tackling the novel challenge of automating the bin picking process for entangled objects, on which there is very little research. The chosen case study is sets of freely curved tubes, which are prone to occlusions and entanglement. The proposed algorithm builds a representation of the tubes as an ordered list of cylinders and joints using a point cloud acquired by a 3D scanner. This representation enables the detection of occlusions in the tubes. The solution also performs grasp planning and motion planning, evaluating post-grasp trajectories via simulation using Gazebo and the ODE physics engine. A force/torque sensor is used to determine how many items were picked by the robot gripper and in which direction it should rotate to solve cases of entanglement. Real-life experiments with sets of PVC tubes and rubber radiator hoses showed that the robot was able to pick a single tube on the first try with success rates of 99% and 93%, respectively. This study indicates that using simulation for motion planning is a promising solution for dealing with entangled objects.
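One plausible reading of the force/torque check mentioned above is a weight-based count, sketched below with invented numbers rather than the paper's calibration: the vertical force after lifting, divided by one tube's weight, suggests how many tubes hang from the gripper.

TUBE_MASS_KG = 0.12   # hypothetical mass of a single tube
G = 9.81              # gravitational acceleration (m/s^2)

def picked_count(lift_force_n, tolerance=0.25):
    """Estimate how many tubes hang from the gripper after lifting.
    Returns None when the reading sits ambiguously between counts,
    in which case a disentangling rotation or re-pick may be needed."""
    n = lift_force_n / (TUBE_MASS_KG * G)
    return round(n) if abs(n - round(n)) < tolerance else None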

2020

Visual Trunk Detection Using Transfer Learning and a Deep Learning-Based Coprocessor

Authors
Aguiar, AS; Dos Santos, FN; Miranda De Sousa, AJM; Oliveira, PM; Santos, LC;

Publication
IEEE ACCESS

Abstract
Agricultural robotics is nowadays a complex, challenging, and exciting research topic. Some agricultural environments present harsh conditions for robot operability. In the case of steep slope vineyards, there are several challenges: terrain irregularities, illumination characteristics, and inaccuracy/unavailability of signals emitted by the Global Navigation Satellite System (GNSS). Under these conditions, robot navigation becomes a challenging task. To perform these tasks safely and accurately, extracting reliable features or landmarks from the surrounding environment is crucial. This work intends to solve this issue by performing accurate, cheap, and fast landmark extraction in a steep slope vineyard context. To do so, we used a single camera and an Edge Tensor Processing Unit (TPU) provided by Google's USB Accelerator, a small, high-performance, low-power unit suitable for image classification, object detection, and semantic segmentation. The proposed approach performs object detection using Deep Learning (DL)-based Neural Network (NN) models on this device to detect vine trunks. To train the models, Transfer Learning (TL) is applied to several pre-trained versions of MobileNet V1 and MobileNet V2. A benchmark between the two models and the different pre-trained versions is performed. The models are trained on an in-house built, publicly available dataset containing 336 different images with approximately 1,600 annotated vine trunks. Two vineyards are considered, one using camera images with the conventional infrared filter and the other with an infrablue filter. Results show that this configuration allows fast vine trunk detection, with MobileNet V2 being the most accurate retrained detector, achieving an overall Average Precision of 52.98%. We briefly compare the proposed approach with the state-of-the-art Tiny YOLO-V3 running on a Jetson TX2, showing that the system adopted in this work outperforms it. Additionally, it is shown that the proposed detectors are suitable for Localization and Mapping problems.
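For readers unfamiliar with the transfer-learning step, the sketch below shows the general frozen-backbone idea using Keras and ImageNet weights. It is deliberately simplified: the paper retrains full MobileNet detection models for the Edge TPU, whereas this toy example merely reuses MobileNetV2 features for a hypothetical binary trunk/no-trunk classifier.

import tensorflow as tf

# Reuse ImageNet-pre-trained MobileNetV2 as a frozen feature extractor;
# only the small classification head on top is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained backbone frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # trunk present?
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

Freezing the backbone is what lets a dataset of only a few hundred images, like the one described above, train a usable model without overfitting.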
