
Publications by CRIIS

2020

Teaching Mobile Robotics Using the Autonomous Driving Simulator of the Portuguese Robotics Open

Authors
Costa, V; Cebola, P; Tavares, P; Morais, V; Sousa, A;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Teaching mobile robotics adequately is a complex task. Among the strategies found in the literature, the one used in this work includes the use of a simulator. This simulator represents the Autonomous Driving Competition of the Portuguese Robotics Open. Currently, the simulator supports two different robots and all challenges of the autonomous driving competition. This simulator was used in a Robotics course of the Integrated Master's Degree in Informatics and Computing Engineering at the Faculty of Engineering of the University of Porto. In order to study the influence of the simulator on the students' learning process, a survey was conducted. The results and their corresponding analysis indicate that the simulator is suited to teach some of the mobile robotics challenges crossing several fields of study, including image processing, computer vision and control.

2020

Detecting and Solving Tube Entanglement in Bin Picking Operations

Authors
Leao, G; Costa, CM; Sousa, A; Veiga, G;

Publication
APPLIED SCIENCES-BASEL

Abstract
Featured Application: The robotic bin picking solution presented in this work serves as a stepping stone towards the development of cost-effective, scalable systems for handling entangled objects. This study and its experiments focused on tube-shaped objects, which have a widespread presence in industry.
Manufacturing and production industries are increasingly turning to robots to carry out repetitive picking operations in an efficient manner. This paper focuses on tackling the novel challenge of automating the bin picking process for entangled objects, for which there is very little research. The chosen case study consists of sets of freely curved tubes, which are prone to occlusions and entanglement. The proposed algorithm builds a representation of the tubes as an ordered list of cylinders and joints using a point cloud acquired by a 3D scanner. This representation enables the detection of occlusions in the tubes. The solution also performs grasp planning and motion planning, evaluating post-grasp trajectories via simulation using Gazebo and the ODE physics engine. A force/torque sensor is used to determine how many items were picked by the robot gripper and in which direction it should rotate to solve cases of entanglement. Real-life experiments with sets of PVC tubes and rubber radiator hoses showed that the robot was able to pick a single tube on the first try with success rates of 99% and 93%, respectively. This study indicates that using simulation for motion planning is a promising solution to deal with entangled objects.
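As an illustration of the tube representation described in this abstract (not code from the paper), the following minimal Python sketch models a tube as an ordered list of cylinders whose shared endpoints are the joints; a gap between consecutive segments hints at an occlusion. All class names, fields, and the gap tolerance are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Cylinder:
    start: np.ndarray  # 3D start point of the cylinder axis
    end: np.ndarray    # 3D end point of the cylinder axis
    radius: float

@dataclass
class Tube:
    # Ordered list of cylinders; consecutive segments meet at a joint.
    segments: List[Cylinder]

    def joints(self) -> List[np.ndarray]:
        # A joint is the shared endpoint between consecutive segments.
        return [seg.end for seg in self.segments[:-1]]

    def has_occlusion(self, tol: float = 1e-3) -> bool:
        # A spatial gap between consecutive segments suggests that part
        # of the tube is hidden from the 3D scanner (illustrative test).
        return any(
            np.linalg.norm(a.end - b.start) > tol
            for a, b in zip(self.segments, self.segments[1:])
        )
```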

2020

Visual Trunk Detection Using Transfer Learning and a Deep Learning-Based Coprocessor

Authors
Aguiar, AS; Dos Santos, FN; Miranda De Sousa, AJM; Oliveira, PM; Santos, LC;

Publication
IEEE ACCESS

Abstract
Agricultural robotics is nowadays a complex, challenging, and exciting research topic. Some agricultural environments present harsh conditions to robotics operability. In the case of steep slope vineyards, there are several challenges: terrain irregularities, characteristics of illumination, and inaccuracy/unavailability of signals emitted by the Global Navigation Satellite System (GNSS). Under these conditions, robot navigation becomes a challenging task. To perform these tasks safely and accurately, the extraction of reliable features or landmarks from the surrounding environment is crucial. This work intends to solve this issue, performing accurate, cheap, and fast landmark extraction in the steep slope vineyard context. To do so, we used a single camera and an Edge Tensor Processing Unit (TPU) provided by Google's USB Accelerator, a small, high-performance, and low-power unit suitable for image classification, object detection, and semantic segmentation. The proposed approach performs object detection using Deep Learning (DL)-based Neural Network (NN) models on this device to detect vine trunks. To train the models, Transfer Learning (TL) is used on several pre-trained versions of MobileNet V1 and MobileNet V2. A benchmark between the two models and the different pre-trained versions is performed. The models are retrained on a publicly available in-house dataset containing 336 different images with approximately 1,600 annotated vine trunks. Two vineyards are considered, one using camera images with the conventional infrared filter and the other with an infrablue filter. Results show that this configuration allows fast vine trunk detection, with MobileNet V2 being the most accurate retrained detector, achieving an overall Average Precision of 52.98%. We briefly compare the proposed approach with the state-of-the-art Tiny YOLO-V3 running on a Jetson TX2, showing that the system adopted in this work outperforms it. Additionally, it is also shown that the proposed detectors are suitable for the Localization and Mapping problems.
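For illustration only, the sketch below shows how an SSD-style MobileNet detector compiled for the Edge TPU can be run from Python with the tflite_runtime package, which is one common way to drive Google's USB Accelerator. The model file name, test image, output ordering, and confidence threshold are placeholders and assumptions, not artifacts of the paper.

```python
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load a detector compiled for the Edge TPU (model path is a placeholder).
interpreter = Interpreter(
    model_path="trunk_detector_mobilenet_v2_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]
_, height, width, _ = input_detail["shape"]

# Resize the camera frame to the detector's input resolution.
image = Image.open("vineyard_frame.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(
    input_detail["index"],
    np.expand_dims(np.asarray(image, dtype=np.uint8), 0),
)
interpreter.invoke()

# SSD-style TFLite detectors typically output boxes, classes, scores, count.
boxes, classes, scores, count = (
    interpreter.get_tensor(d["index"]) for d in interpreter.get_output_details()
)
for box, score in zip(boxes[0], scores[0]):
    if score > 0.5:  # arbitrary confidence threshold for this sketch
        print("trunk at (normalized ymin, xmin, ymax, xmax):", box)
```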

2020

Reinforcement Learning in Navigation and Cooperative Mapping

Authors
Cruz, JA; Cardoso, HL; Reis, LP; Sousa, A;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)

Abstract
Reinforcement learning is becoming an increasingly relevant area of research, as it allows robotic agents to learn complex tasks from evaluative feedback. One of the most critical challenges in robotics is the simultaneous localization and mapping problem. We have built a reinforcement learning environment where we trained an agent to control a team of two robots, with the task of cooperatively mapping a common area. Our training process takes the robots' sensor data as input and outputs the control action for each robot. We verified that our agent performed well in a small test environment, with little training, indicating that our approach could be a good starting point for end-to-end reinforcement learning for cooperative mapping.
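As a hedged sketch of what such a training environment might look like (the paper's actual code and setup are not reproduced here), the minimal Python class below exposes a gym-style reset/step interface in which one agent issues a discrete move to each of two robots and the team is rewarded for newly explored cells. Grid size, action set, observation, and reward are all illustrative assumptions.

```python
import numpy as np

class CooperativeMappingEnv:
    """Minimal sketch of a two-robot cooperative mapping environment.
    Grid size, sensor model, and reward are illustrative choices."""

    def __init__(self, size: int = 16):
        self.size = size
        self.reset()

    def reset(self):
        self.explored = np.zeros((self.size, self.size), dtype=bool)
        self.positions = [np.array([0, 0]),
                          np.array([self.size - 1, self.size - 1])]
        return self._observation()

    def step(self, actions):
        # One discrete move per robot: 0=up, 1=down, 2=left, 3=right.
        moves = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}
        newly_explored = 0
        for i, a in enumerate(actions):
            self.positions[i] = np.clip(
                self.positions[i] + moves[a], 0, self.size - 1)
            r, c = self.positions[i]
            if not self.explored[r, c]:
                self.explored[r, c] = True
                newly_explored += 1
        reward = newly_explored          # shared team reward
        done = self.explored.all()       # episode ends when map is complete
        return self._observation(), reward, done

    def _observation(self):
        # Joint observation: both robot positions plus the explored-cell map.
        return np.concatenate(list(self.positions) + [self.explored.ravel()])
```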

2020

Controller for Real and Simulated Wheelchair With a Multimodal Interface Using Gazebo and ROS

Authors
Cruz, AB; Sousa, A; Reis, LP;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)

Abstract
The evolution of intelligent wheelchairs, with new systems to control them and help the user be more independent, has been remarkable in recent years. Since these systems have a significant impact on the quality of life of people with disabilities, it is crucial that they are suited to the end user and do not put their life at risk. First, this study proposes a 3D motorised wheelchair model with robotic tools to be used in simulation environments, helping the development and validation of new approaches. This model uses Robot Operating System (ROS) tools to ease the addition of sensors and actuators, and with ROS nodes it is easy to add new features and controllers. The Gazebo framework was used to create the simulation environments. Then, following previous work, a wheelchair controller is proposed that receives commands from a multimodal interface and can control a real and a simulated wheelchair at the same time. This approach allows new wheelchair models and their respective controllers to be studied in a simulated environment and gradually tested in the real world, obtaining the final model at low cost and minimising engineering effort.
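Purely as an illustration of the ROS pattern described in this abstract, the sketch below shows a node that subscribes to a single velocity-command topic (standing in for the multimodal interface) and republishes each command to both a real and a Gazebo-simulated wheelchair. All topic names are assumptions, not the paper's actual interfaces.

```python
#!/usr/bin/env python
# Minimal sketch: forward one multimodal command stream to both a real
# and a simulated (Gazebo) wheelchair so their behaviour can be compared.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("wheelchair_controller")

# Topic names below are placeholders for illustration.
real_pub = rospy.Publisher("/wheelchair/cmd_vel", Twist, queue_size=10)
sim_pub = rospy.Publisher("/gazebo_wheelchair/cmd_vel", Twist, queue_size=10)

def on_command(msg):
    # The same velocity command drives both platforms simultaneously.
    real_pub.publish(msg)
    sim_pub.publish(msg)

rospy.Subscriber("/multimodal_interface/cmd_vel", Twist, on_command)
rospy.spin()
```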

2020

Vineyard trunk detection using deep learning - An experimental device benchmark

Authors
Pinto de Aguiar, ASP; Neves dos Santos, FBN; Feliz dos Santos, LCF; de Jesus Filipe, VMD; Miranda de Sousa, AJM;

Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE

Abstract
Research and development in mobile robotics are continuously growing. The ability of a human-made machine to navigate safely in a given environment is a challenging task. In agricultural environments, robot navigation can achieve high levels of complexity due to the harsh conditions that they present. Thus, the presence of a reliable map where the robot can localize itself is crucial, and feature extraction becomes a vital step of the navigation process. In this work, the feature extraction issue in the vineyard context is solved using Deep Learning to detect high-level features - the vine trunks. An experimental performance benchmark between two devices is performed: NVIDIA's Jetson Nano and Google's USB Accelerator. Several models were retrained and deployed on both devices, using a Transfer Learning approach. Specifically, MobileNets, Inception, and a lite version of You Only Look Once (YOLO) are used to detect vine trunks in real-time. The models were retrained on a publicly available in-house dataset. The training dataset contains approximately 1,600 annotated vine trunks in 336 different images. Results show that NVIDIA's Jetson Nano provides compatibility with a wider variety of Deep Learning architectures, while Google's USB Accelerator is limited to a single family of architectures for object detection. On the other hand, the Google device showed a higher overall Average Precision than the Jetson Nano, with better runtime performance. The best result obtained in this work was an Average Precision of 52.98% with a runtime performance of 23.14 ms per image, for MobileNet-V2. Recent experiments showed that the detectors are suitable for use in the Localization and Mapping context.
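In the spirit of the runtime benchmark reported above (e.g. the 23.14 ms per image for MobileNet-V2), the sketch below times average per-image inference for a TFLite model; an equivalent engine on the Jetson Nano would be timed the same way. The model path and run count are placeholders, not the paper's benchmark harness.

```python
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Model path is a placeholder for illustration.
interpreter = Interpreter(model_path="trunk_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()  # warm-up run, excluded from timing

N = 100
start = time.perf_counter()
for _ in range(N):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
elapsed_ms = (time.perf_counter() - start) * 1000 / N
print(f"average inference time: {elapsed_ms:.2f} ms per image")
```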
