Publications

Publications by CRIIS

2019

Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations

Authors
Costa, CM; Veiga, G; Sousa, A; Rocha, L; Augusto Sousa, AA; Rodrigues, R; Thomas, U;

Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)

Abstract
Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task, while also allowing the operator to navigate between the teaching steps and control the video playback through a bare-hands natural interaction interface that is projected into the workspace. Moreover, to help the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading the traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware, along with the 6 DoF pose estimation of objects for achieving precise overlap between the 3D rendered content and the physical world. The conceptualization of the information flow, and of how it can be conveyed on demand to the operator, was also of critical importance for ensuring a smooth and intuitive experience.
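
As a side note on the projector-modeling step mentioned above, a video projector is commonly treated as an inverse pinhole camera whose calibrated intrinsics are converted into an off-axis OpenGL projection matrix, so that rendered content lands on the physical surfaces the projector illuminates. The sketch below shows that generic conversion only; the function name, parameters, and sign conventions are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def opengl_projection_from_intrinsics(fx, fy, cx, cy, width, height, near, far):
    """Build an OpenGL-style projection matrix from pinhole intrinsics.

    Illustrative sketch: sign conventions depend on the image origin
    (top-left vs. bottom-left) and on the handedness of the scene, so the
    off-center terms may need flipping for a particular setup.
    """
    P = np.zeros((4, 4))
    P[0, 0] = 2.0 * fx / width
    P[1, 1] = 2.0 * fy / height
    P[0, 2] = 1.0 - 2.0 * cx / width       # horizontal principal-point offset
    P[1, 2] = 2.0 * cy / height - 1.0      # vertical offset (y flipped for OpenGL)
    P[2, 2] = -(far + near) / (far - near)
    P[2, 3] = -2.0 * far * near / (far - near)
    P[3, 2] = -1.0
    return P
```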

2019

Monocular Visual Odometry Benchmarking and Turn Performance Optimization

Authors
Aguiar, A; Sousa, A; dos Santos, FN; Oliveira, M;

Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)

Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge, mainly for two reasons: the harsh conditions of the terrain and the unstable localization accuracy obtained with the Global Navigation Satellite System. In this context, a reliable localization system requires accurate information that is redundant with respect to the Global Navigation Satellite System and wheel-odometry-based systems. To pursue this goal, we benchmark three well-known Visual Odometry methods on two datasets. Two of them are feature-based Visual Odometry algorithms, Libviso2 and SVO 2.0, and the third is an appearance-based Visual Odometry algorithm called DSO. In monocular Visual Odometry, two main problems arise: pure rotations and scale estimation. In this paper, we focus on the first issue. To address it, we propose a Kalman Filter that fuses a single gyroscope with the output pose of monocular Visual Odometry while continuously estimating the gyroscope's bias. In this approach we propose a non-linear noise variation that ensures that the bias estimation is not affected by the rotations resulting from Visual Odometry. We compare and discuss the three unchanged methods and the three methods with the proposed additional Kalman Filter. For the tests, two public datasets are used: the KITTI dataset and another built in-house. Results show that the additional Kalman Filter significantly improves Visual Odometry performance during rotational movements.
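
To make the gyroscope/VO fusion idea concrete, the following minimal sketch shows a generic two-state Kalman filter (yaw and gyroscope bias) that predicts with gyroscope rates and corrects with VO yaw measurements. The state vector, noise values, and the rotation-dependent measurement-noise inflation are illustrative assumptions, not the paper's actual non-linear noise variation.

```python
import numpy as np

class GyroVoYawFilter:
    """Minimal 2-state Kalman filter (yaw, gyro bias) fusing gyroscope
    integration with monocular VO yaw measurements. Sketch only."""

    def __init__(self, q_yaw=1e-4, q_bias=1e-6, r_vo=1e-2):
        self.x = np.zeros(2)                 # state: [yaw, gyro bias]
        self.P = np.eye(2)                   # state covariance
        self.Q = np.diag([q_yaw, q_bias])    # process noise (assumed values)
        self.r_vo = r_vo                     # nominal VO measurement noise

    def predict(self, gyro_rate, dt):
        # yaw <- yaw + (rate - bias) * dt ; bias modeled as a random walk
        self.x[0] += (gyro_rate - self.x[1]) * dt
        F = np.array([[1.0, -dt],
                      [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, vo_yaw, turning_rate=0.0):
        # Illustrative stand-in for a rotation-dependent noise variation:
        # trust the VO yaw less while the camera is rotating quickly.
        R = self.r_vo * (1.0 + 10.0 * abs(turning_rate))
        H = np.array([[1.0, 0.0]])
        y = vo_yaw - self.x[0]               # innovation
        S = H @ self.P @ H.T + R
        K = (self.P @ H.T) / S               # Kalman gain
        self.x += (K * y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```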

2019

Learning low level skills from scratch for humanoid robot soccer using deep reinforcement learning

Authors
Abreu, M; Lau, N; Sousa, A; Reis, LP;

Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)

Abstract
Reinforcement learning algorithms are now more appealing than ever. Recent approaches bring power and tuning simplicity within reach of everyday workstations. The possibilities are endless, and the idea of automating learning without domain knowledge is quite tempting for many researchers. However, in competitive environments such as the RoboCup 3D Soccer Simulation League, there is still a lot to be done regarding human-like behaviors. Current teams use many mechanical movements to perform basic skills, such as running and dribbling the ball. This paper aims to use the PPO algorithm to optimize those skills, achieving natural gaits without sacrificing performance. We use SimSpark to simulate a NAO humanoid robot, using visual and body sensors to control its actuators. Based on our results, we propose an indirect control approach and detailed parameter setups to obtain natural running and dribbling behaviors. The obtained performance is in some cases comparable to or better than that of the top RoboCup teams. However, some skills are not yet ready to be applied in competitive environments due to instability. This work contributes towards the improvement of RoboCup and some related technical challenges.
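
For readers unfamiliar with the optimization step, the core of PPO is its clipped surrogate objective, reproduced below in its generic textbook form. The hyper-parameter value and array-based formulation are illustrative; the paper's actual network architecture, reward shaping, and training setup are not shown here.

```python
import numpy as np

def ppo_clipped_objective(log_prob_new, log_prob_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective used by PPO (to be maximized).

    Generic form for illustration: arrays hold per-sample log-probabilities
    under the new and old policies and the estimated advantages.
    """
    ratio = np.exp(log_prob_new - log_prob_old)               # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))
```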

2019

Monocular Visual Odometry Using Fisheye Lens Cameras

Authors
Aguiar, A; dos Santos, FN; Santos, L; Sousa, A;

Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.

Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge, mainly for two reasons: the harsh conditions of the terrain and the unstable localization accuracy obtained with the Global Navigation Satellite System. In this context, a reliable localization system requires accurate information that is redundant with respect to the Global Navigation Satellite System and wheel-odometry-based systems. To pursue this goal and obtain a reliable localization system for our robotic platform, we aim to extract the best possible performance from a monocular Visual Odometry method. To do so, we present a benchmark of Libviso2 using both perspective and fisheye lens cameras, studying the behavior of the method with both camera types in terms of motion estimation performance in an outdoor environment. We also analyze the quality of the method's feature extraction with the two camera systems, studying the impact of the field of view and of omnidirectional image rectification on VO. We propose a general methodology to incorporate a fisheye lens camera system into a VO method. Finally, we briefly describe the robot setup that was used to generate the presented results. © 2019, Springer Nature Switzerland AG.
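
One common way to feed a fisheye camera into a perspective VO pipeline such as Libviso2 is to rectify each frame to a pinhole-like image first. The sketch below uses OpenCV's equidistant fisheye model for that purpose; it is only an illustration of the general approach, with the calibration inputs K and D assumed to come from a prior fisheye calibration, and it does not reproduce the methodology proposed in the paper.

```python
import cv2
import numpy as np

def rectify_fisheye(image, K, D, balance=0.0):
    """Undistort a fisheye frame to a pinhole-like image for perspective VO.

    K: 3x3 fisheye intrinsics, D: 4x1 distortion coefficients (assumed to be
    obtained with cv2.fisheye.calibrate). 'balance' trades retained field of
    view against black borders in the rectified image.
    """
    h, w = image.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    rectified = cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)
    return rectified, new_K
```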

2019

FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware

Authors
Aguiar, A; Santos, F; Sousa, AJ; Santos, L;

Publication
APPLIED SCIENCES-BASEL

Abstract
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor environments, namely agricultural ones, this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphics processing unit (GPU). This solution, named FAST-FUSION, offers three core contributions. The first is an extension of the state-of-the-art monocular visual odometry method Libviso2 to work with omnidirectional cameras and a single-axis gyroscope in order to increase the system's accuracy. The second is an algorithm that uses low-cost LIDAR data to estimate the motion scale and overcome the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses a Raspberry Pi GPU to improve the visual odometry runtime performance on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in terms of runtime and motion estimation accuracy.
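
The scale-recovery contribution can be illustrated with a simple idea: monocular VO yields translations that are correct only up to an unknown scale factor, which can be estimated by comparing them against metric displacements measured by the planar LIDAR. The sketch below shows a generic least-squares version of that ratio under those assumptions; the function name and inputs are hypothetical and the paper's actual algorithm may differ.

```python
import numpy as np

def estimate_vo_scale(vo_translations, lidar_displacements, eps=1e-6):
    """Estimate the metric scale of monocular VO from LIDAR-measured motion.

    vo_translations: (N, 3) per-frame VO translation vectors (up to scale).
    lidar_displacements: (N,) per-frame displacement magnitudes in metres.
    Returns the scale s minimizing sum_i (lidar_i - s * |vo_i|)^2.
    """
    vo = np.linalg.norm(np.asarray(vo_translations, dtype=float), axis=1)
    lidar = np.asarray(lidar_displacements, dtype=float)
    mask = vo > eps                          # ignore nearly static frames
    return float(np.dot(lidar[mask], vo[mask]) / np.dot(vo[mask], vo[mask]))
```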

2019

Prototyping and Programming a Multipurpose Educational Mobile Robot - NaSSIE

Authors
Pinto, VH; Monteiro, JM; Gonçalves, J; Costa, P;

Publication
Advances in Intelligent Systems and Computing

Abstract
NaSSIE (Navigation and Sensoring Skills in Engineering) is a platform developed to help engineering students acquire skills that are a core part of controlling a mobile robot. In this paper, the chosen hardware and the resulting physical construction of the prototype, as well as the vehicle's associated software, are presented. As a use case, this platform was tested during Robotic Day 2017 in the Czech Republic. Preliminary results of this year's preparation for the Micromouse competition are also presented. © 2019, Springer Nature Switzerland AG.
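
As an example of the kind of navigation skill such an educational platform exercises, the sketch below shows textbook dead-reckoning odometry for a differential-drive robot from wheel-encoder displacements. It is purely illustrative of the concept; the NaSSIE platform's actual hardware interface, kinematics, and control software are not described here.

```python
import math

def differential_drive_odometry(x, y, theta, d_left, d_right, wheel_base):
    """Update a 2D pose (x, y, theta) from left/right wheel displacements.

    d_left, d_right: wheel travel since the last update (metres).
    wheel_base: distance between the two wheels (metres).
    """
    d_center = 0.5 * (d_left + d_right)            # forward travel
    d_theta = (d_right - d_left) / wheel_base      # heading change
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return x, y, theta
```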
