Publications

Publications by Armando Sousa

2020

A Version of Libviso2 for Central Dioptric Omnidirectional Cameras with a Laser-Based Scale Calculation

Authors
Aguiar, A; Santos, F; Santos, L; Sousa, A;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Monocular Visual Odometry techniques represent a challenging and appealing research area in the field of robotic navigation. Using a single camera to track robot motion is a hardware-cheap solution. In this context, few Visual Odometry methods in the literature estimate the robot pose accurately using a single camera and no other source of information. The use of omnidirectional cameras in this field is still not consensual, although many works show that, for outdoor environments, their use does represent an improvement over conventional perspective cameras. In this work we propose an open-source monocular omnidirectional version of the state-of-the-art method Libviso2 that outperforms the original one even in outdoor scenes. This approach is suitable for central dioptric omnidirectional cameras and takes advantage of their wider field of view to calculate the robot motion, with strong performance in the context of monocular Visual Odometry. We also propose a novel approach to calculate the scale factor, which uses matches between laser measurements and 3-D triangulated feature points. The novelty of this work lies in the association of the laser ranges with the features in the omnidirectional image. Results were generated using three open-source datasets built in-house, showing that our unified system largely outperforms the original monocular version of Libviso2.
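The scale-recovery idea described in the abstract can be illustrated with a short sketch. This is not the paper's implementation: the association between laser ranges and triangulated feature points is assumed to be already established, and all names below are placeholders.

```python
import numpy as np

def estimate_scale(triangulated_points, laser_ranges):
    """Estimate the metric scale factor of an up-to-scale monocular VO estimate.

    triangulated_points : (N, 3) array of up-to-scale 3-D feature points,
                          expressed in the sensor frame.
    laser_ranges        : (N,) array of metric laser range measurements assumed
                          to correspond to the same physical points.

    Returns the median ratio between measured and triangulated distances,
    which tolerates a moderate number of wrong associations.
    """
    triangulated_points = np.asarray(triangulated_points, dtype=float)
    laser_ranges = np.asarray(laser_ranges, dtype=float)

    triangulated_depths = np.linalg.norm(triangulated_points, axis=1)
    valid = triangulated_depths > 1e-6          # avoid division by zero
    ratios = laser_ranges[valid] / triangulated_depths[valid]
    return float(np.median(ratios))

# Hypothetical usage: rescale the up-to-scale translation from the VO step.
# t_metric = estimate_scale(points_3d, ranges) * t_unit_scale
```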

2019

Monocular Visual Odometry Using Fisheye Lens Cameras

Authors
Aguiar, A; dos Santos, FN; Santos, L; Sousa, A;

Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.

Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge due to two main reasons: the harsh conditions of the terrain and the unstable localization accuracy obtained with Global Navigation Satellite Systems. In this context, a reliable localization system requires accurate information that is redundant to the Global Navigation Satellite System and to wheel-odometry-based systems. To pursue this goal and obtain a reliable localization system on our robotic platform, we aim to extract the best possible performance from a monocular Visual Odometry method. To do so, we present a benchmark of Libviso2 using both perspective and fisheye lens cameras, studying the behavior of the method with both camera types in terms of motion performance in an outdoor environment. We also analyze the quality of the method's feature extraction with the two camera systems, studying the impact of the field of view and of omnidirectional image rectification on VO. We propose a general methodology to incorporate a fisheye lens camera system into a VO method. Finally, we briefly describe the robot setup used to generate the presented results. © 2019, Springer Nature Switzerland AG.
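As an illustration of how a fisheye image can be rectified before being fed to a perspective-camera VO front end such as Libviso2, the sketch below uses OpenCV's fisheye module. The intrinsics and distortion coefficients are hypothetical placeholders; the paper's own calibration and methodology are not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical intrinsics and fisheye distortion coefficients; in practice
# these come from a calibration of the actual camera.
K = np.array([[350.0, 0.0, 640.0],
              [0.0, 350.0, 480.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, -0.002, 0.0005]).reshape(4, 1)  # k1..k4

def rectify_fisheye(image, K, D, balance=0.0):
    """Undistort a fisheye image so a perspective-camera VO method can
    consume it. A larger `balance` keeps more of the original field of
    view at the cost of stronger stretching near the borders."""
    h, w = image.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)
```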

2020

Web Based Robotic Simulator for Tactode Tangible Block Programming System

Authors
Alves, M; Sousa, A; Cardoso, A;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Nowadays, with the growth of technology, it is important to adapt children's education to this development. This article proposes programming blocks that allow young students to learn concepts related to mathematics and technology in an easy and fun way, using a web application and a robot. The students can build a puzzle with tangible tiles, giving instructions for the robot to execute. They can then take a photograph of the puzzle and upload it to the application. This photograph is processed and converted into executable code for the robot, which can be run in the app by the virtual robot or executed on the real robot.
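A minimal sketch of the tile-to-command translation step follows. The tile names and the robot interface are hypothetical placeholders, not the actual Tactode tile set or robot API.

```python
# Map recognised tile names to actions on a robot object that exposes
# move()/rotate() methods; both the simulator and the real robot would
# need to provide the same interface. All names here are illustrative.
COMMANDS = {
    "forward": lambda robot, arg: robot.move(float(arg)),
    "turn":    lambda robot, arg: robot.rotate(float(arg)),
}

def run_program(tiles, robot):
    """Execute a linear sequence of recognised tiles.

    `tiles` is a list of (name, argument) pairs produced by the image
    processing stage, e.g. [("repeat", "3"), ("forward", "0.2")]."""
    i = 0
    while i < len(tiles):
        name, arg = tiles[i]
        if name == "repeat":
            # Repeat the next tile `arg` times (simplified control flow).
            next_name, next_arg = tiles[i + 1]
            for _ in range(int(arg)):
                COMMANDS[next_name](robot, next_arg)
            i += 2
        else:
            COMMANDS[name](robot, arg)
            i += 1
```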

2020

Smart Data Visualisation as a Stepping Stone for Industry 4.0 - a Case Study in Investment Casting Industry

Authors
Cruz, AB; Sousa, A; Cardoso, A; Valente, B; Reis, A;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
With present-day industries pressing for the retrofitting of current machinery towards Industry 4.0 ideas, a large effort is put into data production, storage and analysis. To be able to use such data, it is fundamental to create intelligent software for analysing and visualising a growing but frequently faulty amount of data, which does not yet have the quality and quantity adequate for full-blown data mining techniques. This article presents a case study of a foundry company that uses the lost wax method to produce metal parts. As retrofitting is underway, modelling, simulation and smart data visualisation are proposed as methods to overcome data shortages in quantity and quality. The developed data visualisation system is shown to be adapted to the requirements and needs of this company as it moves towards full automation. Such a data visualisation system allows workers and supervisors to know in real time what is happening in the factory, or to study the passage of manufacturing orders through a specific area. Data analysts can also predict machinery problems, correct issues with slow-changing deviations and gather additional knowledge about the implementation of the process itself.
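As a rough illustration of the kind of real-time status view described above, the sketch below aggregates manufacturing orders per production area and traces a single order. The data source and column names are assumptions, not the company's actual schema.

```python
import pandas as pd

# Assumed CSV export of manufacturing orders with one row per area visit.
orders = pd.read_csv("manufacturing_orders.csv",
                     parse_dates=["entered_area_at"])

# Current load per production area: how many open orders sit in each stage.
open_orders = orders[orders["status"] == "in_progress"]
load_by_area = (open_orders.groupby("area")["order_id"]
                .count()
                .sort_values(ascending=False))
print(load_by_area)

# Trace the passage of one (hypothetical) manufacturing order through the areas.
trace = (orders[orders["order_id"] == "MO-0001"]
         .sort_values("entered_area_at")[["area", "entered_area_at"]])
print(trace)
```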

2020

Applying Software Static Analysis to ROS: The Case Study of the FASTEN European Project

Authors
Neto, T; Arrais, R; Sousa, A; Santos, A; Veiga, G;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Modern industry is shifting towards flexible, advanced robotic systems to meet the increasing demand for custom-made products with low manufacturing costs and to promote a collaborative environment for humans and robots. As a consequence of this industrial revolution, some traditional, mechanical- and hardware-based safety mechanisms are discarded in favour of safer, more dependable robot software. This work presents a case study of assessing and improving the internal quality of a European research mobile manipulator, operating in a real industrial environment, using modern static analysis tools geared towards robotic software. Following an iterative approach, we managed to fix about 90% of the reported issues, resulting in code that is easier to use and maintain.
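The iterative analyse-and-fix workflow can be sketched with a generic static analysis tool; the exact ROS-oriented toolchain used in the paper is not reproduced here, and the package path below is hypothetical.

```python
import subprocess

def count_issues(package_src_dir):
    """Run cppcheck over a source directory and return the number of
    reported findings, which can be tracked across fixing iterations."""
    result = subprocess.run(
        ["cppcheck", "--enable=warning,style", "--quiet",
         "--template={file}:{line}: {severity}: {id}", package_src_dir],
        capture_output=True, text=True)
    # cppcheck writes its findings to stderr, one per line with this template.
    findings = [line for line in result.stderr.splitlines() if line.strip()]
    return len(findings)

if __name__ == "__main__":
    print("open issues:", count_issues("src/my_ros_package"))  # hypothetical path
```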

2020

BulbRobot - Inexpensive Open Hardware and Software Robot Featuring Catadioptric Vision and Virtual Sonars

Authors
Ferreira, J; Coelho, F; Sousa, A; Reis, LP;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
This article proposes a feature-rich, open-hardware, open-software, inexpensive robot based on a Waveshare AlphaBot 2. The proposal uses a Raspberry Pi and a chrome-plated light bulb as a mirror to produce a robot with an omnidirectional (catadioptric) vision system. The system also tackles boot and network issues to allow monitor-less programming and usage, thus further reducing usage costs. The OpenCV library is used for image processing, and obstacles are identified based on their brightness and saturation in contrast to the ground. Our solution achieved acceptable frame rates and near-perfect object detection at distances up to 1.5 m. The robot is usable for simple robotic demonstrations and educational purposes thanks to its simplicity and flexibility.
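The brightness/saturation-based obstacle detection can be sketched with OpenCV as follows; the threshold values are illustrative assumptions and would need tuning for the actual floor and lighting.

```python
import cv2
import numpy as np

def detect_obstacles(bgr_image):
    """Segment likely obstacles by their brightness/saturation contrast
    with the ground. Thresholds below are illustrative only."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1]
    value = hsv[:, :, 2]

    # Assume the ground appears bright and weakly saturated; pixels that are
    # dark or strongly saturated are flagged as potential obstacles.
    obstacle_mask = ((value < 80) | (saturation > 120)).astype(np.uint8) * 255

    # Clean up small speckles, then return the mask and obstacle contours.
    obstacle_mask = cv2.morphologyEx(obstacle_mask, cv2.MORPH_OPEN,
                                     np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(obstacle_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return obstacle_mask, contours
```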
