
Publications by Filipe Neves Santos

2019

Monocular Visual Odometry Using Fisheye Lens Cameras

Authors
Aguiar, A; dos Santos, FN; Santos, L; Sousa, A;

Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.

Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge for two main reasons: the harsh conditions of the terrain and the unstable localization accuracy obtained with the Global Navigation Satellite System (GNSS). In this context, a reliable localization system requires accurate information that is redundant with respect to GNSS and wheel-odometry-based systems. To pursue this goal and obtain a reliable localization system for our robotic platform, we aim to extract the best possible performance from a monocular Visual Odometry (VO) method. To do so, we present a benchmark of Libviso2 using both perspective and fisheye lens cameras, studying the behavior of the method with both topologies in terms of motion performance in an outdoor environment. We also analyze the quality of the method's feature extraction with the two camera systems, studying the impact of the field of view and of omnidirectional image rectification on VO. We propose a general methodology to incorporate a fisheye lens camera system into a VO method. Finally, we briefly describe the robot setup used to generate the presented results. © 2019, Springer Nature Switzerland AG.
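As a rough illustration of the kind of pre-processing such a pipeline implies, the sketch below rectifies a fisheye frame into a perspective view with OpenCV's fisheye module before handing it to a pinhole-model VO front end such as Libviso2. The intrinsics K and D are placeholder calibration values and the function is an assumption about the general approach, not the paper's exact methodology.

```python
# Minimal sketch: undistort a fisheye frame into a perspective (pinhole) view
# so it can feed a pinhole-model VO method. K and D are placeholder fisheye
# intrinsics; real values come from a prior calibration (cv2.fisheye.calibrate).
import cv2
import numpy as np

K = np.array([[285.0, 0.0, 320.0],
              [0.0, 285.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.002, -0.0005])  # placeholder distortion coeffs

def rectify_fisheye(frame, balance=0.0):
    """Undistort one fisheye frame; return the image and its new camera matrix."""
    h, w = frame.shape[:2]
    # The new camera matrix trades retained field of view against rectification
    # artifacts (balance=0 crops aggressively, balance=1 keeps the full FOV).
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR), new_K

# rectified, new_K = rectify_fisheye(cv2.imread("frame.png"))
# `rectified` and `new_K` can then be passed to a standard monocular VO front end.
```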

2019

FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware

Authors
Aguiar, A; Santos, F; Sousa, AJ; Santos, L;

Publication
APPLIED SCIENCES-BASEL

Abstract
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor, namely agricultural, environments this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low cost graphical processing unit (GPU). This solution, named FAST-FUSION, offers the scientific community three core contributions. The first is an extension of the state-of-the-art monocular visual odometry method (Libviso2) to work with omnidirectional cameras and a single-axis gyroscope to increase system accuracy. The second is an algorithm that uses low cost LIDAR data to estimate the motion scale and overcome the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses a Raspberry Pi GPU to improve the visual odometry runtime performance on low cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low cost hardware and outperforms the original Libviso2 approach in terms of time performance and motion estimation accuracy.
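The scale-recovery idea can be illustrated with a toy sketch: monocular VO yields a translation direction with unknown scale, and an independent metric displacement (here assumed to come from matching consecutive planar LIDAR scans) fixes that scale. This is a deliberate simplification for illustration, not the FAST-FUSION algorithm itself.

```python
# Toy sketch of metric-scale recovery for monocular VO: rescale the
# up-to-scale VO translation so its length matches an independently
# measured metric displacement (assumed here to come from LIDAR scan
# matching over the same time interval). Illustrative only.
import numpy as np

def rescale_vo_translation(t_vo, lidar_displacement_m):
    """Rescale an up-to-scale VO translation vector to metric units.

    t_vo                 -- 3-vector translation from monocular VO (scale unknown)
    lidar_displacement_m -- metric distance travelled over the same interval
    """
    norm = np.linalg.norm(t_vo)
    if norm < 1e-9:               # no measurable motion; return zero translation
        return np.zeros(3)
    return t_vo * (lidar_displacement_m / norm)

# Example: VO reports a direction-only step, LIDAR says the robot moved 0.42 m.
t_metric = rescale_vo_translation(np.array([0.8, 0.0, 0.6]), 0.42)
print(t_metric)  # direction of t_vo, scaled to 0.42 m
```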

2020

Deep Learning Applications in Agriculture: A Short Review

Authors
Santos, L; Santos, FN; Oliveira, PM; Shinde, P;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Deep learning (DL) is a modern technique for image processing and big data analysis with large potential. Although a recent tool in the agricultural domain, it has already been successfully applied in other domains. This article surveys different deep learning techniques applied to various agricultural problems, such as disease detection/identification, fruit/plant classification, and fruit counting, among others. The paper analyses the specific models employed, the source of the data, the performance of each study, the hardware employed, and the possibility of real-time application, in order to study eventual integration with autonomous robotic platforms. The conclusions indicate that deep learning provides high-accuracy results, surpassing, with occasional exceptions, alternative traditional image processing techniques in terms of accuracy.

2020

Forest Robot and Datasets for Biomass Collection

Authors
Reis, R; dos Santos, FN; Santos, L;

Publication
FOURTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, ROBOT 2019, VOL 1

Abstract
Portugal has witnessed some of its largest wildfires in the last decade, due to the lack of forestry management and valuation strategies. A cost-effective biomass collection tool/approach can increase forest value and serve as a tool to reduce fire risk in the forest. However, cost-effective forestry machinery/solutions are needed to harvest this biomass. Most larger forest operations are already highly mechanized, but smaller operations are not. Mobile robotics know-how combined with new virtual reality and remote sensing techniques has paved the way for a new robotics perspective on work machines in the forest. Navigation in a forest is still a challenge: there is a lot of information, trees constitute obstacles, lower vegetation may hide dangers for the robot's trajectory, and the terrain in our region is mostly steep. Accurate information about the environment is crucial for the navigation process and for biomass inventory. This paper presents a prototype forest robot for biomass collection. In addition, it provides a dataset of different forest environments, containing data from sensors such as 3D laser scanners, a thermal camera, inertial units, GNSS, and an RGB camera. These datasets are meant to provide information for the study of forest terrain, enabling further development and research on navigation planning, biomass analysis, task planning, and other information that professionals in this field may require.

2020

Path Planning Aware of Robot's Center of Mass for Steep Slope Vineyards

Authors
Santos, L; Santos, F; Mendes, J; Costa, P; Lima, J; Reis, R; Shinde, P;

Publication
ROBOTICA

Abstract
Steep slope vineyards are a complex scenario for the development of ground robots. Planning a safe robot trajectory is one of the biggest challenges in this scenario, characterized by irregular surfaces and strong slopes (more than 35 degrees). Moving the robot through a pile of stones, spots with a high slope, and/or with the wrong robot yaw may result in an abrupt fall, damaging the equipment and centenary vines, and sometimes injuring humans. This paper presents a novel approach for path planning aware of the robot's center of mass, for application on sloped terrain. Agricultural robotic path planning (AgRobPP) is a framework built on the A* algorithm, extending its inner functions to deal with three main inputs: a multi-layer occupation grid map, an altitude map, and the robot's center of mass. This multi-layer grid map is updated with obstacles taking into account the terrain slope and the maximum robot posture. AgRobPP is also extended with algorithms for local trajectory replanning when the execution of a trajectory is blocked by the presence of an obstacle, always assuring the safety of the re-planned path. AgRobPP includes a novel PointCloud translator algorithm, called PointCloud to grid map and digital elevation model (PC2GD), which extracts the occupation grid map and the digital elevation model from a PointCloud; these can be used both in the AgRobPP core algorithms and in farm management intelligent systems. AgRobPP algorithms demonstrate great performance with real data acquired from AgRob V16, a robotic platform developed for autonomous navigation in steep slope vineyards.
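To make the core idea concrete, the sketch below runs A* on a 2D grid where an altitude map gates traversability: a step is allowed only if the local rise stays within a maximum slope, and steeper steps cost more. It is an illustrative simplification under assumed parameters (cell size, 35-degree limit), not the AgRobPP implementation, which additionally models robot yaw and the center of mass.

```python
# Minimal sketch of slope-aware grid planning: A* over a 2D grid whose
# traversability and step costs are derived from an altitude map.
# Parameters (cell size, 35-degree limit) are illustrative assumptions.
import heapq
import math
import numpy as np

def astar_slope_aware(altitude, start, goal, cell_size=0.5, max_slope_deg=35.0):
    rows, cols = altitude.shape
    max_rise = cell_size * math.tan(math.radians(max_slope_deg))

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                rise = abs(altitude[nr, nc] - altitude[r, c])
                if rise <= max_rise:                       # posture/slope constraint
                    yield (nr, nc), cell_size + 2.0 * rise  # penalize steep steps

    def h(cell):  # straight-line heuristic on the flat grid (admissible)
        return cell_size * math.hypot(cell[0] - goal[0], cell[1] - goal[1])

    g_score, came_from, closed = {start: 0.0}, {}, set()
    open_set = [(h(start), start)]
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:                 # reconstruct path back to start
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for nxt, step in neighbors(cell):
            ng = g_score[cell] + step
            if ng < g_score.get(nxt, float("inf")):
                g_score[nxt] = ng
                came_from[nxt] = cell
                heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None  # goal unreachable under the slope constraint

# Example: a 20x20 flat map with a steep ridge that leaves a gap at the bottom;
# the planner routes around the ridge instead of climbing it.
alt = np.zeros((20, 20)); alt[0:15, 10] = 5.0
print(astar_slope_aware(alt, (0, 0), (19, 19)))
```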

2020

Visual Trunk Detection Using Transfer Learning and a Deep Learning-Based Coprocessor

Authors
Aguiar, AS; Dos Santos, FN; Miranda De Sousa, AJM; Oliveira, PM; Santos, LC;

Publication
IEEE ACCESS

Abstract
Agricultural robotics is nowadays a complex, challenging, and exciting research topic. Some agricultural environments present harsh conditions to robotics operability. In the case of steep slope vineyards, there are several challenges: terrain irregularities, illumination characteristics, and inaccuracy/unavailability of signals emitted by the Global Navigation Satellite System (GNSS). Under these conditions, robotic navigation becomes a challenging task. To perform these tasks safely and accurately, the extraction of reliable features or landmarks from the surrounding environment is crucial. This work intends to solve this issue, performing accurate, cheap, and fast landmark extraction in the steep slope vineyard context. To do so, we used a single camera and an Edge Tensor Processing Unit (TPU) provided by Google's USB Accelerator, a small, high-performance, low-power unit suitable for image classification, object detection, and semantic segmentation. The proposed approach performs object detection on this device using Deep Learning (DL)-based Neural Network (NN) models to detect vine trunks. To train the models, Transfer Learning (TL) is applied to several pre-trained versions of MobileNet V1 and MobileNet V2, and a benchmark between the two models and the different pre-trained versions is performed. The models are trained on a publicly available in-house dataset containing 336 different images with approximately 1,600 annotated vine trunks, covering two vineyards, one imaged with the conventional infrared filter and the other with an infrablue filter. Results show that this configuration allows fast vine trunk detection, with MobileNet V2 being the most accurate retrained detector, achieving an overall Average Precision of 52.98%. We briefly compare the proposed approach with the state-of-the-art Tiny YOLO-V3 running on a Jetson TX2, showing that the system adopted in this work outperforms it. Additionally, it is also shown that the proposed detectors are suitable for Localization and Mapping problems.
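For context, a detector of this kind is typically deployed as a quantized TFLite model executed through the Edge TPU delegate. The sketch below shows that inference path with tflite_runtime; the model file name, output-tensor ordering, and score threshold are assumptions for illustration (the common SSD post-processed export layout), not the paper's released artifacts.

```python
# Minimal sketch of SSD-MobileNet inference on a Coral USB Accelerator
# (Edge TPU) via tflite_runtime. The model path is a placeholder and the
# output-tensor ordering assumes the usual TFLite SSD post-processed export:
# boxes (0), classes (1), scores (2), count (3).
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="trunk_detector_edgetpu.tflite",          # placeholder model file
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
_, height, width, _ = inp["shape"]                       # e.g. [1, 300, 300, 3]

def detect_trunks(image_path, score_threshold=0.5):
    """Run one frame through the detector; return [(score, box), ...]."""
    img = Image.open(image_path).convert("RGB").resize((width, height))
    interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img), 0))
    interpreter.invoke()
    out = interpreter.get_output_details()
    boxes = interpreter.get_tensor(out[0]["index"])[0]   # [ymin, xmin, ymax, xmax]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    return [(float(s), b.tolist()) for s, b in zip(scores, boxes)
            if s >= score_threshold]

# for score, box in detect_trunks("vine_row.jpg"): print(score, box)
```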
