Publications

Publications by CRAS

2021

Exploiting Motion Perception in Depth Estimation Through a Lightweight Convolutional Neural Network

Authors
Leite, PN; Pinto, AM;

Publication
IEEE ACCESS

Abstract
Understanding the surrounding 3D scene is of the utmost importance for many robotic applications. The rapid evolution of machine learning techniques has enabled impressive results when depth is extracted from a single image. However, high-latency networks are required to achieve these results, rendering them unusable for time-constrained applications. This article introduces NEON, a lightweight Convolutional Neural Network (CNN) for depth estimation designed to balance accuracy and inference time. Instead of focusing solely on visual features, the proposed methodology exploits the Motion-Parallax effect to combine the apparent motion of pixels with texture. This research demonstrates that motion perception provides crucial insight about the magnitude of movement for each pixel, which also encodes cues about depth, since large displacements usually occur when objects are closer to the imaging sensor. NEON's performance is compared to relevant networks in terms of Root Mean Squared Error (RMSE), the percentage of correctly predicted pixels (delta(1)) and inference time, using the KITTI dataset. Experiments show that NEON is significantly more efficient than the current top-ranked network, producing predictions 12 times faster while achieving an average RMSE of 3.118 m and a delta(1) of 94.5%. Ablation studies demonstrate the relevance of tailoring the network to use motion perception principles when estimating depth from image sequences, given that the effectiveness and quality of the estimated depth maps are similar to those of more computationally demanding state-of-the-art networks. Therefore, this research proposes a network that can be integrated into robotic applications where computational resources and processing times are important constraints, enabling tasks such as obstacle avoidance, object recognition and robotic grasping.
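The two evaluation metrics named in the abstract, RMSE and delta(1), are standard in depth estimation benchmarks. A minimal sketch of how they are conventionally computed (this is an illustration with made-up depth values, not the authors' evaluation code; the 1.25 ratio threshold for delta(1) is the usual convention):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Compute RMSE (metres) and delta(1) accuracy for depth maps.

    delta(1) is the fraction of pixels whose ratio
    max(pred/gt, gt/pred) falls below the conventional 1.25 threshold.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    valid = gt > 0                      # ignore pixels without ground truth
    pred, gt = pred[valid], gt[valid]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)
    return rmse, delta1

# Toy example: three pixels, one prediction off by 1 m.
rmse, delta1 = depth_metrics([2.0, 4.0, 10.0], [2.0, 5.0, 10.0])
```

Lower RMSE and higher delta(1) are better; the abstract's reported 3.118 m RMSE and 94.5% delta(1) are averages over the KITTI test set.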

2021

Multiple Vessel Detection and Tracking in Harsh Maritime Environments

Authors
Duarte D.F.; Pereira M.I.; Pinto A.M.;

Publication
Oceans Conference Record (IEEE)

Abstract
Recently, research concerning the navigation of Autonomous Surface Vehicles (ASVs) has been increasing. However, large-scale deployment of these vessels is still held back by a plethora of challenges, such as multi-object tracking. This article presents the development of a tracking model through transfer learning techniques, based on reference object trackers for urban scenarios. The work consisted of training a neural network through deep learning techniques, including data association and a comparison of three different optimisers (Adadelta, Adam and SGD), determining the best hyper-parameters to maximise training efficiency. The developed model achieved solid performance at tracking large vessels in the ocean, remaining successful even under harsh lighting conditions and poor image focus.

2021

Autonomous High-Resolution Image Acquisition System for Plankton

Authors
Resende, J; Barbosa, P; Almeida, J; Martins, A;

Publication
2021 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
This paper presents a high-resolution imaging system developed for plankton imaging in the context of the MarinEye integrated biological sensor [1]. This sensor aims to provide an autonomous system for integrated physical, chemical and biological marine monitoring, combining imaging, acoustic, sonar, and fraction filtration systems (coupled to DNA/RNA preservation), as well as sensors targeting physical-chemical variables, in a modular and compact package that can be deployed on fixed and mobile platforms, such as the TURTLE robotic deep-sea lander [2]. The results obtained with the system, both in laboratory conditions and in the field, are presented and discussed, allowing the characterization and validation of the performance of the Autonomous High-Resolution Image Acquisition System for Plankton.

2021

Emergency Landing Spot Detection Algorithm for Unmanned Aerial Vehicles

Authors
Loureiro, G; Dias, A; Martins, A; Almeida, J;

Publication
REMOTE SENSING

Abstract
The use and study of Unmanned Aerial Vehicles (UAVs) have been increasing over the years due to their applicability in operations such as search and rescue, delivery, and surveillance. Considering the increased presence of these vehicles in the airspace, it becomes necessary to reflect on the safety issues or failures that UAVs may experience and on the appropriate responses. Moreover, in many missions the vehicle will not return to its original location; if it fails to arrive at the landing spot, it needs the onboard capability to estimate the best area in which to land safely. This paper addresses the scenario of detecting a safe landing spot during operation. The algorithm classifies the incoming Light Detection and Ranging (LiDAR) data and stores the locations of suitable areas. The developed method analyses geometric features of the point cloud data and detects potentially suitable spots. The algorithm uses Principal Component Analysis (PCA) to find planes in point cloud clusters; areas whose slope is below a threshold are considered potential landing spots. These spots are then evaluated with respect to ground and vehicle conditions, such as the distance to the UAV, the presence of obstacles, the area's roughness, and the spot's slope. Finally, the output of the algorithm is the optimal landing spot, which can vary during operation. The algorithm is evaluated in simulated scenarios and on an experimental dataset, demonstrating its suitability for real-time operation.
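The PCA-based slope test described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the plane normal is taken as the eigenvector of the cluster's covariance matrix with the smallest eigenvalue, and the 30-degree slope threshold is an assumed value for demonstration:

```python
import numpy as np

def is_flat_enough(points, max_slope_deg=30.0):
    """Fit a plane to a 3D point cluster via PCA and test its slope.

    The plane normal is the eigenvector of the covariance matrix
    associated with the smallest eigenvalue; the slope is the angle
    between that normal and the vertical (z) axis.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-eigenvalue eigenvector
    slope = np.degrees(np.arccos(abs(normal[2])))
    return slope <= max_slope_deg

# A near-horizontal patch passes; a steep incline does not.
flat = [(x, y, 0.01 * x) for x in range(5) for y in range(5)]
steep = [(x, y, 2.0 * x) for x in range(5) for y in range(5)]
```

In a full pipeline this test would run on each cluster extracted from the LiDAR point cloud, with the surviving spots ranked by the additional criteria the abstract lists (distance to the UAV, obstacles, roughness).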

2021

LiDAR-based Power Assets Extraction based on Point Cloud Data

Authors
Amado, M; Lopes, F; Dias, A; Martins, A;

Publication
IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2021, Santa Maria da Feira, Portugal, April 28-29, 2021

Abstract

2021

Graph-SLAM Approach for Indoor UAV Localization in Warehouse Logistics Applications

Authors
Moura, A; Antunes, J; Dias, A; Martins, A; Almeida, J;

Publication
2021 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
Unmanned Aerial Vehicles (UAVs) are a key ingredient in industry and in the digital transformation of warehouse logistics, providing the ability to perform automatic cyclic counting and real-time inventory, localize hard-to-find items, and reach narrow storage areas. The use of UAVs poses new challenges, such as indoor autonomous localization and navigation, collision avoidance, and automated UAV fleet management. This paper addresses the development of a vision-based Graph-SLAM approach for UAV indoor localization without predefined warehouse marker positions. A framework is proposed and developed to support different commercial UAV platforms, allowing real-time estimation of the UAV position and attitude. Indoor experimental tests were carried out to evaluate the performance of the developed method, comparing the results with an indoor localization approach based on pre-mapped marker positions.
