
Publications by CRAS

2023

Methodological insights from unmanned system technologies in a rock quarry environment and geomining heritage site: coupling LiDAR-based mapping and GIS geovisualisation techniques

Authors
Pires, A; Dias, A; Silva, P; Ferreira, A; Rodrigues, P; Santos, T; Oliveira, A; Freitas, L; Martins, A; Almeida, J; Silva, E; Chaminé, HI;

Publication
Arabian Journal of Geosciences

Abstract

2023

ArTuga: A novel multimodal fiducial marker for aerial robotics

Authors
Claro, RM; Silva, DB; Pinto, AM;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
For Vertical Take-Off and Landing Unmanned Aerial Vehicles (VTOL UAVs) to operate autonomously and effectively, it is mandatory to endow them with precise landing abilities. The UAV has to be able to detect the landing target and to perform the landing maneuver without compromising its own safety and the integrity of its surroundings. However, current UAVs do not present the required robustness and reliability for precise landing in highly demanding scenarios, particularly due to their inadequacy to perform accordingly under challenging lighting and weather conditions, including in day and night operations. This work proposes a multimodal fiducial marker, named ArTuga (Augmented Reality Tag for Unmanned vision-Guided Aircraft), capable of being detected by a heterogeneous perception system for accurate and precise landing in challenging environments and daylight conditions. This research combines photometric and radiometric information by proposing a real-time multimodal fusion technique that ensures robust and reliable detection of the landing target in severe environments. Experimental results using a real multicopter UAV show that the system was able to detect the proposed marker in adverse conditions (such as at different heights, with intense sunlight, and in dark environments). The average accuracy for position estimation at 1 m height was 0.0060 m, with a standard deviation of 0.0003 m. Precise landing tests obtained an average deviation of 0.027 m from the proposed marker, with a standard deviation of 0.026 m. These results demonstrate the relevance of the proposed system for precise landing in adverse conditions, such as day and night operations in harsh weather. (c) 2023 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

2023

End-to-End Detection of a Landing Platform for Offshore UAVs Based on a Multimodal Early Fusion Approach

Authors
Neves, FS; Claro, RM; Pinto, AM;

Publication
SENSORS

Abstract
A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are the most common choices of sensors for environmental awareness. Relying on a single source of information is prone to failure under specific environmental conditions (e.g., visual cameras are affected by glare or dark environments). Thus, relying on different sensors is an essential step to introduce robustness against various environmental conditions. Hence, a perception system with sensor fusion capabilities produces the redundant and reliable awareness critical for real-world systems. This paper proposes a novel early fusion module that remains reliable against individual cases of sensor failure when detecting an offshore maritime platform for UAV landing. The model explores the early fusion of a still unexplored combination of visual, infrared, and LiDAR modalities. The contribution is a simple methodology that facilitates the training and inference of a lightweight state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls of up to 99% for all cases of sensor failure and extreme weather conditions such as glare, darkness, and fog, with real-time inference times below 6 ms.
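As an illustration only (not the paper's actual implementation), early fusion of the kind described above can be sketched as channel-level concatenation: per-pixel modalities are stacked into one multi-channel tensor before being fed to a single detector, so a failed sensor simply contributes an empty channel. The function and array shapes below are hypothetical.

```python
import numpy as np

def early_fuse(rgb, thermal, lidar_depth):
    """Stack per-pixel modalities into one multi-channel input.

    rgb:         (H, W, 3) float array in [0, 1]
    thermal:     (H, W)    float array in [0, 1]
    lidar_depth: (H, W)    float array in [0, 1] (normalised range image)
    Returns a (H, W, 5) array that a single detector can consume.
    """
    thermal = thermal[..., np.newaxis]
    lidar_depth = lidar_depth[..., np.newaxis]
    return np.concatenate([rgb, thermal, lidar_depth], axis=-1)

# Simulate an individual sensor failure by zeroing the thermal channel:
h, w = 4, 4
fused = early_fuse(np.random.rand(h, w, 3),
                   np.zeros((h, w)),        # thermal sensor failure
                   np.random.rand(h, w))
```

Because the detector always sees the same input layout, a missing modality degrades the input gracefully instead of requiring a separate model per sensor combination.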

2023

Energy Efficient Path Planning for 3D Aerial Inspections

Authors
Claro, RM; Pereira, MI; Neves, FS; Pinto, AM;

Publication
IEEE ACCESS

Abstract
The use of Unmanned Aerial Vehicles (UAVs) in different inspection tasks is increasing. This technology reduces inspection costs and collects high-quality data of distinct structures, including areas that are not easily accessible by human operators. However, the reduced energy available on UAVs limits their flight endurance. To increase the autonomy of a single flight, it is important to optimize the path performed by the UAV in terms of energy loss. Therefore, this work presents a novel formulation of the Travelling Salesman Problem (TSP) and a path planning algorithm that uses a UAV energy model to solve this optimization problem. The novel TSP formulation is defined as the Asymmetric Travelling Salesman Problem with Precedence Loss (ATSP-PL), where the cost of moving the UAV depends on the previous position. The energy model relates each UAV movement with its energy consumption, while the path planning algorithm minimizes the energy loss of the UAV while ensuring that the structure is fully covered. The developed algorithm was tested in both simulated and real scenarios. The simulated experiments were performed with realistic models of wind turbines and a UAV, whereas the real experiments were performed with a real UAV and an illumination tower. The generated inspection paths showed improvements of over 24% and 8% compared with other methods, for the simulated and real experiments respectively, optimizing the energy consumption of the UAV.
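The defining feature of the ATSP-PL formulation described above is that an edge's cost depends on the node visited before the current one. A minimal brute-force sketch of such a problem is shown below; the toy heights and cost function are invented for illustration and are not the paper's energy model or algorithm.

```python
import itertools

def tour_cost(order, cost):
    """Sum cost(prev, cur, nxt) along a tour; the edge cost cur -> nxt
    depends on the node prev visited before cur (the 'precedence loss')."""
    total, prev = 0.0, None
    for cur, nxt in zip(order, order[1:]):
        total += cost(prev, cur, nxt)
        prev = cur
    return total

def brute_force_atsp_pl(nodes, cost, start):
    """Exhaustively search all tours from `start` (toy sizes only)."""
    rest = [n for n in nodes if n != start]
    best = None
    for perm in itertools.permutations(rest):
        order = (start,) + perm
        c = tour_cost(order, cost)
        if best is None or c < best[0]:
            best = (c, order)
    return best

# Hypothetical inspection waypoints with altitudes (metres):
heights = {'A': 0, 'B': 10, 'C': 5, 'D': 2}

def cost(prev, cur, nxt):
    # Base cost: altitude change plus a constant; climbing costs extra.
    climb = max(0, heights[nxt] - heights[cur])
    base = abs(heights[nxt] - heights[cur]) + 1
    # Precedence term: reversing vertical direction after a climb costs more.
    if prev is not None and heights[cur] > heights[prev] and heights[nxt] < heights[cur]:
        base += 2
    return base + climb
```

A real instance would replace the exhaustive search with a dedicated solver and the toy cost with a measured energy model, but the triple-argument cost captures why the problem is not a plain asymmetric TSP.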

2023

Decoding Reinforcement Learning for Newcomers

Authors
Neves, FS; Andrade, GA; Reis, MF; Aguiar, AP; Pinto, AM;

Publication
IEEE ACCESS

Abstract
The Reinforcement Learning (RL) paradigm is showing promising results as a general-purpose framework for solving decision-making problems (e.g., robotics, games, finance). The aim of this work is to reduce the learning barriers and inspire young students, researchers and educators to use RL as an obvious tool to solve robotics problems. This paper provides an intelligible step-by-step RL problem formulation and an easy-to-use interactive simulator for students at various levels (e.g., undergraduate, bachelor, master, doctorate), researchers and educators. The interactive tool facilitates familiarization with the key concepts of RL, its problem formulation and implementation. In this work, RL is used to solve a 2D robot navigation problem in which the robot must avoid collisions with obstacles while aiming to reach a goal point. A navigational problem is simple and convenient for educational purposes, since the outcome is unambiguous (e.g., the goal is reached or not, a collision happened or not). Due to a lack of open-source graphical interactive simulators in the field of RL, this paper combines theoretical exposition with an accessible practical tool to facilitate comprehension. The results demonstrated are produced by a Python script that is released as open-source to reduce the learning barriers in this innovative research topic in robotics.
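A navigation task of the kind described above can be condensed into a few dozen lines of tabular Q-learning. The sketch below is illustrative only: the grid, obstacles, rewards, and hyperparameters are invented for this example and are not taken from the paper's simulator.

```python
import random

ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down
OBSTACLES = {(1, 1), (2, 2)}
GOAL = (3, 3)

def train_q_learning(grid_w=4, grid_h=4, episodes=2000,
                     alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy grid: reach GOAL, avoid OBSTACLES."""
    rng = random.Random(seed)
    q = {}  # maps (state, action) -> value; missing entries default to 0.0
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q.get((s, a), 0.0))
            nxt = (s[0] + a[0], s[1] + a[1])
            if not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h):
                nxt, r, done = s, -1.0, False      # bumped into a wall
            elif nxt in OBSTACLES:
                r, done = -10.0, True              # collision: unambiguous failure
            elif nxt == GOAL:
                r, done = 10.0, True               # goal reached: unambiguous success
            else:
                r, done = -0.1, False              # small step penalty
            best_next = max(q.get((nxt, b), 0.0) for b in ACTIONS)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))
            s = nxt
            if done:
                break
    return q

def greedy_path(q, start=(0, 0), grid_w=4, grid_h=4, max_steps=20):
    """Follow the learned policy greedily and return the visited states."""
    s, path = start, [start]
    for _ in range(max_steps):
        a = max(ACTIONS, key=lambda a: q.get((s, a), 0.0))
        nxt = (s[0] + a[0], s[1] + a[1])
        if not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h) or nxt in OBSTACLES:
            break
        s = nxt
        path.append(s)
        if s == GOAL:
            break
    return path

q_table = train_q_learning()
```

The unambiguous terminal outcomes (goal vs. collision) make it easy for a newcomer to check by eye whether the learned greedy policy is sensible.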

2023

NEREON - An Underwater Dataset for Monocular Depth Estimation

Authors
Dionisio, JMM; Pereira, PNAAS; Leite, PN; Neves, FS; Tavares, JMRS; Pinto, AM;

Publication
OCEANS 2023 - LIMERICK

Abstract
Structures associated with offshore wind energy production require an arduous and cyclical operation and maintenance (O&M) procedure. Moreover, the harsh challenges introduced by sub-sea phenomena hamper visibility, considerably affecting underwater missions. The lack of quality 3D information within these environments hinders the applicability of autonomous solutions in close-range navigation, fault inspection and intervention tasks, since these have a very poor perception of the surrounding space. Deep learning techniques are widely used to solve these challenges in aerial scenarios, but developments remain limited in underwater environments due to the lack of publicly disseminated underwater information. This article presents a new underwater dataset, NEREON, containing both 2D and 3D data gathered within real underwater environments at the ATLANTIS Coastal Test Centre. This dataset is adequate for monocular depth estimation tasks, which can provide useful information during O&M missions. With this in mind, a benchmark comparing different deep learning approaches in the literature was conducted and is presented along with the NEREON dataset.
