
Details

  • Name

    Pedro Nuno
  • Role

    Research Assistant
  • Since

1st December 2018
Publications

2024

Enhancing Underwater Inspection Capabilities: A Learning-Based Approach for Automated Pipeline Visibility Assessment

Authors
Mina, J; Leite, PN; Carvalho, J; Pinho, L; Gonçalves, EP; Pinto, AM

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Underwater scenarios pose additional challenges to perception systems, as the imagery collected from sensors often suffers from limitations that hinder its practical usability. One crucial domain that relies on accurate underwater visibility assessment is underwater pipeline inspection. Manual assessment is impractical and time-consuming, emphasizing the need for automated algorithms. In this study, we focus on developing learning-based approaches to evaluate visibility in underwater environments. We explore various neural network architectures and evaluate them on data collected in real subsea scenarios. Notably, the ResNet18 model outperforms the others, achieving a testing accuracy of 93.5% in visibility evaluation. In terms of inference time, the fastest model is MobileNetV3 Small, producing a prediction in 42.45 ms. These findings represent significant progress in enabling unmanned marine operations and contribute to the advancement of autonomous underwater surveillance systems.

2023

NEREON - An Underwater Dataset for Monocular Depth Estimation

Authors
Dionisio, JMM; Pereira, PNAAS; Leite, PN; Neves, FS; Tavares, JMRS; Pinto, AM

Publication
OCEANS 2023 - LIMERICK

Abstract
Structures associated with offshore wind energy production require arduous and cyclical inspection and maintenance (O&M) procedures. Moreover, the harsh challenges introduced by subsea phenomena hamper visibility, considerably affecting underwater missions. The lack of quality 3D information in these environments hinders the applicability of autonomous solutions to close-range navigation, fault inspection and intervention tasks, since such solutions have very poor perception of the surrounding space. Deep learning techniques are widely used to solve these challenges in aerial scenarios, but developments for underwater environments remain limited due to the lack of publicly disseminated underwater data. This article presents a new underwater dataset, NEREON, containing both 2D and 3D data gathered in real underwater environments at the ATLANTIS Coastal Test Centre. This dataset is well suited to monocular depth estimation tasks, which can provide useful information during O&M missions. With this in mind, a benchmark comparing different deep learning approaches from the literature was conducted and is presented along with the NEREON dataset.

2021

A 3-D Lightweight Convolutional Neural Network for Detecting Docking Structures in Cluttered Environments

Authors
Pereira, MI; Leite, PN; Pinto, AM

Publication
MARINE TECHNOLOGY SOCIETY JOURNAL

Abstract
The maritime industry has been following the paradigm shift toward the automation of typically intelligent procedures, with research on autonomous surface vehicles (ASVs) seeing an upward trend in recent years. However, this type of vehicle cannot be deployed at full scale until a few challenges are solved. For example, the docking process of an ASV is still a demanding task that currently requires human intervention. This work proposes a volumetric convolutional neural network (vCNN) for the detection of docking structures from 3-D data, developed to balance precision and speed. Another contribution of this article is a set of synthetically generated data in the context of docking structures. The dataset is composed of LiDAR point clouds, stereo images, GPS, and Inertial Measurement Unit (IMU) information. Several robustness tests carried out with different levels of Gaussian noise demonstrated an average accuracy of 93.34%, with a deviation of 5.46% in the worst case. Furthermore, the system was fine-tuned and evaluated in a real commercial harbor, achieving an accuracy of over 96%. The developed classifier is able to detect different types of structures and runs faster than other state-of-the-art methods whose performance is established in real environments.

2021

Advancing Autonomous Surface Vehicles: A 3D Perception System for the Recognition and Assessment of Docking-Based Structures

Authors
Pereira, MI; Claro, RM; Leite, PN; Pinto, AM

Publication
IEEE ACCESS

Abstract
The automation of typically intelligent and decision-making processes in the maritime industry leads to fewer accidents and more cost-effective operations. However, many challenges must still be solved before fully autonomous systems can be employed. Artificial Intelligence (AI) has played a major role in this paradigm shift and shows great potential for solving some of these challenges, such as the docking process of an autonomous vessel. This work proposes a lightweight volumetric Convolutional Neural Network (vCNN) capable of recognizing different docking-based structures from 3D data in real time. A synthetic-to-real domain adaptation approach is also proposed to accelerate the training process of the vCNN. This approach makes it possible to greatly decrease the cost of data acquisition and the need for advanced computational resources. Extensive experiments demonstrate an accuracy of over 90% in the recognition of different docking structures using low-resolution sensors. The inference time of the system was about 120 ms on average. Results obtained using a real Autonomous Surface Vehicle (ASV) demonstrated that the vCNN trained with the synthetic-to-real domain adaptation approach is suitable for maritime mobile robots. This novel AI recognition method, combined with the use of 3D data, increases the robustness of the docking process against environmental constraints such as rain and fog, as well as insufficient lighting in nighttime operations.

2021

Exploiting Motion Perception in Depth Estimation Through a Lightweight Convolutional Neural Network

Authors
Leite, PN; Pinto, AM

Publication
IEEE ACCESS

Abstract
Understanding the surrounding 3D scene is of the utmost importance for many robotic applications. The rapid evolution of machine learning techniques has enabled impressive results when depth is extracted from a single image, but high-latency networks are required to achieve this performance, rendering them unusable for time-constrained applications. This article introduces NEON, a lightweight Convolutional Neural Network (CNN) for depth estimation designed to balance accuracy and inference time. Instead of relying solely on visual features, the proposed methodology exploits the motion-parallax effect to combine the apparent motion of pixels with texture. This research demonstrates that motion perception provides crucial insight into the magnitude of movement of each pixel, which also encodes cues about depth, since large displacements usually occur when objects are closer to the imaging sensor. NEON's performance is compared to relevant networks in terms of Root Mean Squared Error (RMSE), the percentage of correctly predicted pixels (delta(1)) and inference time, using the KITTI dataset. Experiments show that NEON is significantly more efficient than the current top-ranked network, producing predictions 12 times faster while achieving an average RMSE of 3.118 m and a delta(1) of 94.5%. Ablation studies demonstrate the relevance of tailoring the network to use motion-perception principles when estimating depth from image sequences, given that the quality of the estimated depth maps is similar to that of more computationally demanding state-of-the-art networks. Therefore, this research proposes a network that can be integrated into robotic applications where computational resources and processing times are important constraints, enabling tasks such as obstacle avoidance, object recognition and robotic grasping.