About

Andry Maykol Pinto completed the Doctoral Programme in Electrical and Computer Engineering, with a thesis on Robotics, at the Faculty of Engineering of the University of Porto in 2014. He obtained his MSc in Electrical and Computer Engineering from the same institution in 2010. He currently works as a Senior Researcher at the Centre for Robotics and Autonomous Systems of INESC TEC and as an Assistant Professor at the Faculty of Engineering of the University of Porto.


He is the Principal Investigator of several research projects on robotic solutions for O&M, funded by national and European programmes. He leads a team of more than 15 researchers and coordinates an ICT/H2020 project in maritime robotics. His research has yielded numerous publications in high-impact journals in areas related to computer vision, mobile robotics, autonomous systems, multidimensional perception, sensor fusion and underwater vision.

Topics of interest
Details

  • Name

    Andry Maykol Pinto
  • Position

    Investigador Sénior
  • Since

    01 February 2011
  • Nationality

    Portugal
  • Contacts

    +351228340554
    andry.m.pinto@inesctec.pt
Publications

2024

Fusing heterogeneous tri-dimensional information for reconstructing submerged structures in harsh sub-sea environments

Authors
Leite, PN; Pinto, AM;

Publication
INFORMATION FUSION

Abstract
Exploiting stronger winds at offshore farms leads to a cyclical need for maintenance due to the harsh maritime conditions. While autonomous vehicles are a promising solution for O&M procedures, sub-sea phenomena induce severe data degradation that hinders the vessel's 3D perception. This article demonstrates a hybrid underwater imaging system that is capable of retrieving tri-dimensional information: dense and textured Photogrammetric Stereo (PS) point clouds and multiple accurate sets of points through Light Stripe Ranging (LSR), which are combined into a single dense and accurate representation. Two novel fusion algorithms are introduced in this manuscript. A Joint Masked Regression (JMR) methodology propagates sparse LSR information towards the PS point cloud, exploiting homogeneous regions around each beam projection. Regression curves then correlate depth readings from both inputs to correct the stereo-based information. On the other hand, the learning-based solution (RHEA) follows an early-fusion approach where features are conjointly learned from a coupled representation of both 3D inputs. A synthetic-to-real training scheme is employed to bypass domain-adaptation stages, enabling direct deployment in underwater contexts. Evaluation is conducted through extensive trials in simulation, controlled underwater environments, and within a real application at the ATLANTIS Coastal Testbed. Both methods estimate improved output point clouds, with RHEA achieving an average RMSE of 0.0097 m, a 52.45% improvement compared to the PS input. Performance with real underwater information proves that RHEA is robust in dealing with degraded input information; JMR is more affected by missing information, excelling when the LSR data provides a complete representation of the scenario, and struggling otherwise.
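The regression step described in the abstract — correlating sparse but accurate LSR depths with the co-located stereo (PS) depths, then using the fitted relation to correct the dense map — can be sketched as follows. This is an illustrative simplification, not the published JMR algorithm; the function name and the choice of a linear model are assumptions.

```python
import numpy as np

def correct_stereo_depth(ps_depth, lsr_idx, lsr_depth):
    """Correct a dense stereo (PS) depth map using sparse, accurate
    LSR depth samples. Hypothetical sketch: fits a single linear
    relation between the two sensors' depth readings."""
    # Stereo depths at the pixels where LSR provides accurate readings
    ps_at_lsr = ps_depth.flat[lsr_idx]
    # Regression curve correlating stereo depth with LSR depth
    a, b = np.polyfit(ps_at_lsr, lsr_depth, deg=1)
    # Apply the correction to the whole dense depth map
    return a * ps_depth + b
```

In practice a single global line is too crude for real sub-sea data; the abstract indicates that JMR works per homogeneous region around each beam projection.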

2024

Reinforcement learning based robot navigation using illegal actions for autonomous docking of surface vehicles in unknown environments

Authors
Pereira, MI; Pinto, AM;

Publication
Engineering Applications of Artificial Intelligence

Abstract
Autonomous Surface Vehicles (ASVs) are bound to play a fundamental role in the maintenance of offshore wind farms. Robust navigation for inspection vehicles should take into account the operation of docking within a harbouring structure, which is a critical and still unexplored maneuver. This work proposes an end-to-end docking approach for ASVs, based on Reinforcement Learning (RL), which teaches an agent to tackle collision-free navigation towards a target pose that allows the berthing of the vessel. The developed research presents a methodology that introduces the concept of illegal actions to facilitate the vessel's exploration during the learning process. This method improves the adopted Actor-Critic (AC) framework by accelerating the agent's optimization by approximately 38.02%. A set of comprehensive experiments demonstrate the accuracy and robustness of the presented method in scenarios with simulated environmental constraints (Beaufort Scale and Douglas Sea Scale), and a diversity of docking structures. Validation with two different real ASVs in both controlled and real environments demonstrates the ability of this method to enable safe docking maneuvers without prior knowledge of the scenario.
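The "illegal actions" idea — pruning actions that would violate a constraint (such as a collision course) before the agent samples from its policy — can be sketched as a masked softmax over the policy logits. The function below is a hypothetical illustration of that general technique, not the paper's implementation.

```python
import numpy as np

def masked_action_probs(logits, illegal_mask):
    """Return an action distribution with illegal actions removed.
    `illegal_mask[i]` is True when action i is forbidden (e.g. it
    would steer the vessel into the docking structure)."""
    logits = np.where(illegal_mask, -np.inf, logits)  # forbid illegal actions
    # Numerically stable softmax over the remaining legal actions
    exp = np.exp(logits - logits[~illegal_mask].max())
    return exp / exp.sum()
```

Because forbidden actions receive zero probability, the agent never wastes exploration on transitions that are known a priori to be unsafe, which is consistent with the reported speed-up in optimization.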

2024

Nautilus: An autonomous surface vehicle with a multilayer software architecture for offshore inspection

Authors
Campos, DF; Goncalves, EP; Campos, HJ; Pereira, MI; Pinto, AM;

Publication
JOURNAL OF FIELD ROBOTICS

Abstract
The adoption of robotic solutions for inspection tasks in challenging environments is becoming increasingly prevalent, particularly in the offshore wind energy industry. This trend is driven by the critical need to safeguard the integrity and operational efficiency of offshore infrastructure. Consequently, the design of inspection vehicles must comply with rigorous requirements established by the offshore Operation and Maintenance (O&M) industry. This work presents the design of an autonomous surface vehicle (ASV), named Nautilus, specifically tailored to withstand the demanding conditions of offshore O&M scenarios. The design encompasses both hardware and software architectures, ensuring Nautilus's robustness and adaptability to the harsh maritime environment. It presents a compact hull capable of operating in moderate sea states (wave height up to 2.5 m), with a modular hardware and software architecture that is easily adapted to the mission requirements. It has a perception payload and communication system for edge and real-time computing, communicates with a Shore Control Center and allows beyond visual line-of-sight operations. The Nautilus software architecture aims to provide the necessary flexibility for different mission requirements to offer a unified software architecture for O&M operations. Nautilus's capabilities were validated through the professional testing process of the ATLANTIS Test Center, involving operations in both near-real and real-world environments. This validation process culminated in Nautilus reaching Technology Readiness Level 8 and becoming the first ASV to execute autonomous tasks at a floating offshore wind farm located in the Atlantic.

2023

ArTuga: A novel multimodal fiducial marker for aerial robotics

Authors
Claro, RM; Silva, DB; Pinto, AM;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
For Vertical Take-Off and Landing Unmanned Aerial Vehicles (VTOL UAVs) to operate autonomously and effectively, it is mandatory to endow them with precise landing abilities. The UAV has to be able to detect the landing target and to perform the landing maneuver without compromising its own safety and the integrity of its surroundings. However, current UAVs do not present the required robustness and reliability for precise landing in highly demanding scenarios, particularly due to their inadequacy to perform accordingly under challenging lighting and weather conditions, including in day and night operations. This work proposes a multimodal fiducial marker, named ArTuga (Augmented Reality Tag for Unmanned vision-Guided Aircraft), capable of being detected by a heterogeneous perception system for accurate and precise landing in challenging environments and daylight conditions. This research combines photometric and radiometric information by proposing a real-time multimodal fusion technique that ensures a robust and reliable detection of the landing target in severe environments. Experimental results using a real multicopter UAV show that the system was able to detect the proposed marker in adverse conditions (such as at different heights, with intense sunlight and in dark environments). The obtained average accuracy for position estimation at 1 m height was 0.0060 m with a standard deviation of 0.0003 m. Precise landing tests obtained an average deviation of 0.027 m from the proposed marker, with a standard deviation of 0.026 m. These results demonstrate the relevance of the proposed system for precise landing in adverse conditions, such as in day and night operations with harsh weather conditions.

2023

End-to-End Detection of a Landing Platform for Offshore UAVs Based on a Multimodal Early Fusion Approach

Authors
Neves, FS; Claro, RM; Pinto, AM;

Publication
SENSORS

Abstract
A perception module is a vital component of a modern robotic system. Vision, radar, thermal, and LiDAR are the most common choices of sensors for environmental awareness. Relying on singular sources of information is prone to be affected by specific environmental conditions (e.g., visual cameras are affected by glare or dark environments). Thus, relying on different sensors is an essential step to introduce robustness against various environmental conditions. Hence, a perception system with sensor fusion capabilities produces the desired redundant and reliable awareness critical for real-world systems. This paper proposes a novel early fusion module that is reliable against individual cases of sensor failure when detecting an offshore maritime platform for UAV landing. The model explores the early fusion of a still unexplored combination of visual, infrared, and LiDAR modalities. The contribution is described by suggesting a simple methodology that intends to facilitate the training and inference of a lightweight state-of-the-art object detector. The early-fusion-based detector achieves solid detection recalls of up to 99% for all cases of sensor failure and extreme weather conditions such as glare, darkness, and fog, with real-time inference times below 6 ms.
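Early fusion, in the general sense used above, concatenates the modalities into one input tensor before any feature extraction, so a single detector backbone learns joint features across sensors. The sketch below illustrates that idea only; the channel layout and preprocessing are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def early_fusion(rgb, infrared, lidar_depth):
    """Stack visual, infrared and LiDAR-derived channels into a single
    multimodal tensor (H x W x 5) fed to one detector backbone.
    Assumes all modalities are already co-registered to the same grid."""
    ir = infrared[..., None] if infrared.ndim == 2 else infrared
    d = lidar_depth[..., None] if lidar_depth.ndim == 2 else lidar_depth
    # Channel-wise concatenation: 3 (RGB) + 1 (IR) + 1 (depth)
    return np.concatenate([rgb, ir, d], axis=-1)
```

A practical consequence, consistent with the abstract's robustness claim, is that when one sensor fails its channels simply carry degenerate values while the backbone can still exploit the remaining modalities.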

Supervised theses

2023

Robust Perception System for Autonomous Precise Landing of UAVs in Offshore Wind Farms

Author
Rafael Marques Claro

Institution
UP-FEUP

2023

Perception-based Autonomous Underwater Vehicle Navigation for Close-range Inspection of Offshore Structures

Author
Renato Jorge Moreira Silva

Institution
UP-FEUP

2023

Ego Motion from Video Data in Autonomous Vehicles

Author
João Basto do Rosário

Institution
UP-FEUP

2023

An Intelligent Retention System for Unmanned Aerial Vehicles on a Dynamic Platform

Author
Lourenço Sousa de Pinho

Institution
UP-FEUP

2023

Edge Intelligence for Deep-sea Robotic Seafloor Perception and Awareness

Author
Gabriel da Silva Martins Loureiro

Institution
UP-FEUP