2024
Authors
Martins, J; Pereira, P; Campilho, R; Pinto, A;
Publication
MESA 2024 - 20th International Conference on Mechatronic, Embedded Systems and Applications, Proceedings
Abstract
Because the maritime environment is difficult to access, cooperation between robotic platforms operating in different domains provides numerous advantages for Operations and Maintenance (O&M) missions. The nest Uncrewed Surface Vehicle (USV) is equipped with a parallel platform that serves as a landing pad for Uncrewed Aerial Vehicle (UAV) landings in dynamic sea states. This work proposes a methodology for short-term forecasting of wave behaviour: Fast Fourier Transforms (FFT) and a low-pass Butterworth filter remove noise from the Inertial Measurement Unit (IMU) readings, and an Auto-Regressive (AR) model produces the forecast, showing good results within an almost 10-second window. These predictions are then used in a Model Predictive Control (MPC) approach to optimize trajectory planning of the landing pad roll and pitch in order to increase horizontality, consistently mitigating around 80% of the wave-induced motion. ©2024 IEEE.
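The filtering and forecasting steps described in the abstract (a low-pass Butterworth filter on the IMU signal, then an AR model fitted to the filtered data) can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, sampling rate, filter order, cutoff, and AR order are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, cutoff_hz, fs_hz, order=4):
    # 4th-order Butterworth low-pass, applied forward and backward
    # (zero-phase) so the filtered signal is not delayed.
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, signal)

def fit_ar(x, p):
    # Least-squares fit of an AR(p) model:
    #   x[t] ~ coeffs[0]*x[t-1] + ... + coeffs[p-1]*x[t-p]
    n = len(x)
    X = np.column_stack([x[p - k : n - k] for k in range(1, p + 1)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def ar_forecast(x, coeffs, steps):
    # Iterated one-step-ahead prediction: each forecast is fed back
    # as input for the next step.
    p = len(coeffs)
    hist = list(x)
    preds = []
    for _ in range(steps):
        lags = np.array(hist[-1 : -p - 1 : -1])  # x[t-1], ..., x[t-p]
        nxt = float(coeffs @ lags)
        preds.append(nxt)
        hist.append(nxt)
    return np.array(preds)

if __name__ == "__main__":
    # Toy wave: 0.2 Hz roll oscillation sampled at 10 Hz, plus noise.
    fs = 10.0
    t = np.arange(0, 60, 1 / fs)
    clean = np.sin(2 * np.pi * 0.2 * t)
    noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(len(t))
    filtered = lowpass(noisy, cutoff_hz=0.5, fs_hz=fs)
    coeffs = fit_ar(filtered, p=10)
    forecast = ar_forecast(filtered, coeffs, steps=50)  # ~5 s ahead
```

A roughly 10-second horizon, as reported in the abstract, would correspond to `steps = 10 * fs` at the assumed sampling rate.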
2024
Authors
Neves, S; Branco, M; Pereira, I; Claro, M; Pinto, M;
Publication
MESA 2024 - 20th International Conference on Mechatronic, Embedded Systems and Applications, Proceedings
Abstract
In the field of autonomous Unmanned Aerial Vehicle (UAV) landing, conventional approaches fall short of delivering both the required precision and resilience against environmental disturbances. Learning-based algorithms can offer promising solutions by leveraging their ability to learn intelligent behaviour from data. On one hand, this paper introduces a novel multimodal transformer-based Deep Learning detector that provides reliable positioning for precise autonomous landing. It surpasses standard approaches by addressing individual sensor limitations, achieving high reliability even under diverse weather and sensor failure conditions. It was rigorously validated across varying environments, achieving high true positive rates and average precisions of up to 90%. On the other hand, a Reinforcement Learning (RL) decision-making model based on a Deep Q-Network (DQN) rationale is proposed. Initially trained in simulation, its adaptive behaviour is successfully transferred and validated in a real outdoor scenario. Furthermore, this approach demonstrates rapid inference times of approximately 5 ms, validating its applicability on edge devices. ©2024 IEEE.
2024
Authors
Claro, RM; Neves, FSP; Pinto, AMG;
Publication
Abstract
2024
Authors
Agostinho, L; Pereira, D; Hiolle, A; Pinto, A;
Publication
ROBOTICS AND AUTONOMOUS SYSTEMS
Abstract
Ego-motion estimation plays a critical role in autonomous driving systems by providing accurate and timely information about the vehicle's position and orientation. To achieve high levels of accuracy and robustness, it is essential to leverage a range of sensor modalities to account for highly dynamic and diverse scenes, and consequent sensor limitations. In this work, we introduce TEFu-Net, a Deep-Learning-based late fusion architecture that combines multiple ego-motion estimates from diverse data modalities, including stereo RGB, LiDAR point clouds and GNSS/IMU measurements. Our approach is non-parametric and scalable, making it adaptable to different sensor set configurations. By leveraging a Long Short-Term Memory (LSTM), TEFu-Net produces reliable and robust spatiotemporal ego-motion estimates. This capability allows it to filter out erroneous input measurements, ensuring the accuracy of the car's motion calculations over time. Extensive experiments show an average accuracy increase of 63% over TEFu-Net's input estimators and on-par results with the state-of-the-art in real-world driving scenarios. We also demonstrate that our solution can achieve accurate estimates under sensor or input failure. Therefore, TEFu-Net enhances the accuracy and robustness of ego-motion estimation in real-world driving scenarios, particularly in challenging conditions such as cluttered environments, tunnels, dense vegetation, and unstructured scenes. As a result of these enhancements, it bolsters the reliability of autonomous driving functions.
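TEFu-Net itself is an LSTM network; as a far simpler illustration of the late-fusion idea it builds on (combining several per-sensor ego-motion estimates while rejecting an erroneous one), a component-wise median fusion can be sketched as follows. This is not the paper's method; the function name, the 6-DoF increment layout, and the sensor values are illustrative assumptions.

```python
import numpy as np

def fuse_ego_motion(estimates):
    """Robustly fuse per-sensor ego-motion estimates.

    estimates: (n_sensors, 6) array of per-frame increments
    [dx, dy, dz, droll, dpitch, dyaw]. The component-wise median
    discards a failed sensor's outlier reading as long as the
    majority of sensors agree.
    """
    return np.median(np.asarray(estimates), axis=0)

if __name__ == "__main__":
    # Stereo VO and LiDAR odometry agree; GNSS/IMU has failed.
    estimates = np.array([
        [1.00, 0.00, 0.0, 0.0, 0.0, 0.01],   # stereo RGB
        [1.02, 0.01, 0.0, 0.0, 0.0, 0.01],   # LiDAR
        [50.0, 9.00, 0.0, 0.0, 0.0, 2.00],   # GNSS/IMU (faulty)
    ])
    fused = fuse_ego_motion(estimates)
```

Unlike this static median, the learned LSTM fusion described in the abstract can also exploit temporal context, weighting sensors by how consistent they have been over time.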