2021
Authors
Agostinho, LR; Ricardo, NC; Silva, RJ; Pinto, AM;
Publication
2021 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)
Abstract
In recent years, autonomous underwater vehicles (AUVs) have gained prominence in a wide variety of underwater mission applications. The most common solution for recharging their batteries still implies removing them from the water, which is exceptionally costly. The use of Inductive Power Transfer (IPT) technologies mitigates the associated costs and extends the vehicles' operation time. Consequently, a prototype was developed with the objective of integrating commercially available IPT technologies while remaining usable by most AUVs. The prototype consists of a funnel structure and its counterpart, intended to be fixed to a docking station and to the AUV, respectively. When coupled, it enables the batteries to be recharged by electromagnetic induction. Energy transmission tests yielded encouraging results, with this particular solution achieving over 90% efficiency during underwater experiments. The next objective is to develop a commercial version of the prototype that allows a direct, practical and effective use of wireless charging technologies within this particular scenario.
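For reference, the efficiency figure reported above corresponds to the ratio of power delivered to the receiver over power drawn by the transmitter; the short sketch below uses purely illustrative numbers, not measurements from the paper.

```python
# Illustrative only: power-transfer efficiency as the ratio of received to
# transmitted power. The numbers below are placeholders, not measured values
# from the paper.
def ipt_efficiency(p_transmitted_w: float, p_received_w: float) -> float:
    """Return the fraction of transmitted power delivered to the load."""
    return p_received_w / p_transmitted_w

if __name__ == "__main__":
    eta = ipt_efficiency(p_transmitted_w=100.0, p_received_w=92.5)
    print(f"IPT efficiency: {eta:.1%}")  # e.g. 92.5%, consistent with >90%
```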
2021
Authors
Pereira, MI; Leite, PN; Pinto, AM;
Publication
MARINE TECHNOLOGY SOCIETY JOURNAL
Abstract
The maritime industry has been following the paradigm shift toward the automation of typically intelligent procedures, with research regarding autonomous surface vehicles (ASVs) having seen an upward trend in recent years. However, this type of vehicle cannot be employed on a full scale until a few challenges are solved. For example, the docking process of an ASV is still a demanding task that currently requires human intervention. This research work proposes a volumetric convolutional neural network (vCNN) for the detection of docking structures from 3-D data, developed according to a balance between precision and speed. Another contribution of this article is a synthetically generated dataset for the context of docking structures. The dataset is composed of LiDAR point clouds, stereo images, GPS, and Inertial Measurement Unit (IMU) information. Several robustness tests carried out with different levels of Gaussian noise demonstrated an average accuracy of 93.34% and a deviation of 5.46% for the worst case. Furthermore, the system was fine-tuned and evaluated in a real commercial harbor, achieving an accuracy of over 96%. The developed classifier is able to detect different types of structures and works faster than other state-of-the-art methods that establish their performance in real environments.
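The abstract does not specify the input encoding of the vCNN; volumetric networks typically consume occupancy grids built from point clouds, and the sketch below illustrates such a voxelization step under assumed grid resolution and spatial bounds (it is not the authors' preprocessing code).

```python
# Hypothetical voxelization of a LiDAR point cloud into a binary occupancy
# grid, the usual input representation for a volumetric CNN. Grid resolution
# and spatial bounds are illustrative assumptions.
import numpy as np

def voxelize(points: np.ndarray, grid_size: int = 32,
             bounds: float = 20.0) -> np.ndarray:
    """Map (N, 3) points in [-bounds, bounds] metres to a (grid_size,)*3 grid."""
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    # Normalize coordinates to voxel indices and discard points outside the volume.
    idx = ((points + bounds) / (2 * bounds) * grid_size).astype(int)
    valid = np.all((idx >= 0) & (idx < grid_size), axis=1)
    idx = idx[valid]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # mark occupied voxels
    return grid

# Example: 10,000 random points stand in for a LiDAR scan.
cloud = np.random.uniform(-20.0, 20.0, size=(10_000, 3))
occupancy = voxelize(cloud)
print(occupancy.shape, occupancy.sum())  # (32, 32, 32) and the occupied-voxel count
```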
2021
Authors
Pereira, MI; Claro, RM; Leite, PN; Pinto, AM;
Publication
IEEE ACCESS
Abstract
The automation of typically intelligent and decision-making processes in the maritime industry leads to fewer accidents and more cost-effective operations. However, there are still many challenges to solve before fully autonomous systems can be employed. Artificial Intelligence (AI) has played a major role in this paradigm shift and shows great potential for solving some of these challenges, such as the docking process of an autonomous vessel. This work proposes a lightweight volumetric Convolutional Neural Network (vCNN) capable of recognizing different docking-based structures from 3D data in real time. A synthetic-to-real domain adaptation approach is also proposed to accelerate the training process of the vCNN. This approach makes it possible to greatly decrease the cost of data acquisition and the need for advanced computational resources. Extensive experiments demonstrate an accuracy of over 90% in the recognition of different docking structures, using low-resolution sensors. The inference time of the system was about 120 ms on average. Results obtained with a real Autonomous Surface Vehicle (ASV) demonstrated that the vCNN trained with the synthetic-to-real domain adaptation approach is suitable for maritime mobile robots. This novel AI recognition method, combined with the use of 3D data, contributes to an increased robustness of the docking process regarding environmental constraints, such as rain and fog, as well as insufficient lighting in nighttime operations.
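The published architecture is not reproduced here; the following is a minimal sketch, assuming PyTorch, of what a lightweight volumetric CNN classifier over voxel grids could look like. The layer widths, grid size and class count are illustrative assumptions, not the authors' vCNN.

```python
# Illustrative lightweight volumetric CNN over 32x32x32 occupancy grids.
# Layer sizes and the class count are assumptions, not the published vCNN.
import torch
import torch.nn as nn

class TinyVCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                           # 32 -> 16
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                           # 16 -> 8
        )
        self.classifier = nn.Linear(16 * 8 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyVCNN()
grid = torch.rand(1, 1, 32, 32, 32)   # batch of one voxel grid
logits = model(grid)
print(logits.shape)                    # torch.Size([1, 4])
```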
2020
Authors
Pereira, MI; Leite, PN; Pinto, AM;
Publication
GLOBAL OCEANS 2020: SINGAPORE - U.S. GULF COAST
Abstract
In recent years, research concerning the operation of Autonomous Surface Vehicles (ASVs) has seen an upward trend, although the full-scale application of this type of vehicle still encounters diverse limitations. In particular, the docking and undocking processes of an ASV are tasks that currently require human intervention. Aiming to take one step further towards enabling a vessel to dock autonomously, this article presents a Deep Learning approach to detect a docking structure in the environment surrounding the vessel. The work also included the acquisition of a dataset composed of LiDAR scans and RGB images, along with IMU and GPS information, obtained in simulation. The developed network achieved an accuracy of 95.99% and is robust to several degrees of Gaussian noise, with an average accuracy of 93.34% and a deviation of 5.46% for the worst case.
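Robustness tests of this kind typically perturb the evaluation data with zero-mean Gaussian noise of increasing standard deviation and record the resulting accuracy; the sketch below outlines such a sweep, where the classify function, the evaluation set and the noise levels are placeholders rather than details taken from the paper.

```python
# Hypothetical robustness sweep: perturb point clouds with zero-mean Gaussian
# noise of increasing standard deviation and record classification accuracy.
# `classify`, `test_clouds` and `test_labels` stand in for the trained network
# and the evaluation set; they are not from the paper.
import numpy as np

def accuracy_under_noise(classify, clouds, labels, sigma: float) -> float:
    correct = 0
    for cloud, label in zip(clouds, labels):
        noisy = cloud + np.random.normal(0.0, sigma, size=cloud.shape)
        correct += int(classify(noisy) == label)
    return correct / len(clouds)

# Example sweep over a few noise levels (standard deviation in metres):
# for sigma in (0.0, 0.05, 0.1, 0.2):
#     print(sigma, accuracy_under_noise(classify, test_clouds, test_labels, sigma))
```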
2021
Authors
Leite, PN; Pinto, AM;
Publication
IEEE ACCESS
Abstract
Understanding the surrounding 3D scene is of the utmost importance for many robotic applications. The rapid evolution of machine learning techniques has enabled impressive results when depth is extracted from a single image. However, high-latency networks are required to achieve this performance, rendering them unusable for time-constrained applications. This article introduces NEON, a lightweight Convolutional Neural Network (CNN) for depth estimation designed to balance accuracy and inference time. Instead of solely focusing on visual features, the proposed methodology exploits the Motion-Parallax effect to combine the apparent motion of pixels with texture. This research demonstrates that motion perception provides crucial insight about the magnitude of movement for each pixel, which also encodes cues about depth, since large displacements usually occur when objects are closer to the imaging sensor. NEON's performance is compared to relevant networks in terms of Root Mean Squared Error (RMSE), the percentage of correctly predicted pixels (delta(1)) and inference times, using the KITTI dataset. Experiments show that NEON is significantly more efficient than the current top-ranked network, producing predictions 12 times faster while achieving an average RMSE of 3.118 m and a delta(1) of 94.5%. Ablation studies demonstrate the relevance of tailoring the network to use motion perception principles in estimating depth from image sequences, considering that the effectiveness and quality of the estimated depth map are similar to those of more computationally demanding state-of-the-art networks. Therefore, this research proposes a network that can be integrated in robotic applications where computational resources and processing times are important constraints, enabling tasks such as obstacle avoidance, object recognition and robotic grasping.
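RMSE and delta(1) are standard depth-evaluation metrics on KITTI; the sketch below shows how they are commonly computed over valid ground-truth pixels, as a generic reference rather than NEON's evaluation code.

```python
# Standard monocular depth metrics as used on KITTI: root mean squared error
# (in metres) and delta_1, the fraction of pixels whose ratio to ground truth
# is within 1.25. Generic reference implementation, not NEON's code.
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray):
    mask = gt > 0                              # evaluate only valid depth pixels
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)
    return rmse, delta1

# Toy example with random depths in metres (KITTI-sized image).
gt = np.random.uniform(1.0, 80.0, size=(375, 1242))
pred = gt * np.random.uniform(0.9, 1.1, size=gt.shape)
print(depth_metrics(pred, gt))
```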
2021
Authors
Pinto, AM; Marques, JVA; Campos, DF; Abreu, N; Matos, A; Jussi, M; Berglund, R; Halme, J; Tikka, P; Formiga, J; Verrecchia, C; Langiano, S; Santos, C; Sa, N; Stoker, JJ; Calderoni, F; Govindaraj, S; But, A; Gale, L; Ribas, D; Hurtos, N; Vidal, E; Ridao, P; Chieslak, P; Palomeras, N; Barberis, S; Aceto, L;
Publication
Oceans Conference Record (IEEE)
Abstract
The ATLANTIS project aims to establish a pioneering pilot infrastructure for demonstrating key enabling robotic technologies for the inspection and maintenance of offshore wind farms. The pilot will be implemented in Viana do Castelo, Portugal, and will allow for the testing, validation and demonstration of technologies across a range of technology readiness levels, in near-real and real environments. The demonstration of robotic technologies can promote the transition from traditional inspection and maintenance methodologies towards automated robotic strategies that remove or reduce the need for a human in the loop, reducing costs and improving the safety of interventions. Eight scenarios, split into four showcases, will be used to determine the developments required for robotic integration and to demonstrate the applicability in inspection and maintenance processes. The scenarios considered were identified by end-users as key areas for robotics.