2020
Authors
Teixeira, B; Silva, H; Matos, A; Silva, E;
Publication
IEEE ACCESS
Abstract
This paper addresses Visual Odometry (VO) estimation in challenging underwater scenarios. Robot visual-based navigation faces several additional difficulties in the underwater context, which severely hinder both its robustness and the possibility of persistent autonomy in underwater mobile robots using visual perception capabilities. In this work, some of the most renowned VO and Visual Simultaneous Localization and Mapping (v-SLAM) frameworks are tested in complex underwater environments, assessing the extent to which they are able to perform accurately and reliably in robotic operational mission scenarios. The fundamental issue of precision, reliability and robustness across multiple operational scenarios, coupled with the rising predominance of Deep Learning architectures in several Computer Vision application domains, has prompted a great volume of recent research on Deep Learning architectures tailored for visual odometry estimation. In this work, the performance and accuracy of Deep Learning methods in the underwater context are also benchmarked and compared to classical methods. Additionally, an extension of current work is proposed, in the form of a visual-inertial sensor fusion network aimed at correcting visual odometry estimate drift. Anchored on an inertial supervision learning scheme, our network managed to improve upon trajectory estimates, producing estimates that are both metrically better and more consistent with the true trajectory shape.
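The abstract does not detail the fusion network itself, but the underlying idea of correcting visual odometry drift with an inertial prediction can be illustrated with a minimal complementary-filter sketch (a deliberate simplification, not the paper's learned architecture; `alpha`, the 2D displacement model and all values are illustrative assumptions):

```python
def fuse_step(vo_delta, imu_delta, alpha=0.5):
    """Blend a visual-odometry displacement with an inertially
    predicted one; alpha weights the visual estimate."""
    return tuple(alpha * v + (1.0 - alpha) * i
                 for v, i in zip(vo_delta, imu_delta))

def integrate(deltas):
    """Accumulate per-step (dx, dy) displacements into a trajectory."""
    x = y = 0.0
    traj = [(x, y)]
    for dx, dy in deltas:
        x += dx
        y += dy
        traj.append((x, y))
    return traj

# Toy data: visual odometry with a constant forward drift,
# inertial prediction without it.
vo = [(1.1, 0.0)] * 5    # drifting visual estimates
imu = [(1.0, 0.0)] * 5   # drift-free inertial prediction
fused = [fuse_step(v, i) for v, i in zip(vo, imu)]
print(integrate(fused)[-1])  # end point lies between the two estimates
```

A learned fusion network effectively replaces the fixed `alpha` with a data-driven, state-dependent correction, but the drift-reduction goal is the same.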
2020
Authors
Pinto, AM; Matos, AC;
Publication
INFORMATION FUSION
Abstract
This article presents an innovative hybrid imaging system that provides dense and accurate 3D information from harsh underwater environments. The proposed system is called MARESye and captures the advantages of both active and passive imaging methods: multiple light stripe range (LSR) and a photometric stereo (PS) technique, respectively. This hybrid approach fuses information from these techniques through a data-driven formulation to extend the measurement range and to produce high-density 3D estimations in dynamic underwater environments. The system is driven by a gating timing approach to reduce the impact of several photometric issues related to underwater environments, such as diffuse reflection, water turbidity and non-uniform illumination. Moreover, MARESye synchronizes and matches the acquisition of images with sub-sea phenomena, which leads to clear pictures (with a high signal-to-noise ratio). Experiments conducted in realistic environments showed that MARESye is able to provide reliable, high-density and accurate 3D data. Moreover, the experiments demonstrated that the performance of MARESye is less affected by sub-sea conditions, since its SSIM index was 0.655 in high-turbidity waters, while conventional imaging techniques obtained 0.328 in similar testing conditions. Therefore, the proposed system represents a valuable contribution to the inspection of maritime structures as well as to the navigation procedures of autonomous underwater vehicles during close-range operations.
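The SSIM index used to compare MARESye against conventional imaging can be sketched in its single-window form (a simplification of the usual sliding-window SSIM; the flat-list image representation and sample intensities are illustrative assumptions):

```python
def ssim(a, b, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM between two equal-size grayscale images given as
    flat lists of 0-255 intensities (single-window simplification)."""
    n = len(a)
    mu_a = sum(a) / n
    mu_b = sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

img = [10, 50, 90, 130, 170, 210]
print(ssim(img, img))                    # identical images -> 1.0
print(ssim(img, [v // 2 for v in img]))  # darkened copy scores lower
```

A score near 1.0 means the degraded image preserves the structure of the reference; turbidity pushes the score toward 0, which is the effect the reported 0.655 vs 0.328 comparison quantifies.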
2019
Authors
Pessoa L.M.; Duarte C.; Salgado H.M.; Correia V.; Ferreira B.; Cruz N.A.; Matos A.;
Publication
OCEANS 2019 - Marseille
Abstract
In this paper we evaluate the long-term deployment feasibility of a large-scale network of abandoned underwater sensors, where power is provided by autonomous underwater vehicles (AUVs) in periodic visits.
2020
Authors
Figueiredo, AB; Matos, AC;
Publication
APPLIED SCIENCES-BASEL
Abstract
This paper presents a high-performance (computationally lightweight) monocular vision-based system for a hovering Autonomous Underwater Vehicle (AUV) in the context of an autonomous docking process: the MViDO (Monocular Vision-based Docking Operation aid) system. The MViDO consists of three sub-modules: a pose estimator, a tracker and a guidance sub-module. The system is based on a single camera and a target with three spherical color markers that signals the docking station. The MViDO system allows the pose estimation of the three color markers even in situations of temporary occlusion, and it also rejects outliers and false detections. This paper also describes the design and implementation of the MViDO guidance module for the docking manoeuvres. We address the problem of driving the AUV to a docking station with the help of the visual markers detected by the on-board camera, and show that by adequately choosing the references for the linear degrees of freedom of the AUV, the AUV is conducted to the dock while keeping those markers in the field of view of the on-board camera. The main concepts behind the MViDO are provided and a complete characterization of the developed system is presented from both the formal and experimental points of view. To test and evaluate the MViDO detector and pose estimator modules, we created a ground truth setup. To test and evaluate the tracker module we used the MARES AUV and the designed target in a four-meter tank. The performance of the proposed guidance law was tested in Simulink/MATLAB.
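The field-of-view constraint the guidance law enforces can be illustrated with a minimal pinhole-projection check (a hedged sketch, not the MViDO implementation; the camera intrinsics, image size and marker positions are illustrative assumptions):

```python
def project(point, f=400.0, cx=320.0, cy=240.0):
    """Project a 3D point in the camera frame (x right, y down,
    z forward) with a pinhole model; returns pixel coordinates."""
    x, y, z = point
    return (f * x / z + cx, f * y / z + cy)

def in_fov(px, width=640, height=480):
    """True if a projected pixel falls inside the image bounds."""
    u, v = px
    return 0.0 <= u < width and 0.0 <= v < height

# Three markers spaced laterally, two metres ahead of the camera.
markers = [(-0.2, 0.0, 2.0), (0.0, 0.0, 2.0), (0.2, 0.0, 2.0)]
pixels = [project(m) for m in markers]
print(all(in_fov(p) for p in pixels))  # all markers visible
```

A guidance law of the kind described would choose linear-motion references so that this visibility predicate stays true along the whole approach trajectory.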
2019
Authors
Nunes A.; Gaspar A.R.; Matos A.;
Publication
OCEANS 2019 - Marseille
Abstract
Nowadays, ocean exploration is far from complete, and the development of suitable recognition systems is crucial to allow robots to perform inspection and monitoring tasks in diverse conditions. The datasets available online are incomplete for these kinds of scenarios, so it is important to build datasets that cover real conditions in a simulated environment. Thus, a dataset was developed containing man-made objects present in the underwater environment. Moreover, the developed method (a Convolutional Neural Network) is presented and evaluated in diverse conditions. A comparative analysis and discussion between the proposed algorithm and the ResNet architecture is also presented. The obtained results showed that the developed method is appropriate for classifying 7 different critical objects with good performance.
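Evaluating a 7-class classifier "in diverse conditions" usually comes down to per-class metrics; a minimal per-class accuracy (recall) computation might look like the following (a generic evaluation sketch with toy labels, not the paper's pipeline):

```python
from collections import Counter

def per_class_accuracy(y_true, y_pred, n_classes=7):
    """Fraction of correctly classified samples per class,
    from parallel lists of ground-truth and predicted labels."""
    correct = Counter()
    total = Counter()
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return [correct[c] / total[c] if total[c] else 0.0
            for c in range(n_classes)]

# Toy labels: class 1 is confused with class 2 once.
y_true = [0, 1, 2, 3, 4, 5, 6, 0, 1, 2]
y_pred = [0, 1, 2, 3, 4, 5, 6, 0, 2, 2]
print(per_class_accuracy(y_true, y_pred))
```

Reporting the metric per class, rather than as a single overall accuracy, reveals which of the 7 object categories degrade under turbidity or lighting changes.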
2020
Authors
Campos, DF; Matos, A; Pinto, AM;
Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)
Abstract
The offshore wind power industry is an emerging and exponentially growing sector, which calls for cyclical monitoring and inspection to ensure the safety and efficiency of wind farm facilities. Thus, the multiple domains of the environment must be reconstructed, namely the emersed (aerial) and immersed (underwater) domains, to depict as much as possible of the offshore structures, from the wind turbines to the cable arrays. This work proposes the use of an Autonomous Surface Vehicle (ASV) to map both environments simultaneously, producing a multi-domain map through the fusion of navigational sensors (GPS and IMU) to localize the vehicle and aid the registration process for the perception sensors (3D Lidar and multibeam echosounder sonar). The performed experiments demonstrate the ability of the multi-domain mapping architecture to provide an accurate reconstruction of both scenarios in a single representation, using the odometry system as the initial seed and further improving the map with data filtering and registration processes.
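The registration seed described above amounts to expressing each sensor's returns in a common world frame using the GPS/IMU pose; a minimal 2D version of that transform could look like this (a hedged sketch under a planar, yaw-only motion assumption; the pose and point values are illustrative):

```python
import math

def to_world(points, pose):
    """Rotate sensor-frame (x, y) points by the IMU yaw and translate
    by the GPS position to register them in a common world frame."""
    x0, y0, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(x0 + c * x - s * y, y0 + s * x + c * y) for x, y in points]

# One lidar return above the waterline and one sonar return below,
# both taken from a vehicle at (10, 5) heading 90 degrees.
lidar = [(2.0, 0.0)]
sonar = [(0.0, -1.0)]
pose = (10.0, 5.0, math.pi / 2)
cloud = to_world(lidar, pose) + to_world(sonar, pose)
print(cloud)
```

Seeding registration (e.g. ICP) with this odometry-based transform, and then refining it, is the general pattern the abstract describes for merging the aerial and underwater point clouds into one map.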