2020
Authors
Teixeira, B; Silva, H; Matos, A; Silva, E;
Publication
IEEE ACCESS
Abstract
This paper addresses Visual Odometry (VO) estimation in challenging underwater scenarios. Robot visual-based navigation faces several additional difficulties in the underwater context, which severely hinder both its robustness and the possibility for persistent autonomy in underwater mobile robots using visual perception capabilities. In this work, some of the most renowned VO and Visual Simultaneous Localization and Mapping (v-SLAM) frameworks are tested on complex underwater environments, assessing the extent to which they are able to perform accurately and reliably on robotic operational mission scenarios. The fundamental issue of precision, reliability and robustness across multiple operational scenarios, coupled with the rise in predominance of Deep Learning architectures in several Computer Vision application domains, has prompted a great volume of recent research concerning Deep Learning architectures tailored for visual odometry estimation. In this work, the performance and accuracy of Deep Learning methods in the underwater context is also benchmarked and compared to classical methods. Additionally, an extension of current work is proposed, in the form of a visual-inertial sensor fusion network aimed at correcting visual odometry estimate drift. Anchored on an inertial supervision learning scheme, our network managed to improve upon trajectory estimates, producing both metrically better estimates as well as trajectory shapes that more closely follow the ground truth.
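The abstract does not detail the fusion architecture, so the following is only a minimal sketch of the general idea: an inertial-supervised network that takes a visual odometry relative pose and a window of IMU samples and regresses a drift correction. All layer sizes and names are assumptions for illustration, not the authors' model.

```python
# Hypothetical sketch of a visual-inertial fusion network for VO drift
# correction; architecture details are assumptions, not the paper's model.
import torch
import torch.nn as nn

class VIFusionNet(nn.Module):
    def __init__(self, imu_dim=6, hidden=64):
        super().__init__()
        # Encode a short window of IMU samples (gyro + accel) with a GRU.
        self.imu_encoder = nn.GRU(imu_dim, hidden, batch_first=True)
        # Fuse the IMU feature with the visual odometry relative pose
        # (3 translation + 3 rotation parameters) and regress a correction.
        self.fusion = nn.Sequential(
            nn.Linear(hidden + 6, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 6),
        )

    def forward(self, vo_pose, imu_window):
        _, h = self.imu_encoder(imu_window)           # h: (1, B, hidden)
        feat = torch.cat([h.squeeze(0), vo_pose], dim=-1)
        return vo_pose + self.fusion(feat)            # corrected relative pose


if __name__ == "__main__":
    net = VIFusionNet()
    vo = torch.zeros(2, 6)            # batch of VO relative pose estimates
    imu = torch.zeros(2, 100, 6)      # 100 IMU samples per frame pair
    corrected = net(vo, imu)
    # Training would supervise `corrected` against inertially derived motion.
    print(corrected.shape)            # torch.Size([2, 6])
```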
2020
Authors
Pinto, AM; Matos, AC;
Publication
INFORMATION FUSION
Abstract
This article presents an innovative hybrid imaging system that provides dense and accurate 3D information from harsh underwater environments. The proposed system is called MARESye and captures the advantages of both active and passive imaging methods: multiple light stripe range (LSR) and a photometric stereo (PS) technique, respectively. This hybrid approach fuses information from these techniques through a data-driven formulation to extend the measurement range and to produce high-density 3D estimations in dynamic underwater environments. The system is driven by a gating timing approach to reduce the impact of several photometric issues related to underwater environments, such as diffuse reflection, water turbidity and non-uniform illumination. Moreover, MARESye synchronizes and matches the acquisition of images with sub-sea phenomena, which leads to clear pictures (with a high signal-to-noise ratio). Experiments conducted in realistic environments showed that MARESye is able to provide reliable, high-density and accurate 3D data. Moreover, the experiments demonstrated that the performance of MARESye is less affected by sub-sea conditions, since its SSIM index was 0.655 in high-turbidity waters, whereas conventional imaging techniques obtained 0.328 in similar testing conditions. Therefore, the proposed system represents a valuable contribution for the inspection of maritime structures as well as for the navigation procedures of autonomous underwater vehicles during close-range operations.
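The paper's data-driven fusion formulation is not reproduced in the abstract; the sketch below only illustrates the underlying intuition of combining a sparse but metric laser-stripe depth map with a dense photometric-stereo depth map. The affine-fit approach and function names are assumptions, not MARESye's actual method.

```python
# Illustrative sketch (not MARESye's formulation) of fusing a sparse but
# accurate laser-stripe depth map with a dense photometric-stereo depth map.
import numpy as np

def fuse_depths(lsr_depth, ps_depth):
    """lsr_depth: sparse metric depth (NaN where no stripe measurement),
    ps_depth: dense but scale-ambiguous photometric-stereo depth."""
    valid = ~np.isnan(lsr_depth)
    # Fit an affine correction (scale + offset) mapping PS depth onto the
    # metric laser measurements, then apply it to the dense map.
    A = np.stack([ps_depth[valid], np.ones(valid.sum())], axis=1)
    scale, offset = np.linalg.lstsq(A, lsr_depth[valid], rcond=None)[0]
    fused = scale * ps_depth + offset
    # Keep the direct laser measurements wherever they exist.
    fused[valid] = lsr_depth[valid]
    return fused

if __name__ == "__main__":
    ps = np.random.rand(4, 4) + 1.0
    lsr = np.full((4, 4), np.nan)
    lsr[::2, ::2] = 2.0 * ps[::2, ::2] + 0.1   # synthetic "metric" samples
    print(fuse_depths(lsr, ps))
```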
2020
Authors
Figueiredo, AB; Matos, AC;
Publication
APPLIED SCIENCES-BASEL
Abstract
This paper presents a high-performance (low computational demand) monocular vision-based system for a hovering Autonomous Underwater Vehicle (AUV) in the context of an autonomous docking process: the MViDO system (Monocular Vision-based Docking Operation aid). The MViDO consists of three sub-modules: a pose estimator, a tracker and a guidance sub-module. The system is based on a single camera and a target of three spherical color markers that signals the docking station. The MViDO system allows the pose estimation of the three color markers even in situations of temporary occlusion, and it also rejects outliers and false detections. This paper also describes the design and implementation of the MViDO guidance module for the docking manoeuvres. We address the problem of driving the AUV to a docking station with the help of the visual markers detected by the on-board camera, and show that by adequately choosing the references for the linear degrees of freedom of the AUV, the AUV is conducted to the dock while keeping those markers in the field of view of the on-board camera. The main concepts behind the MViDO are provided and a complete characterization of the developed system is presented from the formal and experimental points of view. To test and evaluate the MViDO detector and pose estimator modules, we created a ground truth setup. To test and evaluate the tracker module we used the MARES AUV and the designed target in a four-meter tank. The performance of the proposed guidance law was tested in Simulink/MATLAB.
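The abstract only states that the linear degree-of-freedom references are chosen so the markers stay in the camera's field of view; the exact guidance law is not given. The sketch below is a hypothetical, heavily simplified illustration of that idea (proportional image-based references); gains and names are assumptions.

```python
# Hypothetical sketch of image-based guidance in the spirit described above:
# keep the detected markers centered in the image while closing in on the
# dock. Gains and variable names are assumptions for illustration only.
import numpy as np

def guidance_references(marker_pixels, image_size,
                        k_sway=0.002, k_heave=0.002, surge_ref=0.3):
    """marker_pixels: (3, 2) pixel coordinates of the three color markers.
    Returns (surge, sway, heave) velocity references for the AUV."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    u_mean, v_mean = marker_pixels.mean(axis=0)
    sway_ref = k_sway * (u_mean - cx)    # steer sideways toward image center
    heave_ref = k_heave * (v_mean - cy)  # adjust depth toward image center
    return surge_ref, sway_ref, heave_ref

if __name__ == "__main__":
    markers = np.array([[700.0, 420.0], [760.0, 420.0], [730.0, 470.0]])
    print(guidance_references(markers, image_size=(1280, 720)))
```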
2020
Authors
Campos, DF; Matos, A; Pinto, AM;
Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)
Abstract
The offshore wind power industry is an emerging and exponentially growing sector, which calls for cyclical monitoring and inspection to ensure the safety and efficiency of wind farm facilities. Thus, the multiple domains of the environment must be reconstructed, namely the emersed (aerial) and immersed (underwater) domains, to depict the offshore structures, from the wind turbines to the cable arrays, as completely as possible. This work proposes the use of an Autonomous Surface Vehicle (ASV) to map both environments simultaneously, producing a multi-domain map through the fusion of navigational sensors (GPS and IMU) to localize the vehicle and aid the registration process for the perception sensors (3D Lidar and multibeam echosounder sonar). The performed experiments demonstrate the ability of the multi-domain mapping architecture to provide an accurate reconstruction of both scenarios in a single representation, using the odometry system as the initial seed and further improving the map with data filtering and registration processes.
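As a minimal sketch of the odometry-seeded registration step described above (names and the planar-pose simplification are assumptions, not the paper's implementation): both the Lidar and multibeam point clouds are placed in a common world frame using the GPS/IMU pose before any fine registration or filtering.

```python
# Minimal illustrative sketch of seeding multi-domain map registration with
# the vehicle's odometry pose; assumed names, not the paper's code.
import numpy as np

def pose_to_matrix(x, y, z, yaw):
    """Build a 4x4 world-from-vehicle transform from a planar odometry pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, z]
    return T

def to_world(points, T_world_sensor):
    """points: (N, 3) in the sensor frame -> (N, 3) in the world frame."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (T_world_sensor @ homog.T).T[:, :3]

if __name__ == "__main__":
    T = pose_to_matrix(10.0, 5.0, 0.0, np.pi / 4)
    lidar_pts = np.random.rand(100, 3) * [20, 20, 5]     # emersed domain
    sonar_pts = np.random.rand(100, 3) * [20, 20, -10]   # immersed domain
    world_map = np.vstack([to_world(lidar_pts, T), to_world(sonar_pts, T)])
    print(world_map.shape)   # (200, 3): a single multi-domain representation
```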
2020
Authors
Teixeira, FB; Moreira, N; Abreu, N; Ferreira, B; Ricardo, M; Campos, R;
Publication
2020 16TH INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS (WIMOB)
Abstract
The use of Autonomous Underwater Vehicles (AUVs) is increasingly seen as a cost-effective way to carry out underwater missions. Due to their long endurance and the set of sensors onboard, AUVs may collect large amounts of data, in the order of gigabytes, which need to be transferred to shore. State-of-the-art wireless technologies suffer either from low bitrates or limited range. Since surfacing may be impractical, especially for deep-sea operations, long-range underwater data transfer is limited to low-bitrate acoustic communications, precluding the timely transmission of large amounts of data. The use of data mules combined with short-range, high-bitrate RF or optical communications has been proposed as a solution to overcome this problem. In this paper we describe the implementation and validation of UDMSim, a simulation platform for underwater data-muling systems that combines an AUV simulator and the Network Simulator 3 (ns-3). The results presented in this paper show a good match between UDMSim, a theoretical model, and the experimental results obtained using an underwater testbed when no localization errors exist. When these errors are present, the simulator is able to reproduce the navigation of AUVs that act as data mules, adjust the throughput, and simulate the signal and connection losses that the theoretical model cannot predict but that will occur in reality. UDMSim is made available to the community to support easy and faster evaluation of data-muling-oriented underwater communications solutions, and to enable offline replication of real-world experiments.
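A back-of-the-envelope sketch (my assumptions, not UDMSim's model) of why data muling with short-range, high-bitrate links pays off: estimate how much data a mule can collect while it is within radio range of a node, and compare it with a long-range acoustic link over the same interval.

```python
# Rough estimate of data transferred per pass by an AUV acting as a data
# mule; the straight-pass geometry and example numbers are assumptions.
def data_per_pass(range_m, speed_mps, bitrate_bps):
    """Approximate bytes transferred while the mule is within radio range,
    assuming a straight pass through the coverage circle at constant speed."""
    time_in_range = 2.0 * range_m / speed_mps       # chord through the circle
    return bitrate_bps * time_in_range / 8.0        # bytes

if __name__ == "__main__":
    # e.g. ~30 m RF range, 1 m/s AUV speed, 10 Mbit/s short-range link
    print(f"{data_per_pass(30, 1.0, 10e6) / 1e6:.0f} MB per pass")
    # versus a ~10 kbit/s acoustic link over the same 60 s interval
    print(f"{10e3 * 60 / 8 / 1e6:.3f} MB over the same interval")
```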
2020
Authors
Fernandes, D; Pinheiro, F; Dias, A; Martins, A; Almeida, J; Silva, E;
Publication
ROBOTICS IN EDUCATION: CURRENT RESEARCH AND INNOVATIONS
Abstract
Teaching robotics based on challenges from our daily lives is always more motivating for students and teachers. Several self-driving competitions have emerged recently, challenging students and researchers to develop solutions for autonomous driving systems. The Portuguese Festival Nacional de Robótica (FNR) Autonomous Driving Competition is one such example. Even though the competition is an exciting challenge, it requires the development of real robots, which implies several limitations that may discourage students and compromise a fluid teaching process. Simulation can help overcome this limitation and assume an important role as a tool, providing a low-effort, low-cost solution that allows students and researchers to keep their focus on the main issues. This paper presents a simulation environment for the FNR, providing an overall framework able to support the exploration of robotics topics such as perception, navigation, data fusion and deep learning based on the autonomous driving competition.