2017
Authors
Cruz, N; Abreu, N; Almeida, J; Almeida, R; Alves, J; Dias, A; Ferreira, B; Ferreira, H; Gonçalves, C; Martins, A; Melo, J; Pinto, A; Pinto, V; Silva, A; Silva, H; Matos, A; Silva, E;
Publication
OCEANS 2017 - ANCHORAGE
Abstract
This paper describes the PISCES system, an integrated approach for fully autonomous mapping of large areas of the ocean in deep waters. A deep-water AUV will use an acoustic navigation system to compute its position with bounded error. The range limitation will be overcome by a moving baseline scheme, with the acoustic sources installed in robotic surface vessels following previously coordinated trajectories. In order to save power, all systems will have synchronized clocks and implement a One-Way Travel Time (OWTT) scheme. The mapping system will combine an off-the-shelf MBES with a new long-range bathymetry system, with a source on a moving surface vessel and the receivers on board the AUV. The system is being prepared to participate in round one of the XPRIZE challenge.
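The OWTT idea mentioned in the abstract can be illustrated with a minimal sketch (a generic illustration under the stated assumptions, not the PISCES implementation): with synchronized clocks, a single one-way acoustic travel time yields a range estimate without a reply ping, halving acoustic traffic and power use.

```python
# Minimal one-way travel time (OWTT) ranging sketch.
# Assumes synchronized clocks and a constant nominal sound speed.
SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

def owtt_range(t_transmit, t_receive, sound_speed=SOUND_SPEED):
    """Range from a single one-way travel time between synchronized clocks."""
    return (t_receive - t_transmit) * sound_speed
```

For example, a ping transmitted at t = 0.0 s and received 2.0 s later corresponds to a 3000 m range at the nominal sound speed.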
2019
Authors
Freitas, S; Silva, H; Almeida, JM; Silva, E;
Publication
INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
Abstract
This work addresses a hyperspectral imaging system for maritime surveillance using unmanned aerial vehicles. The objective was to detect the presence of vessels using purely spatial and spectral hyperspectral information. To accomplish this objective, we implemented a novel 3-D convolutional neural network (CNN) approach and compared it against implementations of two other state-of-the-art methods: spectral angle mapper and hyperspectral derivative anomaly detection. The hyperspectral imaging system was developed during the SUNNY project, and the methods were tested on data collected during the project's final demonstration at Sao Jacinto Air Force Base, Aveiro (Portugal). The results show that a 3-D CNN improves recall, depending on the class, by between 27% and over 40% compared to the spectral angle mapper and hyperspectral derivative anomaly detection approaches. This shows that 3-D CNN deep learning techniques combining spectral and spatial information can improve target classification accuracy in hyperspectral imaging maritime surveillance applications with unmanned aerial vehicles.
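One of the baselines compared above, the spectral angle mapper, has a standard closed-form core that can be sketched in a few lines (a generic formulation; the function names and threshold are illustrative, not taken from the paper): each pixel spectrum is labeled with the reference spectrum that subtends the smallest spectral angle.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(pixel, references, threshold=0.5):
    """Label the pixel with the closest reference spectrum, if under threshold."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
    best = min(angles, key=angles.get)
    return best if angles[best] <= threshold else None
```

Because the angle is invariant to spectrum magnitude, SAM is robust to illumination scaling, which is one reason it remains a common baseline.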
2019
Authors
Teixeira, B; Silva, H; Matos, A; Silva, E;
Publication
OCEANS 2019 MTS/IEEE SEATTLE
Abstract
This paper addresses the use of deep learning approaches for visual-based navigation in confined underwater environments. State-of-the-art algorithms have shown the tremendous potential of deep learning architectures for visual navigation, though they are still mostly outperformed by classical feature-based techniques. In this work, we apply current state-of-the-art deep learning methods for visual-based robot navigation to the more challenging underwater environment, providing both an underwater visual dataset acquired in real operational mission scenarios and an assessment of state-of-the-art algorithms in the underwater context. We extend current work by proposing a novel pose optimization architecture for correcting visual odometry drift using a Visual-Inertial fusion network, consisting of a neural network architecture anchored on an inertial supervision learning scheme. Our Visual-Inertial Fusion Network improved trajectory estimates by an average of 50%, also producing more visually consistent trajectories for both of our underwater application scenarios.
2020
Authors
Teixeira, B; Silva, H; Matos, A; Silva, E;
Publication
IEEE ACCESS
Abstract
This paper addresses Visual Odometry (VO) estimation in challenging underwater scenarios. Robot visual-based navigation faces several additional difficulties in the underwater context, which severely hinder both its robustness and the possibility of persistent autonomy in underwater mobile robots using visual perception. In this work, some of the most renowned VO and Visual Simultaneous Localization and Mapping (v-SLAM) frameworks are tested in complex underwater environments, assessing the extent to which they perform accurately and reliably in robotic operational mission scenarios. The fundamental issue of precision, reliability, and robustness across multiple operational scenarios, coupled with the rising predominance of Deep Learning architectures in several Computer Vision application domains, has prompted a great volume of recent research on Deep Learning architectures tailored for visual odometry estimation. In this work, the performance and accuracy of Deep Learning methods in the underwater context are also benchmarked and compared to classical methods. Additionally, an extension of current work is proposed, in the form of a visual-inertial sensor fusion network aimed at correcting visual odometry drift. Anchored on an inertial supervision learning scheme, our network improved upon trajectory estimates, producing metrically better estimates that also mimic the trajectory shape more consistently.
2021
Authors
Freitas, S; Silva, H; Silva, E;
Publication
REMOTE SENSING
Abstract
This paper addresses the development of a remote hyperspectral imaging system for the detection and characterization of marine litter concentrations in an oceanic environment. The work performed in this paper is the following: (i) an in-situ characterization conducted in an outdoor laboratory environment with the hyperspectral imaging system to obtain the spatial and spectral response of a batch of marine litter samples; (ii) a real hyperspectral image dataset acquired using manned and unmanned aerial platforms over artificial targets composed of the materials analyzed in the laboratory; (iii) a comparison of the spatial and spectral responses obtained in laboratory conditions with the remote observation data acquired during the dataset flights; (iv) the implementation of two supervised machine learning methods, namely Random Forest (RF) and Support Vector Machines (SVM), for marine litter artificial target detection based on the previous training. The results show an automated marine litter detection capability with a 70-80% detection precision rate for all three targets, compared against ground-truth pixels, as well as recall rates over 50%.
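The precision and recall figures above are pixel-level detection metrics; a minimal sketch of how they are computed against ground-truth pixels (generic, not the paper's evaluation code):

```python
def precision_recall(predicted, ground_truth):
    """Pixel-level precision and recall for a flattened binary detection mask."""
    tp = sum(p and g for p, g in zip(predicted, ground_truth))   # true positives
    fp = sum(p and not g for p, g in zip(predicted, ground_truth))  # false positives
    fn = sum(g and not p for p, g in zip(predicted, ground_truth))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Precision penalizes false alarms over water pixels, while recall penalizes missed litter pixels, which is why both are reported for each target.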
2021
Authors
Teixeira, B; Silva, H;
Publication
U.Porto Journal of Engineering
Abstract
Achieving persistent and reliable autonomy for mobile robots in challenging field mission scenarios is a long-standing quest for the Robotics research community. Deep learning-based LIDAR odometry is attracting increasing research interest as a technological solution to the robot navigation problem and is showing great potential for the task. In this work, an examination of the benefits of leveraging learning-based encoding representations of real-world data is provided. In addition, a broad perspective on emergent, robust Deep Learning techniques to track motion and estimate scene structure for real-world applications is the focus of a deeper analysis and comprehensive comparison. Furthermore, existing Deep Learning approaches and techniques for point cloud odometry tasks are explored, and the main technological solutions are compared and discussed. Open challenges are also laid out for the reader, hopefully offering guidance to future researchers in their quest to apply deep learning to complex 3D non-matrix data to tackle localization and robot navigation problems.
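As a point of reference for the point cloud odometry methods surveyed, the classical geometric core they are benchmarked against is rigid registration; a minimal sketch of the closed-form Kabsch alignment step (a standard textbook technique, not taken from any surveyed paper):

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - cs).T @ (dst - cd)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Iterating this step with re-estimated correspondences gives ICP, the classical baseline; the learned methods discussed in the survey aim to replace or bootstrap that correspondence search with learned encodings.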