2025
Authors
Barbosa, S; Dias, N; Almeida, C; Amaral, G; Ferreira, A; Camilo, A; Silva, E;
Publication
EARTH SYSTEM SCIENCE DATA
Abstract
A unique dataset of marine atmospheric electric field observations over the Atlantic Ocean is described. The data are relevant not only for atmospheric electricity studies, but more generally for studies of the Earth's atmosphere and climate variability, as well as space-Earth interaction studies. In addition to the atmospheric electric field data, the dataset includes simultaneous measurements of other atmospheric variables, including gamma radiation, visibility, and solar radiation. These ancillary observations not only support the interpretation and understanding of the atmospheric electric field data, but are also of interest in themselves. The entire framework, from data collection to final derived datasets, has been documented to ensure traceability and reproducibility of the whole data curation chain. All the data, from raw measurements to final datasets, are preserved in data repositories, each with an assigned DOI. Final datasets are available from the Figshare repository (https://figshare.com/projects/SAIL_Data/178500), and computational notebooks containing the code used at every step of the data curation chain are available from the Zenodo repository (https://zenodo.org/communities/sail, Project SAIL community, 2025).
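As an illustration of how such a published time series might be explored, here is a minimal sketch assuming a downloaded CSV file with a hypothetical name (sail_efield.csv) and hypothetical columns (timestamp, efield_vm); the actual file layout is documented in the Figshare and Zenodo records.

```python
# Minimal sketch, assuming a hypothetical CSV layout (not the project's actual files).
import pandas as pd

# Load the time series and use the timestamps as the index.
df = pd.read_csv("sail_efield.csv", parse_dates=["timestamp"], index_col="timestamp")

# Basic screening: keep physically plausible field magnitudes only (assumed thresholds).
clean = df[(df["efield_vm"] > 0) & (df["efield_vm"] < 1000)]

# Hourly means, a common first step before comparing with other atmospheric variables.
hourly = clean["efield_vm"].resample("1h").mean()
print(hourly.describe())
```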
2025
Authors
Loureiro, G; Dias, A; Almeida, J; Martins, A; Silva, E;
Publication
JOURNAL OF MARINE SCIENCE AND ENGINEERING
Abstract
Climate change has led to the need to transition to clean technologies, which depend on a number of critical metals. These metals, such as nickel, lithium, and manganese, are essential for developing batteries. However, the scarcity of these elements and the risks of disruptions to their supply chain have increased interest in exploiting resources on the deep seabed, particularly polymetallic nodules. Since the identification of these nodules must be efficient to minimize disturbance to the marine ecosystem, deep learning techniques have emerged as a potential solution. Traditional deep learning methods rely on convolutional layers to extract features, whereas recent transformer-based architectures use self-attention mechanisms to capture global context. This paper evaluates the performance of representative models from both categories across three tasks: detection, object segmentation, and semantic segmentation. The initial results suggest that transformer-based methods perform better on most evaluation metrics, but at the cost of higher computational resources. Furthermore, recent versions of You Only Look Once (YOLO) have obtained competitive results in terms of mean average precision.
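For readers unfamiliar with how detection models are scored, the sketch below shows the intersection-over-union (IoU) computation that underlies mean average precision; it is a generic illustration, not the evaluation code used in the paper.

```python
# Generic detection-metric sketch: IoU between two axis-aligned boxes.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2); returns IoU in [0, 1]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted nodule box typically counts as a true positive when IoU >= 0.5
# with a ground-truth box.
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.14
```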
2025
Authors
Amaral, G; Martins, JJ; Martins, P; Dias, A; Almeida, J; Silva, E;
Publication
2025 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS, ICUAS
Abstract
Knowledge of the precise 3D position of a target is a fundamental requirement in tracking applications. The lack of a single low-cost sensor capable of providing the three-dimensional position of a target makes it necessary to combine complementary sensors. This research presents a Local Positioning System (LPS) for outdoor scenarios, based on a data fusion approach for unmodified UAV tracking that combines a vision sensor and a mmWave radar. The proposed solution takes advantage of the radar's depth-observation ability and the potential of a neural network for image processing. We evaluated five data association approaches for cluttered radar data in order to obtain a reliable set of radar observations. The results demonstrate that the estimated target position is close to an exogenous ground truth obtained from a Visual Inertial Odometry (VIO) algorithm executed onboard the target UAV. Moreover, the developed system's architecture is designed to be scalable, allowing the addition of further observation stations, which will increase the accuracy of the estimation and extend the actuation area. To the best of our knowledge, this is the first work that uses a mmWave radar combined with a camera and a machine learning algorithm to track a UAV in an outdoor scenario.
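A simple way to picture the data association step is nearest-neighbour gating between the camera-derived bearing of the target and the radar detections; the sketch below is a hypothetical illustration, not one of the five strategies evaluated in the paper.

```python
# Hypothetical nearest-neighbour gating sketch for camera/radar association.
import math

def associate(camera_azimuth_rad, radar_detections, gate_rad=math.radians(5.0)):
    """radar_detections: list of (range_m, azimuth_rad). Returns best match or None."""
    best, best_err = None, gate_rad
    for rng, az in radar_detections:
        # Wrap the angular difference into [-pi, pi) before comparing with the gate.
        err = abs((az - camera_azimuth_rad + math.pi) % (2 * math.pi) - math.pi)
        if err < best_err:
            best, best_err = (rng, az), err
    return best

detections = [(12.4, math.radians(3.0)), (35.0, math.radians(40.0))]
print(associate(math.radians(2.0), detections))  # -> (12.4, ~0.052 rad)
```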
2025
Authors
Claro, RM; Neves, FSP; Pinto, AMG;
Publication
JOURNAL OF FIELD ROBOTICS
Abstract
The integration of precise landing capabilities into unmanned aerial vehicles (UAVs) is crucial for enabling autonomous operations, particularly in challenging environments such as offshore scenarios. This work proposes a heterogeneous perception system that incorporates a multimodal fiducial marker, designed to improve the accuracy and robustness of autonomous UAV landing in both daytime and nighttime operations. It presents ViTAL-TAPE, a visual transformer-based model that enhances the detection reliability of the landing target and overcomes changes in illumination conditions and viewpoint positions where traditional methods fail. ViTAL-TAPE is an end-to-end model that combines multimodal perceptual information, including photometric and radiometric data, to detect landing targets defined by a fiducial marker with 6 degrees of freedom. Extensive experiments have proved the ability of ViTAL-TAPE to detect fiducial markers with an error of 0.01 m. Moreover, experiments using the RAVEN UAV, designed to endure the challenging weather conditions of offshore scenarios, demonstrated that the autonomous landing technology proposed in this work achieves an accuracy of up to 0.1 m. This research also presents the first successful autonomous operation of a UAV in a commercial offshore wind farm with floating foundations installed in the Atlantic Ocean. These experiments showcased the system's accuracy, resilience, and robustness, resulting in a precise landing technology that extends the mission capabilities of UAVs, enabling autonomous and Beyond Visual Line of Sight offshore operations.
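Accuracy figures such as the reported 0.1 m are typically obtained by comparing the touchdown point with the marker centre; the sketch below, with hypothetical values, shows one way such a planar landing error can be computed (it is not the RAVEN flight-test pipeline).

```python
# Hypothetical landing-accuracy sketch: planar error between touchdown and marker centre.
import math

def planar_landing_error(touchdown_xy, target_xy):
    """2D position error in metres between the touchdown point and the marker centre."""
    dx = touchdown_xy[0] - target_xy[0]
    dy = touchdown_xy[1] - target_xy[1]
    return math.hypot(dx, dy)

print(planar_landing_error((0.06, -0.07), (0.0, 0.0)))  # ~0.092 m, within 0.1 m
```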
2025
Authors
Leite, PN; Pinto, AM;
Publication
INFORMATION FUSION
Abstract
Underwater environments pose unique challenges to optical systems due to physical phenomena that induce severe data degradation. Current imaging sensors rarely address these effects comprehensively, resulting in the need to integrate complementary information sources. This article presents a multimodal data fusion approach that combines information from diverse sensing modalities into a single dense and accurate three-dimensional representation. The proposed fusiNg tExture with apparent motion information for underwater Scene recOnstruction (NESO) encoder-decoder network leverages motion perception principles to extract relative depth cues, fusing them with texture information through an early fusion strategy. Evaluated on the FLSea-Stereo dataset, NESO outperforms state-of-the-art methods by 58.7%. Dense depth maps are achieved using multi-stage skip connections with attention mechanisms that ensure the propagation of key features across network levels. This representation is further enhanced by incorporating sparse but millimeter-precise depth measurements from active imaging techniques. A regression-based algorithm maps depth displacements between these heterogeneous point clouds, using the estimated curves to refine the dense NESO prediction. This approach achieves relative errors as low as 0.41% when reconstructing submerged anode structures, corresponding to metric improvements of up to 0.1124 m relative to the initial measurements. Validation at the ATLANTIS Coastal Testbed demonstrates the effectiveness of this multimodal fusion approach in obtaining robust three-dimensional representations in real underwater conditions.
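The refinement step can be pictured as fitting a regression from the dense prediction to the sparse, accurate depth samples and applying it map-wide; the sketch below uses synthetic data and a low-order polynomial purely for illustration, not the article's exact algorithm.

```python
# Illustrative refinement of a dense depth map with sparse accurate samples (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
dense = rng.uniform(0.5, 5.0, size=(240, 320))      # dense (biased) depth prediction, metres
rows = rng.integers(0, 240, 200)
cols = rng.integers(0, 320, 200)
sparse_truth = 0.95 * dense[rows, cols] + 0.05      # precise measurements at sampled pixels

# Fit predicted depth -> measured depth at the matched pixels, then apply to the whole map.
coeffs = np.polyfit(dense[rows, cols], sparse_truth, deg=2)
refined = np.polyval(coeffs, dense)

# Residual on the sampled pixels should be near zero after refinement.
print(float(np.abs(refined[rows, cols] - sparse_truth).mean()))
```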
2025
Authors
Cusi, S; Martins, A; Tomasi, B; Puillat, I;
Publication
Abstract