2021
Authors
da Silva, DQ; dos Santos, FN; Sousa, AJ; Filipe, V; Boaventura Cunha, J;
Publication
COMPUTATION
Abstract
Robotic navigation and perception for forest management are challenging due to the many obstacles that must be detected and avoided and the sharp illumination changes. Advanced perception systems are needed because they enable the development of robotic and machinery solutions for smarter, more precise, and more sustainable forestry. This article presents a state-of-the-art review of unimodal and multimodal perception in forests, detailing current work on perception using a single type of sensor (unimodal) and on combining data from different kinds of sensors (multimodal). This work also compares existing perception datasets in the literature and presents a new multimodal dataset, composed of images and laser scanning data, as a contribution to this research field. Lastly, a critical analysis of the collected works is conducted, identifying strengths and research trends in this domain.
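As an illustration only (not the dataset's actual tooling or layout), the sketch below shows one plausible way to organise a multimodal image + LiDAR collection such as the one contributed here: pairing each camera frame with the nearest laser scan by timestamp. The function name, the file-free interface, and the 50 ms tolerance are assumptions.

```python
# Hypothetical helper for a multimodal (image + laser scan) dataset:
# match each image timestamp to the closest scan timestamp within max_dt seconds.
from bisect import bisect_left


def pair_by_timestamp(image_stamps, scan_stamps, max_dt=0.05):
    scan_stamps = sorted(scan_stamps)
    pairs = []
    for t_img in image_stamps:
        i = bisect_left(scan_stamps, t_img)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(scan_stamps)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(scan_stamps[k] - t_img))
        if abs(scan_stamps[j] - t_img) <= max_dt:
            pairs.append((t_img, scan_stamps[j]))
    return pairs


if __name__ == "__main__":
    imgs = [0.00, 0.10, 0.20]
    scans = [0.01, 0.12, 0.19, 0.30]
    print(pair_by_timestamp(imgs, scans))  # [(0.0, 0.01), (0.1, 0.12), (0.2, 0.19)]
```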
2021
Authors
Reis-Pereira, M; Martins, RC; Silva, AF; Tavares, F; Santos, F; Cunha, M;
Publication
Chemistry Proceedings
Abstract
2021
Authors
Mendonça, H; Lima, J; Costa, P; Moreira, AP; dos Santos, FN;
Publication
Optimization, Learning Algorithms and Applications - First International Conference, OL2A 2021, Bragança, Portugal, July 19-21, 2021, Revised Selected Papers
Abstract
The COVID-19 outbreak created the need to develop smart disinfection systems, not only to protect people who frequent public spaces but also to protect those who must enter contaminated areas. In this paper, a human-detection smart sensor is developed for an autonomous disinfection mobile robot that uses Ultraviolet C (UVC) light for the disinfection task and stops the disinfection system whenever a human is detected around the robot, in any direction. UVC light is dangerous to humans, hence the need for a human detection system that protects them by disabling the disinfection process as soon as a person is detected. This system uses a Raspberry Pi Camera with a Single Shot Detector (SSD) MobileNet neural network to identify and detect persons. It also has a FLIR 3.5 thermal camera whose temperature measurements are used to detect humans within a certain temperature range, with normal human skin temperature as the reference value for defining that range. The results show that fusing the data from both sensors improves system performance compared with using each sensor individually. One of the tests performed shows that the system can distinguish a person in a picture from a real person by fusing the thermal-camera and visible-light-camera data. The detection results validate the proposed system.
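A minimal sketch of the fusion idea described above, not the authors' code: visible-light person detections are confirmed only if the corresponding thermal region contains temperatures in a human skin range. The detector itself is abstracted away; the box format, the skin-temperature band, and the warm-fraction threshold are assumptions.

```python
import numpy as np

# Assumed skin-temperature band in degrees Celsius; the paper's exact thresholds
# are not given in the abstract.
SKIN_TEMP_MIN_C = 30.0
SKIN_TEMP_MAX_C = 38.0


def confirm_with_thermal(detections, thermal_c, score_thresh=0.5):
    """Keep only detections whose region contains temperatures in the human range.

    detections: iterable of (xmin, ymin, xmax, ymax, score) in thermal-image pixels.
    thermal_c:  2D numpy array of per-pixel temperatures in Celsius.
    """
    confirmed = []
    for xmin, ymin, xmax, ymax, score in detections:
        if score < score_thresh:
            continue
        roi = thermal_c[int(ymin):int(ymax), int(xmin):int(xmax)]
        if roi.size == 0:
            continue
        # A real person produces a warm blob inside the box; a person in a
        # photograph does not, which is how the fused system can tell them apart.
        warm_fraction = np.mean((roi >= SKIN_TEMP_MIN_C) & (roi <= SKIN_TEMP_MAX_C))
        if warm_fraction > 0.05:  # assumed fraction threshold
            confirmed.append((xmin, ymin, xmax, ymax, score))
    return confirmed


if __name__ == "__main__":
    # Toy example: one "person" box over a warm patch, one over a cold background.
    thermal = np.full((120, 160), 22.0)
    thermal[40:80, 60:100] = 34.0  # warm blob
    boxes = [(55, 35, 105, 85, 0.9), (5, 5, 40, 40, 0.8)]
    print(confirm_with_thermal(boxes, thermal))
```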
2021
Authors
Barroso, TG; Ribeiro, L; Gregório, H; Santos, F; Martins, RC;
Publication
Chemistry Proceedings
Abstract
2022
Authors
Aguiar, AS; dos Santos, FN; Sobreira, H; Boaventura Cunha, J; Sousa, AJ;
Publication
FRONTIERS IN ROBOTICS AND AI
Abstract
Developing ground robots for agriculture is a demanding task. Robots should be capable of performing tasks such as spraying, harvesting, or monitoring. However, the lack of structure in agricultural scenes challenges the implementation of localization and mapping algorithms. Thus, research and development of localization techniques is essential to advance agricultural robotics. To address this issue, we propose an algorithm called VineSLAM, suitable for localization and mapping in agriculture. This approach uses both point and semiplane features extracted from 3D LiDAR data to map the environment and localize the robot using a novel particle filter that considers both feature modalities. The numeric stability of the algorithm was tested using simulated data, and the proposed methodology proved capable of localizing a robot using only three orthogonal semiplanes. Moreover, the entire VineSLAM pipeline was compared against a state-of-the-art approach in three real-world experiments in a woody-crop vineyard. Results show that our approach can localize the robot precisely even in long and symmetric vineyard corridors, outperforming the state-of-the-art algorithm in this context.
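To illustrate the two-modality weighting idea in the spirit of the description above (this is not the VineSLAM implementation): a simplified 2D particle filter whose particle weights combine a point-landmark likelihood with a plane-distance likelihood. The toy map, the 2D simplification, the noise levels, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy map: point landmarks (x, y) and planes given as (normal angle, offset),
# i.e. the set of points p with n . p = offset.
MAP_POINTS = np.array([[5.0, 2.0], [8.0, -1.0]])
MAP_PLANES = np.array([[0.0, 10.0], [np.pi / 2, 4.0]])


def predict(particles, v, w, dt, std=(0.05, 0.02)):
    """Propagate (x, y, theta) particles with a noisy unicycle motion model."""
    x, y, th = particles.T
    x = x + (v + rng.normal(0, std[0], len(x))) * dt * np.cos(th)
    y = y + (v + rng.normal(0, std[0], len(y))) * dt * np.sin(th)
    th = th + (w + rng.normal(0, std[1], len(th))) * dt
    return np.stack([x, y, th], axis=1)


def weight(particles, point_ranges, plane_dists, sigma_pt=0.3, sigma_pl=0.2):
    """Weight = point-feature likelihood x plane-feature likelihood."""
    w = np.ones(len(particles))
    # Point features: compare measured ranges with ranges expected from each particle.
    for (lx, ly), z in zip(MAP_POINTS, point_ranges):
        expected = np.hypot(lx - particles[:, 0], ly - particles[:, 1])
        w *= np.exp(-0.5 * ((z - expected) / sigma_pt) ** 2)
    # Plane features: compare measured distances to each plane with expected ones.
    for (ang, off), z in zip(MAP_PLANES, plane_dists):
        n = np.array([np.cos(ang), np.sin(ang)])
        expected = off - particles[:, :2] @ n  # signed distance to the plane
        w *= np.exp(-0.5 * ((z - expected) / sigma_pl) ** 2)
    return w / w.sum()


def resample(particles, w):
    """Multinomial resampling proportional to the particle weights."""
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

The point of the sketch is the weighting step: each particle's weight multiplies the likelihoods of both feature modalities, so planes constrain the pose even when point features are ambiguous, as in symmetric vineyard corridors.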