2022
Authors
Santos, LC; Santos, FN; Aguiar, AS; Valente, A; Costa, P;
Publication
2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)
Abstract
Robotics will play an essential role in agriculture. Deploying agricultural robots on the farm is still a challenging task due to the terrain's irregularity and size. Optimal path planning solutions may fail in larger terrains due to memory requirements as the search space increases. This work presents a novel open-source solution called AgRob Topologic Path Planner, which is capable of performing path planning operations using a hybrid map with topological and metric representations. A local A* algorithm pre-plans local paths in local metric maps and saves them into the topological structure. Then, a graph-based A* performs a global search in the topological map, using the saved local paths to provide the full trajectory. Our results demonstrate that this solution can handle large maps (5 hectares) using just 0.002% of the search space required by a previous solution.
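The two-level idea in this abstract — pre-planned local paths stored on the edges of a topological graph, stitched together by a global graph search — can be sketched as follows. This is an illustrative toy, not the AgRob implementation; the graph, node names, costs, and local paths are invented for the example.

```python
import heapq

# Hypothetical sketch: each edge of the topological graph carries a
# pre-planned local path (a list of metric waypoints); the global search
# runs over the graph only and concatenates the stored local paths.

# topological graph: node -> list of (neighbour, cost, precomputed local path)
graph = {
    "A": [("B", 2.0, [(0, 0), (1, 0), (2, 0)])],
    "B": [("C", 1.5, [(2, 0), (2, 1), (2, 2)])],
    "C": [],
}

def global_plan(start, goal):
    """Best-first search over the topological graph; returns the
    concatenation of the stored local paths along the cheapest route."""
    frontier = [(0.0, start, [])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, edge_cost, local_path in graph[node]:
            heapq.heappush(frontier, (cost + edge_cost, nbr, path + local_path))
    return None  # goal unreachable

print(global_plan("A", "C"))
```

Because the global search never touches the metric grids, its frontier grows with the number of topological nodes rather than with the number of grid cells, which is the source of the memory savings the abstract reports.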
2022
Authors
Oliveira, M; Pedrosa, E; de Aguiar, AP; Rato, DFPD; dos Santos, FN; Dias, P; Santos, V;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
The fusion of data from different sensors often requires that an accurate geometric transformation between the sensors is known. The procedure by which these transformations are estimated is known as sensor calibration. The vast majority of calibration approaches focus on specific pairwise combinations of sensor modalities, making them unsuitable for calibrating robotic systems containing multiple sensors of varied modalities. This paper presents a novel calibration methodology that is applicable to multi-sensor, multi-modal robotic systems. The approach formulates the calibration as an extended optimization problem, in which the poses of the calibration patterns are also estimated. It makes use of a topological representation of the coordinate frames in the system in order to recalculate the poses of the sensors throughout the optimization. Sensor poses are retrieved from the combination of geometric transformations which are atomic, in the sense that they are indivisible. As such, we refer to this approach as ATOM - Atomic Transformations Optimization Method. This makes the approach applicable to different calibration problems, such as sensor to sensor, sensor in motion, or sensor to coordinate frame. Additionally, the proposed approach provides advanced functionalities, integrated into ROS, designed to support the several stages of a complete calibration procedure. Results covering several robotic platforms and a large spectrum of calibration problems show that the methodology is in fact general, and achieves calibrations which are as accurate as the ones provided by state-of-the-art methods designed to operate only for specific combinations of pairwise modalities.
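The core idea — a sensor's pose is never stored directly but recomposed from indivisible ("atomic") transformations along the frame chain, so optimizing any single transform propagates to every pose depending on it — can be illustrated with plain homogeneous matrices. The frame names and translation values below are invented for the example, not taken from ATOM.

```python
# Minimal sketch of atomic-transform composition, assuming a simple chain
# base -> arm -> camera with translation-only transforms for clarity.

def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Homogeneous 4x4 translation matrix."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# atomic transforms: (parent frame, child frame) -> 4x4 matrix
atomic = {
    ("base", "arm"): translation(0.0, 0.0, 0.5),
    ("arm", "camera"): translation(0.1, 0.0, 0.2),
}

def pose(chain):
    """Compose the atomic transforms along a chain of frame names."""
    result = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for parent, child in zip(chain, chain[1:]):
        result = matmul4(result, atomic[(parent, child)])
    return result

T = pose(["base", "arm", "camera"])
print(T[0][3], T[1][3], T[2][3])  # camera position expressed in the base frame
```

In an optimizer, the entries of `atomic` would be the decision variables; because `pose` recomposes the chain on every evaluation, updating one atomic transform automatically updates every sensor pose that passes through it.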
2023
Authors
Pinheiro, I; Aguiar, A; Figueiredo, A; Pinho, T; Valente, A; Santos, F;
Publication
APPLIED SCIENCES-BASEL
Abstract
Currently, Unmanned Aerial Vehicles (UAVs) are considered in the development of various applications in agriculture, which has led to the expansion of the agricultural UAV market. However, Nano Aerial Vehicles (NAVs) are still underutilised in agriculture. NAVs are characterised by a maximum wing length of 15 centimetres and a weight of less than 50 g. Due to their physical characteristics, NAVs have the advantage of being able to approach and perform tasks with more precision than conventional UAVs, making them suitable for precision agriculture. This work aims to contribute an open-source solution known as Nano Aerial Bee (NAB) to enable further research and development on the use of NAVs in an agricultural context. The purpose of NAB is to mimic and assist bees in the context of pollination. We designed this open-source solution by taking into account existing state-of-the-art solutions and the requirements of pollination activities. This paper presents the relevant background and work carried out in this area by analysing papers on the topic of NAVs. The development of this prototype is rather complex given the interactions between the different hardware components and the need to achieve autonomous flight capable of pollination. We adequately describe and discuss these challenges in this work. Besides the open-source NAB solution, we train three different versions of YOLO (YOLOv5, YOLOv7, and YOLOR) on an original dataset (Flower Detection Dataset) containing 206 images of a group of eight flowers and on a public dataset (TensorFlow Flower Dataset), which had to be annotated for detection (yielding the TensorFlow Flower Detection Dataset). The results of the models trained on the Flower Detection Dataset are shown to be satisfactory, with YOLOv7 and YOLOR achieving the best performance, with 98% precision, 99% recall, and 98% F1 score. The performance of these models is evaluated on the TensorFlow Flower Detection Dataset to test their robustness. The three YOLO models are also trained on the TensorFlow Flower Detection Dataset to better understand the results. In this case, YOLOR is shown to obtain the most promising results, with 84% precision, 80% recall, and 82% F1 score. The results obtained using the Flower Detection Dataset are used for NAB guidance, detecting the flower's relative position in the image, which defines the command the NAB executes.
2022
Authors
Aguiar, AS; dos Santos, FN; Santos, LC; Sousa, AJ; Boaventura Cunha, J;
Publication
JOURNAL OF FIELD ROBOTICS
Abstract
Robotics in agriculture faces several challenges, such as the unstructured characteristics of the environments, variability of luminosity conditions for perception systems, and vast field extensions. To implement autonomous navigation systems in these conditions, robots should be able to operate during large periods and travel long trajectories. For this reason, it is essential that simultaneous localization and mapping algorithms can perform in large-scale and long-term operating conditions. One of the main challenges for these methods is keeping memory usage low while mapping extensive environments. This work tackles this issue, proposing a localization and mapping approach called VineSLAM that uses a topological mapping architecture to manage the memory resources required by the algorithm. This topological map is a graph-based structure where each node is agnostic to the type of data stored, enabling the creation of a multilayer mapping procedure. Also, a localization algorithm is implemented, which interacts with the topological map to perform access and search operations. Results show that our approach is aligned with the state-of-the-art regarding localization precision, being able to compute the robot pose in long and challenging trajectories in agriculture. In addition, we show that the topological approach advances the state of the art in memory management. The proposed algorithm requires less memory than the other benchmarked algorithms, and can maintain a constant memory allocation during the entire operation. This constitutes a significant innovation, since our approach opens the possibility for the deployment of complex 3D SLAM algorithms in real-world applications without scale restrictions.
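A graph-based map whose nodes are agnostic to the stored data type, as the abstract describes, can be sketched with a node that holds arbitrary named layers and a graph that supports spatial lookup. This is a toy illustration of the data structure only; class names, layer names, and coordinates are hypothetical, not from VineSLAM.

```python
# Sketch: a topological map where each node anchors a local region and
# stores heterogeneous map layers (semantic, geometric, ...) by name.

class TopoNode:
    def __init__(self, node_id, center):
        self.node_id = node_id
        self.center = center      # (x, y) of this local map in the world frame
        self.layers = {}          # layer name -> arbitrary payload (type-agnostic)

class TopoMap:
    def __init__(self):
        self.nodes = {}           # node_id -> TopoNode
        self.edges = {}           # node_id -> set of neighbour ids

    def add_node(self, node):
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, set())

    def connect(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def lookup(self, x, y):
        """Return the node whose center is closest to (x, y) —
        the node whose layers the localizer would query."""
        return min(self.nodes.values(),
                   key=lambda n: (n.center[0] - x) ** 2 + (n.center[1] - y) ** 2)

m = TopoMap()
a = TopoNode("cell-0", (0.0, 0.0))
a.layers["landmarks"] = [(1.2, 0.4)]            # semantic layer
a.layers["corners"] = [(0.3, 0.1), (0.5, 0.2)]  # geometric feature layer
m.add_node(a)
b = TopoNode("cell-1", (10.0, 0.0))
m.add_node(b)
m.connect("cell-0", "cell-1")
print(m.lookup(9.0, 0.5).node_id)
```

Because only the node(s) near the robot need their layers resident in memory, the rest can stay on disk, which is one way such a structure can keep memory allocation roughly constant regardless of map size.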
2024
Authors
Sarmento, J; dos Santos, FN; Aguiar, AS; Filipe, V; Valente, A;
Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS
Abstract
Human-robot collaboration (HRC) is becoming increasingly important in advanced production systems, such as those used in industries and agriculture. This type of collaboration can contribute to productivity increase by reducing physical strain on humans, which can lead to reduced injuries and improved morale. One crucial aspect of HRC is the ability of the robot to follow a specific human operator safely. To address this challenge, a novel methodology is proposed that employs monocular vision and ultra-wideband (UWB) transceivers to determine the relative position of a human target with respect to the robot. UWB transceivers can track humans carrying UWB tags, but exhibit a significant angular error. To reduce this error, monocular cameras with Deep Learning object detection are used to detect humans. The reduction in angular error is achieved through sensor fusion, combining the outputs of both sensors using a histogram-based filter. This filter projects and intersects the measurements from both sources onto a 2D grid. By combining UWB and monocular vision, a remarkable 66.67% reduction in angular error compared to UWB localization alone is achieved. This approach demonstrates an average processing time of 0.0183 s and an average localization error of 0.14 m when tracking a person walking at an average speed of 0.21 m/s. This novel algorithm holds promise for enabling efficient and safe human-robot collaboration, providing a valuable contribution to the field of robotics.
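The histogram-filter fusion described above — project each sensor's measurement likelihood onto a 2D grid and intersect them cell-wise — can be sketched as follows. All measurement values, noise levels, and grid parameters are illustrative assumptions, not the paper's: UWB is modeled as an accurate range with a noisy bearing, and the camera detection as an accurate bearing with no range.

```python
import math

SIZE, RES = 40, 0.25                      # 10 m x 10 m grid, 0.25 m cells

def gauss(x, mu, sigma):
    """Unnormalized Gaussian likelihood of observing mu when the truth is x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def likelihood_grid(predict, mu, sigma):
    """Project a measurement onto the grid: for each cell, compare the
    predicted measurement at that cell with the observed value mu."""
    grid = [[0.0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            x, y = (j - SIZE // 2) * RES, (i - SIZE // 2) * RES
            grid[i][j] = gauss(predict(x, y), mu, sigma)
    return grid

# UWB: range 3.0 m (tight, sigma 0.2 m) and bearing 0.4 rad (loose, sigma 0.5)
uwb_rng = likelihood_grid(lambda x, y: math.hypot(x, y), 3.0, 0.2)
uwb_brg = likelihood_grid(lambda x, y: math.atan2(y, x), 0.4, 0.5)
# camera detection: bearing 0.1 rad (tight, sigma 0.05), no range information
cam_brg = likelihood_grid(lambda x, y: math.atan2(y, x), 0.1, 0.05)

# intersection = cell-wise product of the three likelihood grids
fused = [[uwb_rng[i][j] * uwb_brg[i][j] * cam_brg[i][j] for j in range(SIZE)]
         for i in range(SIZE)]
i, j = max(((i, j) for i in range(SIZE) for j in range(SIZE)),
           key=lambda ij: fused[ij[0]][ij[1]])
print((j - SIZE // 2) * RES, (i - SIZE // 2) * RES)   # fused (x, y) estimate
```

The fused peak lands near the UWB range arc at the camera's bearing, which is exactly the effect the abstract reports: the camera's tight angular likelihood suppresses the UWB bearing error while the UWB range pins down the distance.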