2020
Authors
Santos, LC; Aguiar, AS; Santos, FN; Valente, A; Petry, M;
Publication
ROBOTICS
Abstract
Robotics will significantly impact large sectors of the economy with relatively low productivity, such as Agri-Food production. Deploying agricultural robots on the farm is still a challenging task. When it comes to localising the robot, there is a need for a preliminary map, which is obtained from a first robot visit to the farm. Mapping is a semi-autonomous task that requires a human operator to drive the robot throughout the environment using a control pad. Visual and geometric features are used by Simultaneous Localisation and Mapping (SLAM) algorithms to model and recognise places, and to track the robot's motion. In agricultural fields, this represents a time-consuming operation. This work proposes a novel solution, called AgRoBPP-bridge, to autonomously extract occupancy grid and topological maps from satellite images. These preliminary maps are used by the robot on its first visit, reducing the need for human intervention and making the path planning algorithms more efficient. AgRoBPP-bridge consists of two stages: vineyard row detection and topological map extraction. For vineyard row detection, we explored two approaches: one based on a conventional machine learning technique, a Support Vector Machine (SVM) with Local Binary Pattern-based features, and another based on deep learning techniques (ResNet and DenseNet). From the vineyard row detection, we extracted an occupancy grid map and, by applying advanced image processing techniques and the Voronoi diagram concept, we obtained a topological map. Our results demonstrated an overall accuracy higher than 85% for detecting vineyards and free paths for robot navigation. The SVM-based approach demonstrated the best performance in terms of precision and computational resource consumption. AgRoBPP-bridge proves to be a relevant contribution to simplifying the deployment of robots in agriculture.
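As a hedged illustration of the SVM with Local Binary Pattern approach described above, the sketch below classifies grayscale tiles cut from a satellite image as vineyard row or free path. The tile set, labels, LBP parameters, and function names are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch: LBP histogram features + SVM for vineyard row detection.
# Tiles and labels are hypothetical; P, R and the kernel are assumed values.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbours and radius (assumed)

def lbp_histogram(tile: np.ndarray) -> np.ndarray:
    """Summarise a grayscale tile as a normalised histogram of uniform LBP codes."""
    codes = local_binary_pattern(tile, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_row_detector(tiles, labels) -> SVC:
    """tiles: iterable of 2D uint8 arrays; labels: 1 = vineyard row, 0 = free path."""
    X = np.stack([lbp_histogram(t) for t in tiles])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)
```

Classifying every tile of the satellite image with such a model yields a binary vineyard/free-path mask, a natural input for the occupancy grid extraction stage described above.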
2020
Authors
Aguiar, AS; dos Santos, FN; Cunha, JB; Sobreira, H; Sousa, AJ;
Publication
ROBOTICS
Abstract
Research and development of autonomous mobile robotic solutions that can perform several active agricultural tasks (pruning, harvesting, mowing) has been growing. Robots are now used for a variety of tasks such as planting, harvesting, environmental monitoring, and the supply of water and nutrients, among others. To do so, robots need to be able to perform online localization and, if desired, mapping. The most common approach for localization in agricultural applications relies on standalone Global Navigation Satellite System (GNSS)-based systems. However, in many agricultural and forest environments, satellite signals are unavailable or inaccurate, which leads to the need for advanced solutions independent of these signals. Approaches like simultaneous localization and mapping and visual odometry are the most promising solutions to increase localization reliability and availability. This work leads to the main conclusion that few methods can simultaneously achieve the desired goals of scalability, availability, and accuracy, due to the challenges imposed by these harsh environments. In the near future, novel contributions to this field are expected that will help achieve the desired goals, with the development of more advanced techniques based on 3D localization and on semantic and topological mapping. In this context, this work proposes an analysis of the current state of the art of localization and mapping approaches in agriculture and forest environments. Additionally, an overview of the available datasets to develop and test these approaches is provided. Finally, a critical analysis of this research field is performed, with the characterization of the literature using a variety of metrics.
2020
Authors
Santos, LC; de Aguiar, ASP; Santos, FN; Valente, A; Ventura, JB; Sousa, AJ;
Publication
Intelligent Systems and Applications - Proceedings of the 2020 Intelligent Systems Conference, IntelliSys 2020, London, UK, September 3-4, 2020, Volume 1
Abstract
Agricultural robotics is nowadays a complex, challenging, and relevant research topic for the sustainability of our society. Some agricultural environments present harsh conditions for robot operability. In the case of steep-slope vineyards, there are several robotic challenges: terrain irregularities, challenging illumination conditions, and inaccuracy or unavailability of the Global Navigation Satellite System (GNSS). Under these conditions, robot navigation, mapping, and localization become challenging tasks. To perform these tasks safely and accurately, a reliable and advanced navigation stack for robots working in steep-slope vineyards is required. This paper presents the integration of several robotic components for steep-slope robots: path planning aware of the robot's centre of gravity and the terrain slope, occupancy grid map extraction from satellite images, and a localization and mapping procedure based on high-level visual features that remains reliable under GNSS signal blockage or loss.
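The slope- and centre-of-gravity-aware path planning component lends itself to a short sketch. The following is a minimal, assumption-laden illustration (not the paper's implementation): an A* search over a grid of terrain inclinations where cells steeper than a tilt limit are treated as untraversable and the remaining cells are penalised in proportion to their slope.

```python
# Sketch of slope-aware A* over a 2D slope map (values in radians).
# MAX_SLOPE and the penalty weight are assumed, illustrative values.
import heapq
import math

MAX_SLOPE = 0.35  # assumed tilt limit beyond which the robot's CoG is unsafe

def plan(slope, start, goal):
    """slope: 2D list/array of inclinations; start, goal: (row, col) cells."""
    rows, cols = len(slope), len(slope[0])
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if slope[nr][nc] > MAX_SLOPE:  # discard cells unsafe for the CoG
                continue
            g = best[cell] + 1.0 + 5.0 * slope[nr][nc]  # slope penalty (assumed weight)
            if g < best.get((nr, nc), math.inf):
                best[(nr, nc)] = g
                h = abs(goal[0] - nr) + abs(goal[1] - nc)  # admissible Manhattan heuristic
                heapq.heappush(frontier, (g + h, (nr, nc), path + [(nr, nc)]))
    return None  # no safe path exists under the tilt limit
```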
2021
Authors
Aguiar, AS; dos Santos, FN; Sobreira, H; Cunha, JB; Sousa, AJ;
Publication
ROBOTICS AND AUTONOMOUS SYSTEMS
Abstract
Developing safe autonomous robotic applications for outdoor agricultural environments is a research field that still presents many challenges. Simultaneous Localization and Mapping can be crucial to enable the robot to localize itself accurately and, consequently, perform tasks such as crop monitoring and harvesting autonomously. In these environments, robotic localization and mapping systems usually benefit from the high density of visual features. When using filter-based solutions to localize the robot, such environments usually require a high number of particles for accurate performance. These two facts can lead to computationally expensive localization algorithms that are nonetheless intended to run in real time. This work proposes a refinement step for a standard high-dimensional filter-based localization solution, whose novelty lies in downsampling the filter with an online clustering algorithm and applying a scan-match procedure to each cluster. This approach thus allows the use of scan matchers without high computational cost, even in high-dimensional filters. Experiments using real data from an agricultural environment show that this approach improves the Particle Filter's performance in estimating the robot pose. Additionally, results show that this approach can build a precise 3D reconstruction of agricultural environments using visual scans, i.e., 3D scans with RGB information.
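The refinement step described in the abstract can be sketched compactly. In the sketch below, MiniBatchKMeans stands in for the paper's online clustering algorithm, and scan_match is a hypothetical hook (for instance, an ICP alignment of the current scan against the map); both are assumptions, not the authors' code.

```python
# Sketch: downsample a particle filter by clustering particle positions,
# then run one scan-match correction per cluster instead of per particle.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def refine(particles: np.ndarray, weights: np.ndarray, scan_match, k: int = 8):
    """particles: (N, 3) array of (x, y, yaw) poses; weights: (N,) importance weights.
    scan_match: hypothetical callable mapping an (x, y) centre to a corrected pose."""
    km = MiniBatchKMeans(n_clusters=k, n_init=3)
    km.fit(particles[:, :2], sample_weight=weights)  # weighted clustering of positions
    # One matcher call per cluster keeps the cost independent of the particle count.
    return np.array([scan_match(centre) for centre in km.cluster_centers_])
```

The design point exploited here is that the scan matcher, the expensive operation, runs k times rather than N times, with k much smaller than the number of particles.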
2021
Authors
Aguiar, AS; Monteiro, NN; dos Santos, FN; Pires, EJS; Silva, D; Sousa, AJ; Boaventura Cunha, J;
Publication
AGRICULTURE-BASEL
Abstract
The development of robotic solutions for unstructured environments brings several challenges, mainly in developing safe and reliable navigation solutions. Agricultural environments are particularly unstructured and, therefore, challenging for the implementation of robotics. An example of this is mountain vineyards, built on steep-slope hills, which are characterized by satellite signal blockage, terrain irregularities, harsh ground inclinations, and other factors. All of these factors demand precise and reliable navigation algorithms so that robots can operate safely. This work proposes the detection of semantic natural landmarks to be used in Simultaneous Localization and Mapping algorithms. Thus, Deep Learning models were trained and deployed to detect vine trunks. As significant contributions, we made available a novel vine trunk dataset, called VineSet, comprising more than 9000 images and the respective annotations for each trunk. VineSet was used to train state-of-the-art Single Shot Multibox Detector models. Additionally, we deployed these models in an Edge-AI fashion and achieved high-frame-rate execution. Finally, an assisted annotation tool was proposed to ease the process of dataset building and to improve the models incrementally. The experiments show that our trained models can detect trunks with an Average Precision of up to 84.16% and that our assisted annotation tool facilitates the annotation process, even in other areas of agriculture, such as orchards and forests. Additional experiments were performed in which the impact of the amount of training data was evaluated and Transfer Learning was compared with training from scratch, verifying some theoretical assumptions.
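To make the detection setup concrete, here is a hedged inference sketch using torchvision's SSD300 as a stand-in for the paper's trained Single Shot Multibox Detector models; the checkpoint file name, the two-class setup (background plus trunk), and the confidence threshold are assumptions, not the authors' published configuration.

```python
# Sketch: vine trunk detection with an SSD300 (torchvision >= 0.13 API).
import torch
from torchvision.models.detection import ssd300_vgg16

model = ssd300_vgg16(weights=None, num_classes=2)        # background + "trunk" (assumed)
model.load_state_dict(torch.load("vineset_ssd300.pth"))  # hypothetical checkpoint
model.eval()

@torch.no_grad()
def detect_trunks(image: torch.Tensor, score_thr: float = 0.5):
    """image: float tensor (3, H, W) in [0, 1]; returns kept boxes and scores."""
    out = model([image])[0]  # torchvision detectors take a list of images
    keep = out["scores"] > score_thr
    return out["boxes"][keep], out["scores"][keep]
```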
2021
Authors
da Silva, DQ; Aguiar, AS; dos Santos, FN; Sousa, AJ; Rabino, D; Biddoccu, M; Bagagiolo, G; Delmastro, M;
Publication
AGRICULTURE-BASEL
Abstract
Smart and precision agriculture concepts require the farmer to measure all relevant variables continuously and to process this information in order to build better prescription maps and to predict crop yield. These maps feed machinery with variable rate technology to apply the correct amount of products at the right time and place, improving farm profitability. One of the most relevant variables for estimating farm yield is the Leaf Area Index (LAI). Traditionally, this index is obtained from manual measurements or from aerial imagery: the former is time-consuming and the latter requires the use of drones or aerial services. This work presents an optical sensing-based hardware module that can be attached to existing autonomous or guided terrestrial vehicles. During normal operation, the module collects periodic geo-referenced monocular images and laser data. With these data, a proposed processing pipeline, based on open-source software and composed of Structure from Motion, Multi-View Stereo, and point cloud registration stages, can extract the Leaf Area Index and other crop-related features. Additionally, a benchmark of software tools is presented. The hardware module and the pipeline were validated on real data acquired in two vineyards, in Portugal and Italy. A dataset with sensory data collected by the module was made publicly available. Results demonstrated that the system provides reliable and precise data on the surrounding environment and that the pipeline is capable of computing volume and occupancy area from the acquired data.
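As an illustration of the point cloud registration stage of the pipeline described above, the following sketch aligns two clouds with Open3D's ICP; Open3D is a plausible open-source choice here, not necessarily the tool the authors benchmarked, and the file names and correspondence distance are assumptions.

```python
# Sketch: register two point clouds (e.g., from SfM/MVS passes) with ICP.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("pass_1.ply")  # hypothetical reconstruction outputs
target = o3d.io.read_point_cloud("pass_2.ply")

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,  # 5 cm, an assumed value for vineyard scale
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
source.transform(result.transformation)  # bring the clouds into one frame
```

Once the passes share a frame, crop-related quantities such as canopy volume and occupancy area can be computed from the merged cloud.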