2022
Authors
Magalhaes, SA; Moreira, AP; dos Santos, FN; Dias, J;
Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS
Abstract
This paper studies the state-of-the-art of active perception solutions for manipulation in agriculture and suggests a possible architecture for an active perception system for harvesting in agriculture. Researching and developing robots for agricultural contexts is a challenge, particularly for harvesting and pruning applications. These applications normally rely on mobile manipulators, and their cognitive component poses many challenges. Active perception systems look like a reasonable approach for robust and economical fruit assessment. This systematic literature review focuses on the topic of active perception for fruit-harvesting robots. The search was performed in five different databases and returned 1034 publications, of which only 195 were considered for inclusion in this review after analysis. We conclude that most research is mainly about fruit detection and segmentation in two-dimensional space, using both classic computer vision strategies and deep learning models. For harvesting, multiple viewpoints and visual servoing are the most commonly used strategies. The research on these last topics does not yet look robust and requires further analysis and improvements for better results in fruit harvesting.
2022
Authors
Oliveira, F; Tinoco, V; Magalhaes, S; Santos, FN; Silva, MF;
Publication
2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)
Abstract
There has been an increase in the variety of harvesting manipulators. However, the lack of efficiency of these manipulators sometimes makes it difficult to compete with harvesting tasks performed by humans. One of the key components of these manipulators is the end-effector, responsible for picking the fruits from the plant. This paper studies and compares different types of end-effectors used by harvesting manipulators. The objective is to analyse their advantages and limitations to better understand the requirements for designing an end-effector that improves the performance of a custom Selective Compliance Assembly Robot Arm (SCARA) in the harvest of different types of fruits.
2022
Authors
Tinoco, V; Silva, MF; Santos, FN; Valente, A; Rocha, LF; Magalhaes, SA; Santos, LC;
Publication
INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION
Abstract
Purpose: The motivation for robotics research in the agricultural field has been sparked by the increasing world population and the decreasing availability of agricultural labor. This paper aims to analyze the state of the art of pruning and harvesting manipulators used in agriculture. Design/methodology/approach: A search was performed for papers matching specific keywords. Ten papers were selected based on a set of attributes that made them adequate for review. Findings: The pruning manipulators were used in two different scenarios: grapevines and apple trees. These manipulators showed that a light-controlled environment can reduce visual errors and that prismatic joints on the manipulator are advantageous for obtaining a higher reach. The harvesting manipulators were used for three types of fruits: strawberries, tomatoes and apples. These manipulators revealed that different kinematic configurations are required for different kinds of end-effectors, as some of these tools only require movement along the horizontal axis while others must reach the target with a broad range of orientations. Originality/value: This work serves to reduce the gap in the literature regarding agricultural manipulators and will support the development of novel solutions related to agricultural robotic grasping and manipulation.
2023
Authors
Magalhaes, SC; Castro, L; Rodrigues, L; Padilha, TC; de Carvalho, F; dos Santos, FN; Pinho, T; Moreira, G; Cunha, J; Cunha, M; Silva, P; Moreira, AP;
Publication
IEEE SENSORS JOURNAL
Abstract
Several thousand grapevine varieties exist, with even more naming identifiers. Adequate specialized labor is not available for the proper classification or identification of grapevines, making the value of commercial vines uncertain. Traditional methods, such as genetic analysis or ampelometry, are time-consuming, expensive, and often require expert skills that are even rarer. New vision-based systems benefit from advanced and innovative technology and can be used by nonexperts in ampelometry. To this end, deep learning (DL) and machine learning (ML) approaches have been successfully applied for classification purposes. This work extends the state of the art by applying digital ampelometry techniques to a larger set of grapevine varieties. We benchmarked MobileNet v2, ResNet-34, and VGG-11-BN DL classifiers to assess their ability for digital ampelography. In our experiment, all the models could identify the vines' varieties from leaf images with a weighted F1 score higher than 92%.
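A minimal, hypothetical sketch of the kind of benchmark this abstract describes: the three named backbones (MobileNet v2, ResNet-34, VGG-11-BN) are instantiated with torchvision, their classification heads replaced for an assumed number of grapevine varieties, and a weighted F1 score computed on a validation folder. The dataset path, variety count, and the loading of already fine-tuned weights are illustrative assumptions, not the authors' pipeline; the training loop is omitted.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import f1_score

NUM_VARIETIES = 12       # assumed number of grapevine varieties
DATA_DIR = "leaves/val"  # assumed ImageFolder layout: one sub-folder per variety

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def build(name: str) -> nn.Module:
    # Instantiate one of the benchmarked backbones and replace its head.
    if name == "mobilenet_v2":
        m = models.mobilenet_v2(weights="DEFAULT")
        m.classifier[1] = nn.Linear(m.last_channel, NUM_VARIETIES)
    elif name == "resnet34":
        m = models.resnet34(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, NUM_VARIETIES)
    else:  # "vgg11_bn"
        m = models.vgg11_bn(weights="DEFAULT")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_VARIETIES)
    return m.eval()  # fine-tuned weights would be loaded here in practice

@torch.no_grad()
def weighted_f1(model: nn.Module, loader: DataLoader) -> float:
    # Collect predictions over the validation set and score them.
    preds, labels = [], []
    for x, y in loader:
        preds.extend(model(x).argmax(dim=1).tolist())
        labels.extend(y.tolist())
    return f1_score(labels, preds, average="weighted")

if __name__ == "__main__":
    loader = DataLoader(datasets.ImageFolder(DATA_DIR, preprocess), batch_size=32)
    for name in ("mobilenet_v2", "resnet34", "vgg11_bn"):
        print(name, weighted_f1(build(name), loader))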
2023
Authors
Magalhaes, SC; dos Santos, FN; Machado, P; Moreira, AP; Dias, J;
Publication
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
Abstract
Purpose: Visual perception enables robots to perceive the environment. Visual data are processed using computer vision algorithms that are usually time-expensive and require powerful devices for real-time processing, which is unfeasible for open-field robots with limited energy. This work benchmarks the performance of different heterogeneous platforms for real-time object detection across three architectures: embedded GPU - Graphical Processing Units (such as the NVIDIA Jetson Nano 2 GB and 4 GB, and the NVIDIA Jetson TX2), TPU - Tensor Processing Unit (such as the Coral Dev Board TPU), and DPU - Deep Learning Processor Unit (such as in the AMD-Xilinx ZCU104 Development Board and the AMD-Xilinx Kria KV260 Starter Kit). Methods: The authors used RetinaNet ResNet-50 fine-tuned on the natural VineSet dataset. The trained model was then converted and compiled into target-specific hardware formats to improve execution efficiency. Conclusions and Results: The platforms were assessed in terms of performance on the evaluation metrics and efficiency (inference time). Graphical Processing Units (GPUs) were the slowest devices, running at 3 FPS to 5 FPS, and Field Programmable Gate Arrays (FPGAs) were the fastest, running at 14 FPS to 25 FPS. The Tensor Processing Unit (TPU) brought no relevant efficiency gain, performing similarly to the NVIDIA Jetson TX2. The TPU and GPU are the most power-efficient, consuming about 5 W. The performance differences across devices on the evaluation metrics are negligible, with an F1 of about 70% and a mean Average Precision (mAP) of about 60%.
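As an illustration of the inference-time side of such a benchmark (not the authors' conversion or compilation toolchain), the following sketch loads torchvision's generic COCO-pretrained RetinaNet ResNet-50, runs repeated forward passes on a dummy image, and reports an approximate FPS figure. The input resolution, warm-up count, and iteration count are assumptions.

import time
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# Generic COCO-pretrained weights stand in for the VineSet fine-tuned model.
model = retinanet_resnet50_fpn(weights="DEFAULT").eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

dummy = [torch.rand(3, 640, 640, device=device)]  # assumed input resolution

with torch.no_grad():
    for _ in range(5):  # warm-up passes before timing
        model(dummy)
    iters = 50
    start = time.perf_counter()
    for _ in range(iters):
        model(dummy)
    elapsed = time.perf_counter() - start

print(f"~{iters / elapsed:.1f} FPS on {device}")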
2023
Authors
Rodrigues, L; Magalhaes, SA; da Silva, DQ; dos Santos, FN; Cunha, M;
Publication
AGRONOMY-BASEL
Abstract
The efficiency of agricultural practices depends on the timing of their execution. Environmental conditions, such as rainfall, and crop-related traits, such as plant phenology, determine the success of practices such as irrigation. Moreover, plant phenology, the seasonal timing of biological events (e.g., cotyledon emergence), is strongly influenced by genetic, environmental, and management conditions. Therefore, assessing the timing of crops' phenological events and their spatiotemporal variability can improve decision making, allowing the thorough planning and timely execution of agricultural operations. Conventional techniques for crop phenology monitoring, such as field observations, can be prone to error, labour-intensive, and inefficient, particularly for crops with rapid growth and poorly defined phenophases, such as vegetable crops. Thus, developing an accurate phenology monitoring system for vegetable crops is an important step towards sustainable practices. This paper evaluates the ability of computer vision (CV) techniques coupled with deep learning (DL) (CV_DL) as tools for the dynamic phenological classification of multiple vegetable crops at the subfield level, i.e., within the plot. Three DL models from the Single Shot Multibox Detector (SSD) architecture (SSD Inception v2, SSD MobileNet v2, and SSD ResNet 50) and one from the You Only Look Once (YOLO) architecture (YOLO v4) were benchmarked on a custom dataset containing images of eight vegetable crops between emergence and harvest. The proposed benchmark includes the individual pairing of each model with the images of each crop. On average, YOLO v4 performed better than the SSD models, reaching an F1-Score of 85.5%, a mean average precision of 79.9%, and a balanced accuracy of 87.0%. In addition, YOLO v4 was tested with all available data, approaching a real mixed cropping system. Hence, the same model can classify multiple vegetable crops across the growing season, allowing the accurate mapping of phenological dynamics. This study is the first to evaluate the potential of CV_DL for vegetable crops' phenological research, a pivotal step towards automating decision support systems for precision horticulture.
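For the evaluation metrics reported above, a small illustrative snippet (not the paper's pipeline) shows how a weighted F1 score and balanced accuracy can be computed with scikit-learn from predicted versus true phenological stages derived from the detections; the stage names below are made up for the example.

from sklearn.metrics import balanced_accuracy_score, f1_score

# Made-up stage labels; in practice these would come from the detector's output.
y_true = ["emergence", "vegetative", "flowering", "harvest", "vegetative"]
y_pred = ["emergence", "vegetative", "vegetative", "harvest", "vegetative"]

print("F1 (weighted):    ", f1_score(y_true, y_pred, average="weighted"))
print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))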