2023
Authors
Pinheiro, I; Aguiar, A; Figueiredo, A; Pinho, T; Valente, A; Santos, F;
Publication
APPLIED SCIENCES-BASEL
Abstract
Currently, Unmanned Aerial Vehicles (UAVs) are considered in the development of various applications in agriculture, which has led to the expansion of the agricultural UAV market. However, Nano Aerial Vehicles (NAVs) are still underutilised in agriculture. NAVs are characterised by a maximum wing length of 15 centimetres and a weight of less than 50 g. Due to their physical characteristics, NAVs have the advantage of being able to approach and perform tasks with more precision than conventional UAVs, making them suitable for precision agriculture. This work aims to contribute an open-source solution known as the Nano Aerial Bee (NAB) to enable further research and development on the use of NAVs in an agricultural context. The purpose of the NAB is to mimic and assist bees in the context of pollination. We designed this open-source solution by taking into account the existing state-of-the-art solutions and the requirements of pollination activities. This paper presents the relevant background and work carried out in this area by analysing papers on the topic of NAVs. The development of this prototype is rather complex given the interactions between the different hardware components and the need to achieve autonomous flight capable of pollination. We describe and discuss these challenges in this work. Besides the open-source NAB solution, we train three different versions of YOLO (YOLOv5, YOLOv7, and YOLOR) on an original dataset (the Flower Detection Dataset), containing 206 images of a group of eight flowers, and on a public dataset (the TensorFlow Flower Dataset), which had to be annotated for detection (yielding the TensorFlow Flower Detection Dataset). The results of the models trained on the Flower Detection Dataset are satisfactory, with YOLOv7 and YOLOR achieving the best performance: 98% precision, 99% recall, and a 98% F1 score. The performance of these models is then evaluated on the TensorFlow Flower Detection Dataset to test their robustness.
The three YOLO models are also trained on the TensorFlow Flower Detection Dataset to better understand the results. In this case, YOLOR obtains the most promising results, with 84% precision, 80% recall, and an 82% F1 score. The detections obtained with the models trained on the Flower Detection Dataset are used for NAB guidance: the relative position of a detected flower in the image defines the command the NAB executes.
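As a quick sanity check on the reported metrics, the F1 score is the harmonic mean of precision and recall; the sketch below plugs in the rounded values quoted in the abstract (a minimal illustration, not code from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Flower Detection Dataset (YOLOv7 / YOLOR): P = 0.98, R = 0.99 -> F1 ~ 0.98
print(round(f1_score(0.98, 0.99), 2))  # -> 0.98
# TensorFlow Flower Detection Dataset (YOLOR): P = 0.84, R = 0.80 -> F1 ~ 0.82
print(round(f1_score(0.84, 0.80), 2))  # -> 0.82
```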
2023
Authors
Martins, JJ; Silva, M; Santos, F;
Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1
Abstract
To produce more food and tackle labour scarcity, agriculture needs safer robots for repetitive and unsafe tasks (such as spraying). The interaction between humans and robots presents several challenges: ensuring certifiably safe human-robot collaboration and a reliable system that does not damage goods or plants, in an environment that is mostly dynamic due to constant change. A well-known solution to this problem is the implementation of real-time collision avoidance systems. This paper presents a global overview of state-of-the-art methods implemented in the agricultural environment that ensure human-robot collaboration according to recognised industry standards. To complement this overview, we address the gaps and possible specifications that need to be clarified in future standards, taking into consideration the human-machine safety requirements of agricultural autonomous mobile robots.
2023
Authors
Rodrigues, L; Magalhaes, SA; da Silva, DQ; dos Santos, FN; Cunha, M;
Publication
AGRONOMY-BASEL
Abstract
The efficiency of agricultural practices depends on the timing of their execution. Environmental conditions, such as rainfall, and crop-related traits, such as plant phenology, determine the success of practices such as irrigation. Moreover, plant phenology, the seasonal timing of biological events (e.g., cotyledon emergence), is strongly influenced by genetic, environmental, and management conditions. Therefore, assessing the timing of crops' phenological events and their spatiotemporal variability can improve decision making, allowing the thorough planning and timely execution of agricultural operations. Conventional techniques for crop phenology monitoring, such as field observations, can be error-prone, labour-intensive, and inefficient, particularly for crops with rapid growth and poorly defined phenophases, such as vegetable crops. Thus, developing an accurate phenology monitoring system for vegetable crops is an important step towards sustainable practices. This paper evaluates the ability of computer vision (CV) techniques coupled with deep learning (DL) (CV_DL) to serve as tools for the dynamic phenological classification of multiple vegetable crops at the subfield level, i.e., within the plot. Three DL models from the Single Shot Multibox Detector (SSD) architecture (SSD Inception v2, SSD MobileNet v2, and SSD ResNet 50) and one from the You Only Look Once (YOLO) architecture (YOLO v4) were benchmarked on a custom dataset containing images of eight vegetable crops between emergence and harvest. The proposed benchmark includes the individual pairing of each model with the images of each crop. On average, YOLO v4 performed better than the SSD models, reaching an F1-score of 85.5%, a mean average precision of 79.9%, and a balanced accuracy of 87.0%. In addition, YOLO v4 was tested with all available data, approaching a real mixed cropping system.
Hence, the same model can classify multiple vegetable crops across the growing season, allowing the accurate mapping of phenological dynamics. This study is the first to evaluate the potential of CV_DL for vegetable crops' phenological research, a pivotal step towards automating decision support systems for precision horticulture.
2023
Authors
Pinheiro, I; Moreira, G; da Silva, DQ; Magalhaes, S; Valente, A; Oliveira, PM; Cunha, M; Santos, F;
Publication
AGRONOMY-BASEL
Abstract
The world wine sector is a multi-billion dollar industry with a wide range of economic activities. Therefore, it becomes crucial to monitor the grapevine, as this allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (a preventive approach), since biophysical lesions manifest first on the leaves. However, this does not exclude the possibility of biophysical lesions manifesting on the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect grape bunches and classify them as healthy or damaged according to the number of berries with biophysical lesions. Two datasets were created and made publicly available, with original images and manual annotations, to identify the complexity gap between the detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes: the Grapevine Bunch Detection Dataset uses the Bunch class, and the Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. Regarding grape bunch detection, the three trained models obtained promising results, highlighting YOLOv7 with 77% mAP and a 94% F1-score. In the task of detecting and identifying the state of grape bunches, the three models obtained similar results, with YOLOv5 achieving the best ones: an mAP of 72% and an F1-score of 92%.
2023
Authors
Tinoco, V; Silva, MF; Santos, FN; Magalhaes, S; Morais, R;
Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC
Abstract
The increasing world population, growing need for agricultural products, and labour shortages have driven the growth of robotics in agriculture. Tasks such as fruit harvesting require extensive hours of work during harvest periods and can be physically exhausting. Autonomous robots bring more efficiency to agricultural tasks, with the possibility of working continuously. This paper proposes a stackable 3-DoF SCARA manipulator for tomato harvesting. The manipulator uses a custom electronic circuit to control DC motors with a worm gear at each joint, and uses a camera and a Tensor Processing Unit (TPU) for fruit detection. Cascaded PID controllers control the joints, with magnetic encoders for rotational feedback and a time-of-flight sensor for prismatic movement feedback. Tomatoes are detected using an algorithm that finds regions of interest where red colour is present and sends these regions to an image classifier that evaluates whether or not a tomato is present. With this, the system calculates the position of the tomato using stereo vision obtained from a monocular camera combined with the prismatic movement of the manipulator. As a result, the manipulator was able to position itself very close to the target in less than 3 seconds, from where an end-effector could adjust its position for picking.
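The stereo-from-monocular idea can be illustrated with standard triangulation: two images taken before and after a known prismatic translation act as a stereo pair with that translation as the baseline, so depth follows from the pixel disparity of the target between the two views. A minimal sketch, where the focal length, baseline, and disparity values are illustrative assumptions, not figures from the paper:

```python
def depth_from_prismatic_baseline(focal_px: float, baseline_m: float,
                                  disparity_px: float) -> float:
    """Triangulate depth from two monocular views separated by a known translation.

    focal_px:     camera focal length, in pixels
    baseline_m:   prismatic displacement between the two views, in metres
    disparity_px: shift of the target between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("target must shift between the two views")
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 600 px, 5 cm prismatic baseline, 75 px disparity
print(depth_from_prismatic_baseline(600.0, 0.05, 75.0))  # -> 0.4 (metres)
```

The same pinhole relation underlies conventional two-camera stereo; here the manipulator's own prismatic joint supplies the baseline, which is why a single camera suffices.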
2023
Authors
Silva, FM; Queirós, C; Pinho, T; Boaventura, J; Santos, F; Barroso, TG; Pereira, MR; Cunha, M; Martins, RC;
Publication
SENSORS AND ACTUATORS B-CHEMICAL
Abstract
Nutrient quantification in hydroponic systems is essential. Reagent-less spectral quantification of nitrogen, phosphate and potassium faces challenges in accessing information-rich spectral signals and unscrambling interference from each constituent. Herein, we introduce information equivalence between spectra and sample composition, enabling extraction of consistent covariance to isolate nutrient-specific spectral information (N, P or K) in Hoagland nutrient solutions using orthogonal covariance modes. Chemometrics methods quantify nitrogen and potassium, but not phosphate. Orthogonal covariance modes, however, enable quantification of all three nutrients: nitrogen (N) with R = 0.9926 and a standard error of 17.22 ppm, phosphate (P) with R = 0.9196 and a standard error of 63.62 ppm, and potassium (K) with R = 0.9975 and a standard error of 9.51 ppm. Including pH information significantly improves phosphate quantification (R = 0.9638, standard error: 43.16 ppm). The results demonstrate a direct relationship between spectra and Hoagland nutrient solution information, preserving NPK orthogonality and supporting orthogonal covariance modes. These modes enhance detection sensitivity by maximising the information of the constituent being quantified while minimising interference from the others. Orthogonal covariance modes predicted nitrogen accurately (R = 0.9474, standard error: 29.95 ppm). Phosphate and potassium showed strong interference from contaminants, but most extrapolation samples were correctly diagnosed above the reference interval (83.26%). Despite potassium features lying outside the knowledge base, a significant correlation was obtained (R = 0.6751). Orthogonal covariance modes use unique N, P or K information for quantification, not spurious correlations due to fertilizer composition. This approach minimises interference during extrapolation to complex samples, a crucial step towards resilient nutrient management in hydroponics using spectroscopy.
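The idea of maximising the information of the constituent being quantified while suppressing the others can be illustrated with a plain orthogonal projection: removing from a mixture spectrum the component lying in the span of the interfering constituents' spectra. This is a conceptual NumPy sketch with synthetic spectra, not the authors' orthogonal covariance mode algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = 200

# Synthetic pure-component spectra for N, P, K (illustrative only)
s_n, s_p, s_k = rng.random((3, wavelengths))

# Mixture spectrum: 2*N + 1*P + 3*K (arbitrary concentrations)
mix = 2.0 * s_n + 1.0 * s_p + 3.0 * s_k

# Orthogonal projector onto the span of the interferents (P, K)
interferents = np.column_stack([s_p, s_k])          # shape (wavelengths, 2)
proj = interferents @ np.linalg.pinv(interferents)  # projector onto span{P, K}

# Remove the interferent subspace, leaving only the N-specific signal
n_only = mix - proj @ mix

# Regress on the orthogonalised N spectrum to recover the N concentration
s_n_orth = s_n - proj @ s_n
conc_n = (n_only @ s_n_orth) / (s_n_orth @ s_n_orth)
print(round(conc_n, 6))  # -> 2.0, the N coefficient used in the mixture
```

Because the P and K contributions are annihilated by the projection, the regression recovers the nitrogen coefficient exactly in this noise-free toy; the covariance-mode formulation in the paper plays an analogous interference-suppressing role on measured spectra.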