
Publications by Filipe Neves Santos

2023

Toward Grapevine Digital Ampelometry Through Vision Deep Learning Models

Authors
Magalhaes, SC; Castro, L; Rodrigues, L; Padilha, TC; de Carvalho, F; dos Santos, FN; Pinho, T; Moreira, G; Cunha, J; Cunha, M; Silva, P; Moreira, AP;

Publication
IEEE SENSORS JOURNAL

Abstract
Several thousand grapevine varieties exist, with even more naming identifiers. Adequate specialized labor is not available for proper classification or identification of grapevines, making the value of commercial vines uncertain. Traditional methods, such as genetic analysis or ampelometry, are time-consuming, expensive, and often require expert skills that are even rarer. New vision-based systems benefit from advanced and innovative technology and can be used by nonexperts in ampelometry. To this end, deep learning (DL) and machine learning (ML) approaches have been successfully applied for classification purposes. This work extends the state of the art by applying digital ampelometry techniques to a larger set of grapevine varieties. We benchmarked MobileNet v2, ResNet-34, and VGG-11-BN DL classifiers to assess their ability for digital ampelography. In our experiment, all the models could identify the vine varieties from their leaves with a weighted F1 score higher than 92%.
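
The benchmarking workflow described above can be sketched with off-the-shelf libraries. The snippet below is a minimal illustration, not the authors' code: the number of varieties, the random tensors standing in for leaf images, and the build_classifier helper are assumptions; only the three backbones (MobileNet v2, ResNet-34, VGG-11-BN) and the weighted F1 metric come from the abstract.

```python
# Minimal benchmarking sketch (not the authors' code): compares the three
# torchvision classifier backbones named in the abstract, scoring each with
# the weighted F1 metric used in the paper.
import torch
from torchvision import models
from sklearn.metrics import f1_score

NUM_VARIETIES = 12  # hypothetical number of grapevine varieties

def build_classifier(name: str, num_classes: int) -> torch.nn.Module:
    """Instantiate a backbone and resize its final layer for our classes."""
    if name == "mobilenet_v2":
        m = models.mobilenet_v2(weights=None)
        m.classifier[-1] = torch.nn.Linear(m.classifier[-1].in_features, num_classes)
    elif name == "resnet34":
        m = models.resnet34(weights=None)
        m.fc = torch.nn.Linear(m.fc.in_features, num_classes)
    else:  # "vgg11_bn"
        m = models.vgg11_bn(weights=None)
        m.classifier[-1] = torch.nn.Linear(m.classifier[-1].in_features, num_classes)
    return m

# Stand-in data: in practice these would be annotated leaf images and labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_VARIETIES, (8,))

for name in ("mobilenet_v2", "resnet34", "vgg11_bn"):
    model = build_classifier(name, NUM_VARIETIES).eval()
    with torch.no_grad():
        preds = model(images).argmax(dim=1)
    print(name, "weighted F1:", f1_score(labels.numpy(), preds.numpy(), average="weighted"))
```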

2023

Tree Trunks Cross-Platform Detection Using Deep Learning Strategies for Forestry Operations

Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
To tackle wildfires and improve forest biomass management, cost-effective and reliable mowing and pruning robots are required. However, visual perception systems for forestry robots still need to be researched and explored to achieve safe solutions. This paper presents two main contributions: an annotated dataset and a benchmark of deep learning models on edge-computing hardware. The dataset is composed of nearly 5,400 annotated images and enabled the training of nine object detectors: four SSD MobileNets, one EfficientDet, three YOLO-based detectors, and YOLOR. These detectors were deployed and tested on three edge-computing platforms (TPU, CPU, and GPU) and evaluated in terms of detection precision and inference time. The results showed that YOLOR was the best trunk detector, achieving an F1 score of nearly 90% and an average inference time of 13.7 ms on the GPU. This work will favour the development of advanced vision perception systems for robotics in forestry operations.
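
The inference-time side of such a benchmark can be illustrated with a simple timing harness. The sketch below is an assumption-laden stand-in for the paper's setup: the specific torchvision SSD variant, the two-class head, and the random input are all hypothetical; only the idea of averaging inference time for an SSD MobileNet detector comes from the abstract.

```python
# Illustrative timing harness (assumed, not the paper's benchmark code):
# measures the average inference time of one SSD MobileNet detector,
# mirroring the inference-time axis of the paper's benchmark.
import time
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

model = ssdlite320_mobilenet_v3_large(weights=None, num_classes=2).eval()  # trunk vs. background (assumed)
image = [torch.randn(3, 320, 320)]  # stand-in for an annotated forest image

with torch.no_grad():
    model(image)  # warm-up pass so one-time initialisation is not timed
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(image)
    avg_ms = (time.perf_counter() - start) / runs * 1000
print(f"average inference time: {avg_ms:.1f} ms")
```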

2023

Benchmarking edge computing devices for grape bunches and trunks detection using accelerated object detection single shot multibox deep learning models

Authors
Magalhaes, SC; dos Santos, FN; Machado, P; Moreira, AP; Dias, J;

Publication
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE

Abstract
Purpose: Visual perception enables robots to perceive the environment. Visual data is processed using computer vision algorithms that are usually time-expensive and require powerful devices to process the visual data in real time, which is unfeasible for open-field robots with limited energy. This work benchmarks the performance of different heterogeneous platforms for object detection in real time, covering three architectures: embedded GPUs (Graphical Processing Units, such as the NVIDIA Jetson Nano 2 GB and 4 GB and the NVIDIA Jetson TX2), TPUs (Tensor Processing Units, such as the Coral Dev Board TPU), and DPUs (Deep Learning Processor Units, such as those in the AMD-Xilinx ZCU104 Development Board and the AMD-Xilinx Kria KV260 Starter Kit). Methods: The authors used RetinaNet with a ResNet-50 backbone, fine-tuned on the natural VineSet dataset. Afterwards, the trained model was converted and compiled into target-specific hardware formats to improve execution efficiency. Conclusions and Results: The platforms were assessed in terms of evaluation-metric performance and efficiency (inference time). The GPUs were the slowest devices, running at 3 FPS to 5 FPS, and the FPGAs were the fastest, running at 14 FPS to 25 FPS. The efficiency of the TPU was unremarkable and similar to that of the NVIDIA Jetson TX2. The TPU and GPU were the most power-efficient, consuming about 5 W. The differences in the evaluation metrics across devices were negligible: all achieved an F1 of about 70% and a mean Average Precision (mAP) of about 60%.
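
The conversion step mentioned in Methods can be sketched as a model export to an exchange format. The workflow below is assumed, not the authors' toolchain: the class count, input resolution, and file name are placeholders, and compiling the exported graph for a specific TPU, DPU, or GPU then requires each vendor's own tools.

```python
# Sketch of the conversion step described in Methods (assumed workflow, not
# the authors' toolchain): a RetinaNet ResNet-50 model is exported to ONNX,
# a common intermediate format before compiling for accelerator targets.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

model = retinanet_resnet50_fpn(weights=None, num_classes=3).eval()  # class count is an assumption
dummy = [torch.randn(3, 640, 640)]  # input resolution is an assumption

# Each accelerator vendor then compiles the exchanged graph with its own
# tools (e.g. the Edge TPU compiler or Vitis AI); those steps are
# device-specific and omitted here.
torch.onnx.export(model, dummy, "retinanet_vineset.onnx", opset_version=11)
```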

2023

Computer Vision and Deep Learning as Tools for Leveraging Dynamic Phenological Classification in Vegetable Crops

Authors
Rodrigues, L; Magalhaes, SA; da Silva, DQ; dos Santos, FN; Cunha, M;

Publication
AGRONOMY-BASEL

Abstract
The efficiency of agricultural practices depends on the timing of their execution. Environmental conditions, such as rainfall, and crop-related traits, such as plant phenology, determine the success of practices such as irrigation. Moreover, plant phenology, the seasonal timing of biological events (e.g., cotyledon emergence), is strongly influenced by genetic, environmental, and management conditions. Therefore, assessing the timing of crops' phenological events and their spatiotemporal variability can improve decision making, allowing the thorough planning and timely execution of agricultural operations. Conventional techniques for crop phenology monitoring, such as field observations, can be prone to error, labour-intensive, and inefficient, particularly for crops with rapid growth and poorly defined phenophases, such as vegetable crops. Thus, developing an accurate phenology monitoring system for vegetable crops is an important step towards sustainable practices. This paper evaluates the ability of computer vision (CV) techniques coupled with deep learning (DL) (CV_DL) as tools for the dynamic phenological classification of multiple vegetable crops at the subfield level, i.e., within the plot. Three DL models from the Single Shot Multibox Detector (SSD) architecture (SSD Inception v2, SSD MobileNet v2, and SSD ResNet 50) and one from the You Only Look Once (YOLO) architecture (YOLO v4) were benchmarked through a custom dataset containing images of eight vegetable crops between emergence and harvest. The proposed benchmark includes the individual pairing of each model with the images of each crop. On average, YOLO v4 performed better than the SSD models, reaching an F1-Score of 85.5%, a mean average precision of 79.9%, and a balanced accuracy of 87.0%. In addition, YOLO v4 was tested with all available data, approaching a real mixed-cropping system. Hence, the same model can classify multiple vegetable crops across the growing season, allowing the accurate mapping of phenological dynamics. This study is the first to evaluate the potential of CV_DL for vegetable crops' phenological research, a pivotal step towards automating decision support systems for precision horticulture.
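
The reported metrics can be reproduced in outline with scikit-learn. The labels below are hypothetical; only the choice of weighted F1-score and balanced accuracy comes from the abstract.

```python
# Hedged sketch of the reported evaluation (illustrative labels, not the
# paper's data): phenophase predictions are scored with a weighted F1-score
# and balanced accuracy, both available in scikit-learn.
from sklearn.metrics import balanced_accuracy_score, f1_score

# Hypothetical per-plant phenophase labels (e.g. 0 = emergence,
# 1 = leaf development, 2 = harvest-ready) and model predictions.
y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 1, 1]

print("F1-score (weighted):", f1_score(y_true, y_pred, average="weighted"))
print("Balanced accuracy:  ", balanced_accuracy_score(y_true, y_pred))
```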

2023

Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions

Authors
Pinheiro, I; Moreira, G; da Silva, DQ; Magalhaes, S; Valente, A; Oliveira, PM; Cunha, M; Santos, F;

Publication
AGRONOMY-BASEL

Abstract
The world wine sector is a multi-billion dollar industry with a wide range of economic activities. Therefore, it becomes crucial to monitor the grapevine because it allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (a preventive approach), since the leaves manifest biophysical lesions first. However, this does not exclude the possibility of biophysical lesions manifesting in the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect and classify grape bunches as healthy or damaged according to the number of berries with biophysical lesions. Two datasets were created and made publicly available, with original images and manual annotations, to characterise the complexity of the detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes: the Grapevine Bunch Detection Dataset uses the Bunch class, and the Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. The three models trained for grape bunch detection obtained promising results, with YOLOv7 standing out with an mAP of 77% and an F1-score of 94%. For the task of detecting and identifying the condition of grape bunches, the three models obtained similar results, with YOLOv5 achieving the best: an mAP of 72% and an F1-score of 92%.
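
Running one of the cited detectors follows the standard YOLOv5 usage pattern. The sketch below loads the stock COCO-pretrained yolov5x6 checkpoint from torch.hub (it downloads weights and code on first use) and runs it on a hypothetical image path; the paper's own models were instead trained on the two grapevine-bunch datasets with the Bunch, OptimalBunch, and DamagedBunch classes.

```python
# Illustrative inference sketch (assumed usage, not the authors' pipeline):
# loads a stock YOLOv5 model from torch.hub and runs it on one image.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5x6")  # generic COCO weights
results = model("vineyard_row.jpg")  # hypothetical image path
results.print()  # per-class detections; with the paper's weights these
                 # would be OptimalBunch / DamagedBunch boxes
```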

2023

2D LiDAR-Based System for Canopy Sensing in Smart Spraying Applications

Authors
Baltazar, AR; Dos Santos, FN; De Sousa, ML; Moreira, AP; Cunha, JB;

Publication
IEEE ACCESS

Abstract
The efficient application of phytochemical products in agriculture is a complex issue that demands optimised sprayers and variable rate technologies, which rely on advanced sensing systems to address challenges such as overdosage and product losses. This work developed a system capable of processing different tree canopy parameters to support precision fruit farming and environmental protection using intelligent spraying methodologies. The system is based on a 2D light detection and ranging (LiDAR) sensor and a Global Navigation Satellite System (GNSS) receiver integrated into a tractor-driven sprayer. The algorithm detects the canopy boundaries, allowing spraying only in the presence of vegetation. The system's performance was evaluated by the spray volume saved compared to a Tree Row Volume (TRV) methodology, and the results showed a 28% reduction in spraying-product overdosage. The second step in this work was calculating and adjusting the amount of liquid to apply based on the tree volume. Considering this parameter, the savings obtained averaged 78% for the right and left rows. The volume of the trees was also monitored in a georeferenced manner through the creation of an occupation grid map, which records the trajectory of the sprayer and the detected trees according to their volume.
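
The canopy-gating idea can be sketched in a few lines of NumPy. The geometry and thresholds below are assumptions, not the authors' algorithm: a 2D scan is converted from polar to Cartesian coordinates, and the nozzle is enabled only when enough returns fall inside an assumed canopy window.

```python
# Minimal sketch of LiDAR-based spray gating (assumed geometry and
# thresholds, not the authors' algorithm): enable spraying only when the
# scan shows enough returns inside the expected canopy window.
import numpy as np

def spray_decision(ranges_m, angles_rad, max_canopy_dist=2.0, min_hits=10):
    """Return True if the 2D scan shows vegetation within the spray window."""
    ranges = np.asarray(ranges_m)
    angles = np.asarray(angles_rad)
    valid = np.isfinite(ranges) & (ranges > 0.1)   # drop dropouts and self-hits
    x = ranges[valid] * np.cos(angles[valid])      # towards the tree row
    y = ranges[valid] * np.sin(angles[valid])      # along the canopy height
    in_window = (x < max_canopy_dist) & (y > 0.0)  # crude canopy window
    return int(in_window.sum()) >= min_hits

# Example: a cluster of close returns at higher scan angles triggers spraying.
angles = np.linspace(0, np.pi / 2, 90)
ranges = np.where(angles > 0.6, 1.5, 8.0)  # near returns above 0.6 rad
print("spray:", spray_decision(ranges, angles))
```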
