About

Daniel Queirós da Silva was born in Ermesinde, Porto, Portugal, in 1997. He obtained the M.Sc. degree in Electrical and Computer Engineering from the Faculty of Engineering of the University of Porto (FEUP) in 2020, and the Ph.D. degree in Electrical and Computer Engineering from the University of Trás-os-Montes and Alto Douro (UTAD) in 2024. He is currently a Researcher at INESC Technology and Science (INESC TEC) and an Invited Assistant Lecturer at the School of Engineering of the Polytechnic of Porto (ISEP). His main research interests are perception systems, artificial intelligence, robotics and embedded systems.


Details

  • Name

    Daniel Queirós Silva
  • Role

    Assistant Researcher
  • Since

    1st October 2020
7 Publications

2024

Assessing Soil Ripping Depth for Precision Forestry with a Cost-Effective Contactless Sensing System

Authors
da Silva, DQ; Louro, F; dos Santos, FN; Filipe, V; Sousa, AJ; Cunha, M; Carvalho, JL;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Forest soil ripping is a practice that involves revolving the soil in a forest area to prepare it for planting or sowing operations. Advanced sensing systems may help in this kind of forestry operation by assuring ideal ripping depth and intensity, as these are important aspects with the potential to minimise the environmental impact of forest soil ripping. In this work, a cost-effective contactless system, capable of detecting and mapping soil ripping depth in real time, was developed and tested in the laboratory and in a realistic forest scenario. The proposed system integrates two single-point LiDARs and a GNSS sensor. To evaluate the system, ground-truth data was manually collected in the field during the operation of the machine with a ripping implement. The proposed solution was tested in real conditions, and the results showed that the ripping depth was estimated with minimal error. The accuracy and ripping-depth mapping ability of the low-cost sensors justify their use to support improved soil preparation with machines or robots towards a sustainable forest industry.
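The depth-estimation principle behind this system can be illustrated with a short, hypothetical sketch (the function names, mounting geometry, and coordinates below are assumptions for illustration, not taken from the paper): with two downward-facing single-point LiDARs, one ranging the undisturbed ground and one ranging the bottom of the furrow, the ripping depth is the difference of the two ranges, which can then be tagged with a GNSS fix for mapping.

```python
from dataclasses import dataclass


@dataclass
class DepthSample:
    """One geo-referenced ripping-depth measurement."""
    lat: float
    lon: float
    depth_m: float


def ripping_depth(range_ground_m: float, range_furrow_m: float) -> float:
    """Depth of the furrow relative to the undisturbed soil surface.

    Assumes both single-point LiDARs point straight down from the same
    mounting height, so the depth is simply the difference of the ranges.
    """
    return range_furrow_m - range_ground_m


def geo_tag(lat: float, lon: float, r_ground_m: float, r_furrow_m: float) -> DepthSample:
    """Pair a depth estimate with the GNSS position at which it was taken."""
    return DepthSample(lat, lon, ripping_depth(r_ground_m, r_furrow_m))
```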

2024

YOLO-Based Tree Trunk Types Multispectral Perception: A Two-Genus Study at Stand-Level for Forestry Inventory Management Purposes

Authors
da Silva, DQ; Dos Santos, FN; Filipe, V; Sousa, AJ; Pires, EJS;

Publication
IEEE ACCESS

Abstract
Stand-level forest tree species perception and identification are needed for monitoring-related operations, being crucial for better biodiversity and inventory management in forested areas. This paper contributes to this knowledge domain by researching multispectral perception of tree trunk types at stand level. YOLOv5 and YOLOv8, Convolutional Neural Networks specialised in object detection and segmentation, were trained to detect and segment two tree trunk genera (pine and eucalyptus) using datasets collected in a forest region in Portugal. The datasets comprise only two categories, corresponding to the two tree genera. They were manually annotated for object detection and segmentation with RGB and RGB-NIR images, and are publicly available. The Small variant of YOLOv8 was the best model at the detection and segmentation tasks, achieving an F1 measure above 87% and 62%, respectively. The findings of this study suggest that the use of extended spectra, including Visible and Near Infrared, produces superior results. The trained models can be integrated into forest tractors and robots to monitor forest genera across different spectra, which can assist forest managers in controlling their forest stands.
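Detection F1 figures such as the ones reported here come from matching predicted boxes against ground-truth annotations. A minimal, generic sketch of that kind of evaluation (greedy IoU matching at a fixed threshold; not the authors' exact protocol) looks like:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def f1_at_iou(preds, gts, thr=0.5):
    """F1 score with greedy one-to-one matching at an IoU threshold."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, 0.0
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None and best_iou >= thr:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp   # unmatched predictions
    fn = len(gts) - tp     # missed ground truths
    total = 2 * tp + fp + fn
    return 2 * tp / total if total else 0.0
```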

2024

Enhancing Grapevine Node Detection to Support Pruning Automation: Leveraging State-of-the-Art YOLO Detection Models for 2D Image Analysis

Authors
Oliveira, F; da Silva, DQ; Filipe, V; Pinho, TM; Cunha, M; Cunha, JB; dos Santos, FN;

Publication
Sensors

Abstract
Automating pruning tasks entails overcoming several challenges, encompassing not only robotic manipulation but also environment perception and detection. To achieve efficient pruning, robotic systems must accurately identify the correct cutting points. A possible method to define these points is to choose the cutting location based on the number of nodes present on the targeted cane. For this purpose, in grapevine pruning, it is necessary to correctly identify the nodes present on the primary canes of the grapevines. In this paper, a novel method of node detection in grapevines is proposed with four distinct state-of-the-art versions of the YOLO detection model: YOLOv7, YOLOv8, YOLOv9 and YOLOv10. These models were trained on a public dataset with images containing artificial backgrounds and afterwards validated on different cultivars of grapevines from two distinct Portuguese viticulture regions with cluttered backgrounds. This allowed us to evaluate the robustness of the algorithms in detecting nodes in diverse environments, compare the performance of the YOLO models used, and create a publicly available dataset of grapevines obtained in Portuguese vineyards for node detection. Overall, all the models used were capable of correct node detection in images of grapevines from the three distinct datasets. Considering the trade-off between accuracy and inference speed, the YOLOv7 model proved to be the most robust at detecting nodes in 2D images of grapevines, achieving F1-Score values between 70% and 86.5% with inference times of around 89 ms for an input size of 1280 × 1280 px. Considering these results, this work contributes an efficient approach to real-time node detection for further implementation in an autonomous robotic pruning system.
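The cutting rule described in this abstract, choosing a cut location from the nodes detected on a cane, can be sketched with a small, hypothetical helper; the two-node retention count and the midpoint rule are assumptions for illustration, not values from the paper:

```python
def cutting_point(node_positions_m, nodes_to_keep=2):
    """Suggest a cut location along a cane from detected node positions.

    node_positions_m: positions of detected nodes along the cane, measured
    from its base in metres. Returns the midpoint between the last node to
    keep and the first node to remove, or None when the cane has too few
    nodes for a cut with the requested retention count.
    """
    nodes = sorted(node_positions_m)
    if len(nodes) <= nodes_to_keep:
        return None
    last_kept = nodes[nodes_to_keep - 1]
    first_removed = nodes[nodes_to_keep]
    return (last_kept + first_removed) / 2
```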

2023

Tree Trunks Cross-Platform Detection Using Deep Learning Strategies for Forestry Operations

Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
To tackle wildfires and improve forest biomass management, cost-effective and reliable mowing and pruning robots are required. However, visual perception systems for forestry robotics need to be researched and explored to achieve safe solutions. This paper presents two main contributions: an annotated dataset and a benchmark between edge-computing hardware and deep learning models. The dataset is composed of nearly 5,400 annotated images. It enabled the training of nine object detectors: four SSD MobileNets, one EfficientDet, three YOLO-based detectors and YOLOR. These detectors were deployed and tested on three edge-computing platforms (TPU, CPU and GPU), and evaluated in terms of detection precision and inference time. The results showed that YOLOR was the best trunk detector, achieving nearly 90% F1 score and an average inference time of 13.7 ms on GPU. This work will favour the development of advanced vision perception systems for robotics in forestry operations.
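Benchmarks like this one ultimately reduce to a model-selection step over accuracy and latency. A tiny, generic sketch of that step (the tie-breaking rule and the example numbers below are illustrative assumptions, not the paper's selection procedure):

```python
def best_detector(results):
    """Pick the detector with the highest F1; break ties by lower latency.

    results: mapping of model name -> (f1, inference_time_ms).
    """
    return max(results, key=lambda name: (results[name][0], -results[name][1]))
```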

2023

Computer Vision and Deep Learning as Tools for Leveraging Dynamic Phenological Classification in Vegetable Crops

Authors
Rodrigues, L; Magalhaes, SA; da Silva, DQ; dos Santos, FN; Cunha, M;

Publication
AGRONOMY-BASEL

Abstract
The efficiency of agricultural practices depends on the timing of their execution. Environmental conditions, such as rainfall, and crop-related traits, such as plant phenology, determine the success of practices such as irrigation. Moreover, plant phenology, the seasonal timing of biological events (e.g., cotyledon emergence), is strongly influenced by genetic, environmental, and management conditions. Therefore, assessing the timing of crops' phenological events and their spatiotemporal variability can improve decision making, allowing the thorough planning and timely execution of agricultural operations. Conventional techniques for crop phenology monitoring, such as field observations, can be error-prone, labour-intensive, and inefficient, particularly for crops with rapid growth and poorly defined phenophases, such as vegetable crops. Thus, developing an accurate phenology monitoring system for vegetable crops is an important step towards sustainable practices. This paper evaluates the ability of computer vision (CV) techniques coupled with deep learning (DL) (CV_DL) as tools for the dynamic phenological classification of multiple vegetable crops at the subfield level, i.e., within the plot. Three DL models from the Single Shot Multibox Detector (SSD) architecture (SSD Inception v2, SSD MobileNet v2, and SSD ResNet 50) and one from the You Only Look Once (YOLO) architecture (YOLO v4) were benchmarked on a custom dataset containing images of eight vegetable crops between emergence and harvest. The proposed benchmark includes the individual pairing of each model with the images of each crop. On average, YOLO v4 performed better than the SSD models, reaching an F1-Score of 85.5%, a mean average precision of 79.9%, and a balanced accuracy of 87.0%. In addition, YOLO v4 was tested with all available data, approaching a real mixed cropping system. Hence, the same model can classify multiple vegetable crops across the growing season, allowing the accurate mapping of phenological dynamics. This study is the first to evaluate the potential of CV_DL for vegetable crops' phenological research, a pivotal step towards automating decision support systems for precision horticulture.
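Balanced accuracy, one of the metrics reported in this abstract, is the mean of per-class recalls and is easy to compute from a confusion matrix; a minimal sketch (generic metric code, not the authors' evaluation pipeline, with an illustrative two-class matrix in the usage below):

```python
def balanced_accuracy(confusion):
    """Mean of per-class recalls.

    confusion[i][j] counts samples of true class i predicted as class j.
    Classes with no samples are skipped so they do not distort the mean.
    """
    recalls = [row[i] / sum(row) for i, row in enumerate(confusion) if sum(row)]
    return sum(recalls) / len(recalls)
```

For example, a two-class matrix [[8, 2], [1, 9]] has per-class recalls of 0.8 and 0.9, giving a balanced accuracy of 0.85.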