Publications

Publications by CRIIS

2023

Quality Control of Casting Aluminum Parts: A Comparison of Deep Learning Models for Filings Detection

Authors
Nascimento, R; Ferreira, T; Rocha, C; Filipe, V; Silva, MF; Veiga, G; Rocha, L;

Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
Quality control inspection systems are crucial and a key factor in maintaining and ensuring the integrity of any product. Quality inspection is a repetitive task; when performed by operators alone, it can be slow and susceptible to failures due to lapses in attention and fatigue. This work focuses on the inspection of high-pressure die-cast aluminum parts for components of the automotive industry. In the present case study, 18,240 parts needed to be reinspected last year, requiring approximately 96 hours, time that could have been spent on other tasks. This article compares four deep learning models: Faster R-CNN, RetinaNet, YOLOv7, and YOLOv7-tiny, to find out which one is best suited to the quality inspection task of detecting metal filings on casting aluminum parts. Since, for this use case, the prototype must be highly intolerant to false negatives, that is, a defective part passing undetected, Faster R-CNN was considered the best-performing model based on a recall value of 96.00%.
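Because the deciding factor here is intolerance to false negatives, recall (the fraction of defective parts actually flagged) is the metric that drove the model choice. A minimal sketch of how recall and precision are computed from detection counts; the counts below are hypothetical, chosen only so the recall matches the reported 96.00%:

```python
# Illustrative precision/recall computation for a defect detector.
# These counts are hypothetical, not taken from the paper.
true_positives = 96    # defective parts correctly flagged
false_negatives = 4    # defective parts missed -- the costly case here
false_positives = 10   # good parts wrongly flagged

recall = true_positives / (true_positives + false_negatives)
precision = true_positives / (true_positives + false_positives)

print(f"Recall:    {recall:.2%}")     # 96.00%
print(f"Precision: {precision:.2%}")  # 90.57%
```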

2023

Nano Aerial Vehicles for Tree Pollination

Authors
Pinheiro, I; Aguiar, A; Figueiredo, A; Pinho, T; Valente, A; Santos, F;

Publication
APPLIED SCIENCES-BASEL

Abstract
Currently, Unmanned Aerial Vehicles (UAVs) are considered in the development of various applications in agriculture, which has led to the expansion of the agricultural UAV market. However, Nano Aerial Vehicles (NAVs) are still underutilised in agriculture. NAVs are characterised by a maximum wing length of 15 centimetres and a weight of less than 50 g. Due to their physical characteristics, NAVs have the advantage of being able to approach and perform tasks with more precision than conventional UAVs, making them suitable for precision agriculture. This work aims to contribute an open-source solution, known as Nano Aerial Bee (NAB), to enable further research and development on the use of NAVs in an agricultural context. The purpose of NAB is to mimic and assist bees in the context of pollination. We designed this open-source solution by taking into account the existing state-of-the-art solutions and the requirements of pollination activities. This paper presents the relevant background and work carried out in this area by analysing papers on the topic of NAVs. The development of this prototype is rather complex given the interactions between the different hardware components and the need to achieve autonomous flight capable of pollination. We describe and discuss these challenges in this work. Besides the open-source NAB solution, we train three different versions of YOLO (YOLOv5, YOLOv7, and YOLOR) on an original dataset (Flower Detection Dataset) containing 206 images of a group of eight flowers and on a public dataset (TensorFlow Flower Dataset), which had to be annotated for detection (TensorFlow Flower Detection Dataset). The models trained on the Flower Detection Dataset achieve satisfactory results, with YOLOv7 and YOLOR performing best at 98% precision, 99% recall, and a 98% F1 score. The robustness of these models is then evaluated on the TensorFlow Flower Detection Dataset. The three YOLO models are also trained on the TensorFlow Flower Detection Dataset to better understand the results. In this case, YOLOR obtains the most promising results, with 84% precision, 80% recall, and an 82% F1 score. The detections obtained with the Flower Detection Dataset models are used for NAB guidance: the flower's relative position in the image defines the NAB's execution command.
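The closing sentence, where the detected flower's relative position in the image drives the NAB's execution command, can be read as a simple visual-servoing rule. A minimal sketch of that idea, assuming a YOLO-style normalised bounding-box centre; the function name, deadband value, and command strings are hypothetical, not from the paper:

```python
# Hypothetical guidance rule: compare the detected flower's normalised
# centre (cx, cy) with the image centre (0.5, 0.5) and emit a command.
def guidance_command(cx: float, cy: float, deadband: float = 0.05) -> str:
    """cx, cy in [0, 1]; (0.5, 0.5) is the image centre."""
    dx, dy = cx - 0.5, cy - 0.5
    if abs(dx) <= deadband and abs(dy) <= deadband:
        return "hold"  # flower centred: approach and pollinate
    if abs(dx) > abs(dy):
        return "move right" if dx > 0 else "move left"
    return "move down" if dy > 0 else "move up"

print(guidance_command(0.82, 0.48))  # -> move right
print(guidance_command(0.51, 0.52))  # -> hold
```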

2023

Safety Standards for Collision Avoidance Systems in Agricultural Robots - A Review

Authors
Martins, JJ; Silva, M; Santos, F;

Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
To produce more food and tackle labour scarcity, agriculture needs safer robots for repetitive and unsafe tasks (such as spraying). Human-robot interaction presents challenges in ensuring a certifiably safe collaboration and a reliable system that does not damage goods or plants, in a context where the environment is highly dynamic due to constant change. A well-known solution to this problem is the implementation of real-time collision avoidance systems. This paper presents a global overview of state-of-the-art methods implemented in the agricultural environment that ensure human-robot collaboration according to recognised industry standards. To complement this overview, we address the gaps and the specifications that need to be clarified in future standards, taking into consideration the human-machine safety requirements of autonomous agricultural mobile robots.

2023

Computer Vision and Deep Learning as Tools for Leveraging Dynamic Phenological Classification in Vegetable Crops

Authors
Rodrigues, L; Magalhaes, SA; da Silva, DQ; dos Santos, FN; Cunha, M;

Publication
AGRONOMY-BASEL

Abstract
The efficiency of agricultural practices depends on the timing of their execution. Environmental conditions, such as rainfall, and crop-related traits, such as plant phenology, determine the success of practices such as irrigation. Moreover, plant phenology, the seasonal timing of biological events (e.g., cotyledon emergence), is strongly influenced by genetic, environmental, and management conditions. Therefore, assessing the timing of crops' phenological events and their spatiotemporal variability can improve decision making, allowing the thorough planning and timely execution of agricultural operations. Conventional techniques for crop phenology monitoring, such as field observations, can be error-prone, labour-intensive, and inefficient, particularly for crops with rapid growth and poorly defined phenophases, such as vegetable crops. Thus, developing an accurate phenology monitoring system for vegetable crops is an important step towards sustainable practices. This paper evaluates the ability of computer vision (CV) techniques coupled with deep learning (DL) (CV_DL) as tools for the dynamic phenological classification of multiple vegetable crops at the subfield level, i.e., within the plot. Three DL models from the Single Shot Multibox Detector (SSD) architecture (SSD Inception v2, SSD MobileNet v2, and SSD ResNet 50) and one from the You Only Look Once (YOLO) architecture (YOLO v4) were benchmarked on a custom dataset containing images of eight vegetable crops between emergence and harvest. The proposed benchmark pairs each model individually with the images of each crop. On average, YOLO v4 performed better than the SSD models, reaching an F1-score of 85.5%, a mean average precision of 79.9%, and a balanced accuracy of 87.0%. In addition, YOLO v4 was tested with all available data, approximating a real mixed cropping system. Hence, the same model can classify multiple vegetable crops across the growing season, allowing the accurate mapping of phenological dynamics. This study is the first to evaluate the potential of CV_DL for phenological research in vegetable crops, a pivotal step towards automating decision support systems for precision horticulture.
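Balanced accuracy, one of the reported metrics, is the mean of the per-class recalls, so a frequent phenophase cannot dominate the score the way it can with plain accuracy. A minimal two-class sketch with made-up counts (not the paper's data):

```python
# Balanced accuracy = mean of per-class recalls.
# Rows are true classes, columns are predictions; counts are illustrative.
confusion = {
    "emergence": {"emergence": 90, "harvest": 10},  # recall 0.90
    "harvest":   {"emergence": 16, "harvest": 64},  # recall 0.80
}

recalls = [row[label] / sum(row.values()) for label, row in confusion.items()]
balanced_accuracy = sum(recalls) / len(recalls)
print(f"Balanced accuracy: {balanced_accuracy:.1%}")  # 85.0%
```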

2023

Deep Learning YOLO-Based Solution for Grape Bunch Detection and Assessment of Biophysical Lesions

Authors
Pinheiro, I; Moreira, G; da Silva, DQ; Magalhaes, S; Valente, A; Oliveira, PM; Cunha, M; Santos, F;

Publication
AGRONOMY-BASEL

Abstract
The world wine sector is a multi-billion-dollar industry with a wide range of economic activities. It is therefore crucial to monitor the grapevine, as this allows a more accurate estimation of the yield and ensures a high-quality end product. The most common way of monitoring the grapevine is through the leaves (a preventive approach), since biophysical lesions manifest first on the leaves. However, this does not exclude the possibility of biophysical lesions manifesting on the grape berries. Thus, this work presents three pre-trained YOLO models (YOLOv5x6, YOLOv7-E6E, and YOLOR-CSP-X) to detect grape bunches and classify them as healthy or damaged by the number of berries with biophysical lesions. Two datasets were created and made publicly available, with original images and manual annotations, to expose the difference in complexity between the detection (bunches) and classification (healthy or damaged) tasks. The datasets use the same 10,010 images with different classes: the Grapevine Bunch Detection Dataset uses the Bunch class, and the Grapevine Bunch Condition Detection Dataset uses the OptimalBunch and DamagedBunch classes. The three models trained for grape bunch detection obtained promising results, with YOLOv7 standing out with 77% mAP and a 94% F1-score. For the task of detecting grape bunches and identifying their condition, the three models obtained similar results, with YOLOv5 achieving the best: an mAP of 72% and an F1-score of 92%.
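The reported mAP values hinge on how a predicted bunch is matched to an annotated one, which is done with intersection over union (IoU). A minimal sketch of the standard IoU computation for axis-aligned boxes; this is the generic formulation, not code from the paper:

```python
# Standard IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection is typically counted as a true positive when IoU >= 0.5.
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.14 -> no match
```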

2023

Design and Control Architecture of a Triple 3 DoF SCARA Manipulator for Tomato Harvesting

Authors
Tinoco, V; Silva, MF; Santos, FN; Magalhaes, S; Morais, R;

Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
The increasing world population, the growing need for agricultural products, and labour shortages have driven the growth of robotics in agriculture. Tasks such as fruit harvesting require extensive hours of work during harvest periods and can be physically exhausting. Autonomous robots bring more efficiency to agricultural tasks, with the possibility of working continuously. This paper proposes a stackable 3 DoF SCARA manipulator for tomato harvesting. The manipulator uses a custom electronic circuit to control DC motors with a worm gear at each joint, and uses a camera and a Tensor Processing Unit (TPU) for fruit detection. Cascaded PID controllers control the joints, with magnetic encoders for rotational feedback and a time-of-flight sensor for prismatic movement feedback. Tomatoes are detected using an algorithm that finds regions of interest containing red and passes these regions to an image classifier that evaluates whether a tomato is present. With this, the system calculates the position of the tomato using stereo vision obtained from a monocular camera combined with the prismatic movement of the manipulator. As a result, the manipulator was able to position itself very close to the target in less than 3 seconds, from where an end-effector could adjust its position for picking.
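The stereo-from-motion step can be read as classic two-view triangulation: moving the camera by a known prismatic displacement yields two views of the tomato, and the resulting pixel disparity gives depth under a pinhole model. A minimal sketch under that assumption; the focal length, baseline, and pixel coordinates are hypothetical, and the formula assumes the displacement is parallel to the image plane:

```python
# Hypothetical depth-from-motion sketch: the prismatic joint moves the
# camera by a known baseline B between two views of the same tomato.
# Pinhole model with translation parallel to the image plane: Z = f * B / d.
focal_px = 800.0      # focal length in pixels (assumed calibration)
baseline_m = 0.05     # prismatic displacement between the two views

v_first, v_second = 412.0, 372.0  # tomato centre row in each view (px)
disparity_px = abs(v_first - v_second)

depth_m = focal_px * baseline_m / disparity_px
print(f"Estimated depth: {depth_m:.2f} m")  # 1.00 m for these values
```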
