Publications

Publications by Filipe Neves Santos

2021

Tomato Detection Using Deep Learning for Robotics Application

Authors
Padilha, TC; Moreira, G; Magalhaes, SA; dos Santos, FN; Cunha, M; Oliveira, M;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE (EPIA 2021)

Abstract
The importance of agriculture and of fruit and vegetable production has stood out over the past few years, especially for the benefits to our health. In 2021, the International Year of Fruits and Vegetables, it is important to encourage innovation and evolution in this area, addressing the needs of the different processes of the different crops. This paper compares the performance obtained with two datasets for robotic fruit harvesting, using four deep learning object detection models: YOLOv4, SSD ResNet 50, SSD Inception v2, and SSD MobileNet v2. This work benchmarks the Open Images Dataset v6 (OIDv6) against a dataset acquired inside a tomato greenhouse for tomato detection in agricultural environments, using a test set of acquired, non-augmented images. The results highlight the benefit of self-acquired datasets for tomato detection, because state-of-the-art datasets such as OIDv6 lack relevant characteristics of the fruits in the agricultural environment, such as shape and color. Detections in greenhouse environments differ greatly from the data in OIDv6, which has fewer annotations per image and whose tomatoes are generally ripe (reddish). On our tomato dataset, YOLOv4 stood out with a precision of 91%. The tomato dataset was augmented and is publicly available (see https://rdm.inesctec.pt/ and https://rdm.inesctec.pt/dataset/ii-2021-001).
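To make the reported precision figure concrete, the following minimal Python sketch shows one standard way of scoring detections against ground truth at a fixed intersection-over-union (IoU) threshold. It is an illustration of the metric only, not the paper's evaluation code; the box format and all names are assumptions.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision(detections, ground_truth, iou_thr=0.5):
    """Fraction of detections matching a distinct ground-truth box."""
    matched = set()
    tp = 0
    for det in detections:  # ideally sorted by descending confidence
        for j, gt in enumerate(ground_truth):
            if j not in matched and iou(det, gt) >= iou_thr:
                matched.add(j)
                tp += 1
                break
    return tp / max(len(detections), 1)

# Hypothetical boxes for a single image.
dets = [[10, 10, 50, 50], [60, 60, 90, 90]]
gts = [[12, 11, 49, 52]]
print(precision(dets, gts))  # 0.5: one true positive out of two detections
```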

2021

Smarter Robotic Sprayer System for Precision Agriculture

Authors
Baltazar, AR; dos Santos, FN; Moreira, AP; Valente, A; Cunha, JB;

Publication
ELECTRONICS

Abstract
The automation of agricultural processes is expected to positively impact the environment by reducing waste, increasing food security, and maximising resource use. Precision spraying is a method used to reduce losses during pesticide application and to reduce chemical residues in the soil. In this work, we developed a smart, novel electric sprayer that can be assembled on a robot. The sprayer has a crop perception system that calculates the leaf density based on a support vector machine (SVM) classifier using image histograms (local binary pattern (LBP), vegetation index, average, and hue). This density can then be used as a reference value to feed a controller that determines the air flow, the water rate, and the water density of the sprayer. The perception system was developed and tested with a newly created dataset that is available to the scientific community and represents a significant contribution. The results of the leaf density classifier show an accuracy score that varies between 80% and 85%. The conducted tests prove that the solution has the potential to increase spraying accuracy and precision.
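As a rough illustration of the perception pipeline described above, the sketch below trains an SVM on per-image histogram features (an LBP texture histogram and a hue histogram, two of the four descriptors mentioned). It is a minimal sketch with randomly generated placeholder images and labels, not the authors' implementation.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def leaf_density_features(rgb):
    """LBP texture histogram concatenated with a hue histogram."""
    gray = (rgb.mean(axis=2) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hue = rgb2hsv(rgb)[..., 0]
    hue_hist, _ = np.histogram(hue, bins=16, range=(0.0, 1.0), density=True)
    return np.concatenate([lbp_hist, hue_hist])

# Placeholder data: 20 random 64x64 RGB crops with 3 density classes.
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64, 3))
labels = rng.integers(0, 3, size=20)

X = np.stack([leaf_density_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```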

2021

Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models

Authors
Aguiar, AS; Magalhaes, SA; dos Santos, FN; Castro, L; Pinho, T; Valente, J; Martins, R; Boaventura Cunha, J;

Publication
AGRONOMY-BASEL

Abstract
The agricultural sector plays a fundamental role in our society, where it is increasingly important to automate processes, which can generate beneficial impacts on the productivity and quality of products. Perception and computer vision approaches can be fundamental to the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards considering different growth stages: the early stage, just after the bloom, and the medium stage, where the grape bunches present an intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed in a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel and publicly available dataset proposed in this work. This dataset contains 1929 images and the respective annotations of grape bunches at two different growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized considering the variation of two parameters: the confidence score and the intersection over union threshold. The results showed that the deployed models could detect grape bunches in images with a mean average precision of up to 66.96%. Given that this approach uses low resources, namely a low-cost, low-power hardware device that requires simplified models with 8-bit quantization, the obtained performance was satisfactory. Experiments also demonstrated that the models performed better at identifying grape bunches at the medium growth stage than those present in the vineyard just after the bloom, since the latter class represents smaller grape bunches, with a color and texture more similar to the surrounding foliage, which complicates their detection.
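For context, running an 8-bit quantized SSD model on an Edge TPU typically looks like the sketch below, using the tflite_runtime API; the model path, delegate library name, and output ordering are assumptions for illustration, not details from the paper.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Hypothetical model file, compiled for the Edge TPU from an
# 8-bit quantized TFLite SSD model.
interpreter = Interpreter(
    model_path="ssd_grape_bunches_quant_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# An 8-bit quantized model expects uint8 input; placeholder frame here.
frame = np.zeros(inp["shape"], dtype=np.uint8)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

# Common ordering for the SSD postprocessing op: boxes, classes, scores, count.
outs = [interpreter.get_tensor(d["index"]) for d in interpreter.get_output_details()]
boxes, classes, scores, count = outs
keep = scores[0] >= 0.5  # confidence threshold, one of the two swept parameters
```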

2021

Autonomous Robot Visual-Only Guidance in Agriculture Using Vanishing Point Estimation

Authors
Sarmento, J; Aguiar, AS; dos Santos, FN; Sousa, AJ;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE (EPIA 2021)

Abstract
Autonomous navigation in agriculture is very challenging, as it usually takes place outdoors, where there is rough terrain, uncontrolled natural lighting, constantly changing organic scenarios, and sometimes the absence of a Global Navigation Satellite System (GNSS). In this work, a setup with a single camera and a Google Coral Dev Board Edge Tensor Processing Unit (TPU) is proposed to navigate within a woody crop, more specifically a vineyard. Guidance is provided by estimating the vanishing point, observing its position with respect to the centre of the image frame, and correcting the steering angle accordingly. The vanishing point is estimated by object detection, using Deep Learning (DL) based Neural Networks (NNs) to obtain the positions of the trunks in the image. The NNs were trained using Transfer Learning (TL), which requires a smaller dataset than conventional training methods. For this purpose, a dataset with 4221 images was created, considering image collection, annotation, and augmentation procedures. Results show that our framework can detect the vanishing point with an average absolute error of 0.52° and can be considered for autonomous steering.
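One common way to realise the approach this abstract sketches is to fit a line to the detected trunk bases on each side of the corridor and take the intersection of the two lines as the vanishing point. The short Python sketch below illustrates the idea with hypothetical pixel coordinates; it is not the paper's implementation.

```python
import numpy as np

def vanishing_point(left_pts, right_pts):
    """Intersect the two row lines fitted to trunk base points (x, y)."""
    ml, bl = np.polyfit(left_pts[:, 0], left_pts[:, 1], 1)
    mr, br = np.polyfit(right_pts[:, 0], right_pts[:, 1], 1)
    x = (br - bl) / (ml - mr)
    return x, ml * x + bl

# Hypothetical trunk base pixel coordinates from the detector,
# for the left and right vine rows of a 640x480 image.
left = np.array([[100.0, 470.0], [180.0, 400.0], [240.0, 340.0]])
right = np.array([[540.0, 470.0], [470.0, 400.0], [410.0, 340.0]])

vp_x, vp_y = vanishing_point(left, right)
# Steering is corrected from the horizontal offset of the
# vanishing point with respect to the image centre.
offset = vp_x - 640 / 2
```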

2021

PixelCropRobot, a cartesian multitask platform for microfarms automation

Authors
Terra, F; Rodrigues, L; Magalhaes, S; Santos, F; Moura, P; Cunha, M;

Publication
2021 International Symposium of Asian Control Association on Intelligent Robotics and Industrial Automation, IRIA 2021

Abstract
Society needs to produce more food, with the highest quality standards, to feed the world population at the same level of nutrition. Microfarms and local food production enable growing vegetables near the population and reduce the operational logistics costs related to post-harvest food handling. However, it is neither economically viable nor efficient to have one person devoted to the tasks of these microfarms. To overcome this issue, we propose an open-source robotic solution capable of performing multiple tasks on small polyculture farms. The robot is equipped with optical sensors, manipulators, and other mechatronic technology to monitor and process both biotic and abiotic agronomic data. This information supports the consequent activation of manipulators that perform several agricultural tasks: crop and weed detection, sowing, and watering. The development of the robot meets low-cost requirements so that it can become a commercial solution. The solution is also designed to serve as a test platform that supports the assembly of new sensors and the development of new cognitive solutions, and to raise awareness on topics related to Precision Agriculture. We aim for a rational use of resources and several other aspects of an evolved, economically efficient, and ecologically sustainable agriculture.

2021

Robot navigation in vineyards based on the visual vanish point concept

Authors
Sarmento, J; Aguiar, AS; Santos, FND; Sousa, AJ;

Publication
2021 International Symposium of Asian Control Association on Intelligent Robotics and Industrial Automation, IRIA 2021

Abstract
Autonomous navigation in agriculture is very challenging, as it usually takes place outdoors, where there is rough terrain, uncontrolled natural lighting, constantly changing organic scenarios, and sometimes the absence of a Global Navigation Satellite System (GNSS) signal. In this work, a monocular visual system is proposed to estimate angular orientation and navigate between woody crops, more specifically a vineyard, using a Proportional Integral Derivative (PID)-based controller. Guidance is provided by combining two ways of finding the centre of the vineyard: first, by estimating the vanishing point, and second, by averaging the positions of the two closest base trunk detections. The angular error is then determined through monocular angle perception. To obtain the trunk positions in the image, object detection using Deep Learning (DL) based Neural Networks (NNs) is used. To evaluate the proposed controller, a visual vineyard simulation was created using Gazebo. The proposed joint controller is able to travel along a simulated straight vineyard with an RMS error of 1.17 cm. Moreover, a simulated curved vineyard modeled after the Douro region is tested in this work, where the robot was able to steer with an RMS error of 7.28 cm. © 2021 IEEE.
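As a minimal sketch of the control side, the class below implements a textbook discrete PID of the kind the abstract describes, driven by the angular error from the vision pipeline; the gains and sample time are illustrative assumptions, not the paper's tuning.

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """Return the control output for the current error sample."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative gains; the angular error (radians) would come from the
# vanishing-point and trunk-averaging perception described above.
pid = PID(kp=1.0, ki=0.1, kd=0.05, dt=0.1)
steering_command = pid.step(error=0.2)
```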
