
Publications by José Boaventura

2021

Cloud-Based Framework for Robot Operation in Hospital Environments

Authors
Ferreira, NMF; Boaventura Cunha, J;

Publication
CONTROLO 2020

Abstract
Robotics is widely used in the industrial domain, but nowadays several other domains could also take advantage of it. This interdisciplinary branch of engineering requires human interfaces, efficient communication systems, and high storage and processing capabilities, among other resources, to perform complex tasks. This paper proposes a cloud-based framework for robot operation in hospital environments, addressing challenges such as communications security and processing/storage requirements. Recent developments in artificial intelligence and cloud resource sharing are enabling the penetration of robots into unstructured environments; however, new challenges and solutions still need to be tested in real environments. Our main contribution is to reduce the time consumption related to processing and storage, which is otherwise bound to the robots' physical processing resources. The proposed methods also extend processing capabilities beyond those available on board, as in the case of robots with limited processing time or storage capacity. This paper presents a platform based on Cloud Computing with services supporting processing, storage, and analytics applied to hospital environments. The proposed platform achieves a reduction in time consumption, especially when retrieving information about all robot activities. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021.

2021

Control of Bio-Inspired Multi-robots Through Gestures Using Convolutional Neural Networks in Simulated Environment

Authors
Saraiva, AA; Santos, DBS; Ferreira, NMF; Boaventura-Cunha, J;

Publication
CONTROLO 2020

Abstract
In this paper, three convolutional neural networks (VGG19, GoogLeNet, and AlexNet) are compared for the control of bio-inspired multi-robots in a simulated environment, using hand gestures captured in real time by a webcam. Six gestures were used for network training and robot control, each corresponding to one action; both collective and individual actions were defined, and the simulation contains four bio-inspired robots. The performance of the networks in classifying gestures to control the robots is compared. All proved efficient in the classification and control of the agents, with AlexNet achieving an accuracy of 98.33%, VGG19 98.06%, and GoogLeNet 96.94%.
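The one-gesture-one-action mapping with collective and individual scopes described above can be sketched as a simple dispatcher. This is an illustrative sketch, not the authors' code: the gesture names, commands, and the rule that individual actions target the first robot are all hypothetical assumptions.

```python
# Hypothetical mapping of six gesture classes (as output by a CNN
# classifier) to robot actions; names are illustrative placeholders.
GESTURE_ACTIONS = {
    "open_palm":   ("collective", "disperse"),
    "fist":        ("collective", "stop"),
    "thumbs_up":   ("collective", "form_line"),
    "point":       ("individual", "move_forward"),
    "two_fingers": ("individual", "turn_left"),
    "ok_sign":     ("individual", "turn_right"),
}

def dispatch(gesture, robots):
    """Return (robot, command) pairs for a classified gesture.

    Collective actions address every robot; individual actions address
    a single robot (here, arbitrarily, the first one).
    """
    scope, command = GESTURE_ACTIONS[gesture]
    targets = robots if scope == "collective" else robots[:1]
    return [(robot, command) for robot in targets]
```

In a full pipeline, `gesture` would be the argmax label produced by the CNN on each webcam frame.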

2021

Robotic grasping: from wrench space heuristics to deep learning policies

Authors
de Souza, JPC; Rocha, LF; Oliveira, PM; Moreira, AP; Boaventura Cunha, J;

Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING

Abstract
The robotic grasping task persists as a modern industry problem demanding autonomous, fast, and efficient techniques. Domestic robots are also a reality, requiring delicate and accurate human-machine interaction with precise robotic grasping and handling. From the analytical heuristics of decades past to today's deep learning policies, grasping in complex scenarios remains the aim of several works proposing distinctive approaches. In this context, this paper covers and discusses recent methodological developments, showing state-of-the-art challenges and the gap to deployment in industrial applications. Given the complexity of the problem and of the proposed methods, this paper formulates fair and transparent definitions for assessing results, providing researchers with a clear and standardised basis for comparing new proposals.

2021

Low-Cost and Reduced-Size 3D-Cameras Metrological Evaluation Applied to Industrial Robotic Welding Operations

Authors
de Souza, JPC; Rocha, LF; Filipe, VM; Boaventura Cunha, J; Moreira, AP;

Publication
2021 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
Nowadays, robotic welding joint estimation, or weld seam tracking, has improved thanks to new developments in computer vision technologies. Typically, these advances focus on correcting inaccuracies that arise from the manual positioning of metal parts in welding workstations, especially in SMEs. Robotic arms endowed with appropriate perception capabilities are a viable solution in this context, aiming to enhance production-system agility without increasing production set-up time and costs. In this regard, this paper proposes a local perception pipeline to estimate weld joint points using small-sized, low-cost 3D cameras, following an eye-in-hand approach. A metrological comparison between the Intel RealSense D435, D415, and ZED Mini 3D cameras is also discussed, showing that the proposed pipeline, combined with standard commercial 3D cameras, is viable for welding operations in an industrial environment.
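As a minimal illustration of joint-point estimation from 3D camera data (not the paper's actual pipeline), a V-groove seam can be located along a depth cross-section by finding the deepest point of the profile. The profile format and the deepest-point assumption are hypothetical simplifications.

```python
def seam_point(profile):
    """Locate a weld seam candidate in a cross-section of the joint.

    profile: list of (y, z) samples across the joint, e.g. one row of a
    depth image. For a V-groove, the seam is assumed to lie at the
    deepest point (minimum z). Returns the index of that sample.
    """
    return min(range(len(profile)), key=lambda i: profile[i][1])
```

A real pipeline would apply this per scan line after filtering sensor noise, then fit a 3D path through the per-line seam points.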

2021

Smarter Robotic Sprayer System for Precision Agriculture

Authors
Baltazar, AR; dos Santos, FN; Moreira, AP; Valente, A; Cunha, JB;

Publication
ELECTRONICS

Abstract
The automation of agricultural processes is expected to benefit the environment by reducing waste, increasing food security, and maximising resource use. Precision spraying is a method used to reduce losses during pesticide application, reducing chemical residues in the soil. In this work, we developed a smart and novel electric sprayer that can be mounted on a robot. The sprayer has a crop perception system that estimates leaf density with a support vector machine (SVM) classifier operating on image histogram features (local binary pattern (LBP), vegetation index, average, and hue). This density can then be used as a reference value for a controller that determines the sprayer's air flow, water rate, and water density. The perception system was developed and tested with a new dataset, made available to the scientific community, which represents a significant contribution. The leaf density classifier achieves an accuracy score between 80% and 85%. The conducted tests show that the solution has the potential to increase spraying accuracy and precision.
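One of the histogram features named above, the local binary pattern (LBP), can be computed as in this sketch, assuming a basic 8-neighbour LBP on a grayscale image; the paper's exact feature extraction and SVM training are not shown here, and this implementation is an illustrative assumption.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Normalised histogram of 8-neighbour LBP codes.

    gray: 2D uint8 array. Each interior pixel gets an 8-bit code where
    bit k is set if neighbour k is >= the centre pixel. The resulting
    histogram is the kind of feature vector an SVM classifier can
    consume to estimate leaf density.
    """
    h, w = gray.shape
    centre = gray[1:-1, 1:-1]
    codes = np.zeros_like(centre, dtype=np.uint16)
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint16) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return hist
```

In the full system, this vector would be concatenated with the other histogram features (vegetation index, average, hue) before being fed to the SVM.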

2021

Grape Bunch Detection at Different Growth Stages Using Deep Learning Quantized Models

Authors
Aguiar, AS; Magalhaes, SA; dos Santos, FN; Castro, L; Pinho, T; Valente, J; Martins, R; Boaventura Cunha, J;

Publication
AGRONOMY-BASEL

Abstract
The agricultural sector plays a fundamental role in our society, where it is increasingly important to automate processes, which can generate beneficial impacts on the productivity and quality of products. Perception and computer vision approaches can be fundamental to the implementation of robotics in agriculture. In particular, deep learning can be used for image classification or object detection, endowing machines with the capability to perform operations in the agricultural context. In this work, deep learning was used for the detection of grape bunches in vineyards at different growth stages: the early stage, just after the bloom, and the medium stage, where the grape bunches present an intermediate development. Two state-of-the-art single-shot multibox models were trained, quantized, and deployed on a low-cost and low-power hardware device, a Tensor Processing Unit. The training input was a novel, publicly available dataset proposed in this work. This dataset contains 1929 images and respective annotations of grape bunches at two different growth stages, captured by different cameras under several illumination conditions. The models were benchmarked and characterized considering the variation of two parameters: the confidence score and the intersection over union threshold. The results showed that the deployed models could detect grape bunches in images with a mean average precision up to 66.96%. Since this approach runs on low resources, a low-cost and low-power hardware device requiring simplified models with 8-bit quantization, the obtained performance is satisfactory. Experiments also demonstrated that the models performed better at identifying grape bunches at the medium growth stage than those present in the vineyard just after the bloom, since the latter class represents smaller grape bunches, with a colour and texture more similar to the surrounding foliage, which complicates their detection.
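The two benchmarking parameters mentioned above, the confidence score and the intersection over union (IoU) threshold, can be illustrated with a minimal matching routine. This is a generic sketch of how detections are counted as true positives, not the paper's evaluation code; the greedy matching strategy is an assumption.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def count_true_positives(dets, gts, conf_thr=0.5, iou_thr=0.5):
    """Greedily match detections to ground-truth boxes.

    dets: list of (score, box); gts: list of boxes. A detection counts
    as a true positive if its score passes conf_thr and it overlaps an
    unmatched ground-truth box with IoU >= iou_thr. Varying conf_thr
    and iou_thr is what produces the benchmark curves.
    """
    tp, unmatched = 0, list(gts)
    for score, box in sorted(dets, reverse=True):  # highest score first
        if score < conf_thr:
            continue
        best = max(unmatched, key=lambda g: iou(box, g), default=None)
        if best is not None and iou(box, best) >= iou_thr:
            tp += 1
            unmatched.remove(best)
    return tp
```

Precision and recall at each threshold pair follow from `tp`, the number of detections, and the number of ground-truth boxes; averaging precision over recall levels yields the mean average precision reported above.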
