2023
Authors
Coelho, L; Glotsos, D; Reis, S;
Publication
BIOENGINEERING-BASEL
Abstract
2022
Authors
Sousa, RB; Rocha, C; Mendonca, HS; Moreira, AP; Silva, MF;
Publication
IEEE ACCESS
Abstract
The technological market is evolving rapidly, as evidenced by increasingly innovative and streamlined manufacturing processes. Printed Circuit Boards (PCBs) are widely employed in the electronics fabrication industry, which relies on the open Gerber standard format to transfer manufacturing data. The Gerber format describes not only metadata related to the manufacturing process but also the PCB image. To map the electronic circuit pattern to be printed, a parser that converts Gerber files into a bitmap image is required. The current literature, as well as the available Gerber viewers and libraries, shows limitations mainly in Gerber format support, focusing only on a subset of commands. This work describes the development of a recursive descent approach for parsing Gerber files, outlining its interpretation and the rendering of 2D bitmap images. All commands defined in the Gerber X2 generation of the specification were successfully rendered, unlike with the commercial parsers tested in the experiments. Moreover, the obtained results were comparable to those parsers with respect to the commands they can execute, as well as to the ground truth, emphasizing the accuracy of the proposed approach. Its top-down, recursive architecture allows easy integration with other software regardless of the platform, highlighting its potential inclusion in the production of electronic circuits.
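As an illustration of the parse-and-render idea described above, the sketch below handles only a tiny, assumed subset of Gerber word commands (D02 move, D01 linear draw, M02 end of file) and rasterizes the draws onto a bitmap. It is a flat tokenizer-plus-dispatcher rather than the paper's full recursive-descent X2 grammar, and every name, default size, and scale factor is illustrative.

import re
import numpy as np

# Extended commands (%...%) or word commands terminated by '*'.
TOKEN = re.compile(r"%[^%]*%|[^*]*\*")

class MiniGerberParser:
    def __init__(self, width=200, height=200, scale=0.1):
        self.bitmap = np.zeros((height, width), dtype=np.uint8)
        self.scale = scale          # image pixels per Gerber coordinate unit (assumed)
        self.x, self.y = 0, 0       # current point

    def parse(self, source):
        for tok in TOKEN.findall(source):
            self._command(tok.strip())
        return self.bitmap

    def _command(self, tok):
        if not tok or tok.startswith("%"):
            return                  # extended/attribute commands ignored in this sketch
        m = re.match(r"X(-?\d+)Y(-?\d+)D0([123])\*", tok)
        if m:
            x, y, op = int(m.group(1)), int(m.group(2)), m.group(3)
            if op == "1":           # D01: interpolate (draw) to the new point
                self._draw_line(self.x, self.y, x, y)
            self.x, self.y = x, y   # D02 (move) and D03 (flash) only update state here
        # M02* (end of file) and unknown words fall through silently

    def _draw_line(self, x0, y0, x1, y1):
        n = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(n + 1):
            px = int(round((x0 + (x1 - x0) * i / n) * self.scale))
            py = int(round((y0 + (y1 - y0) * i / n) * self.scale))
            if 0 <= py < self.bitmap.shape[0] and 0 <= px < self.bitmap.shape[1]:
                self.bitmap[py, px] = 255

# Example: move to origin, draw a diagonal trace, end of file.
bitmap = MiniGerberParser().parse("X0Y0D02*X1000Y1000D01*M02*")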
2022
Authors
Aguiar, AS; dos Santos, FN; Sobreira, H; Boaventura Cunha, J; Sousa, AJ;
Publication
FRONTIERS IN ROBOTICS AND AI
Abstract
Developing ground robots for agriculture is a demanding task. Robots should be capable of performing tasks like spraying, harvesting, or monitoring. However, the absence of structure in agricultural scenes challenges the implementation of localization and mapping algorithms. Thus, the research and development of localization techniques are essential to boost agricultural robotics. To address this issue, we propose an algorithm called VineSLAM suitable for localization and mapping in agriculture. This approach uses both point and semiplane features extracted from 3D LiDAR data to map the environment and localize the robot using a novel Particle Filter that considers both feature modalities. The numerical stability of the algorithm was tested using simulated data. The proposed methodology proved suitable to localize a robot using only three orthogonal semiplanes. Moreover, the entire VineSLAM pipeline was compared against a state-of-the-art approach in three real-world experiments in a woody-crop vineyard. Results show that our approach can localize the robot with precision even in long and symmetric vineyard corridors, outperforming the state-of-the-art algorithm in this context.
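The following sketch is not the VineSLAM implementation; it only illustrates how a particle filter can fuse two observation modalities in its weight update. The function names, the multiplicative mixing exponent, and the resampling threshold are all assumptions made for illustration.

import numpy as np

def update_particles(particles, weights, point_likelihood, plane_likelihood, alpha=0.5):
    """particles: (N, 3) poses [x, y, yaw]; weights: (N,) prior weights.
    point_likelihood / plane_likelihood: callables mapping a pose to p(z | pose)."""
    for i, pose in enumerate(particles):
        # Combine both observation models multiplicatively, with an assumed mixing exponent.
        w_point = point_likelihood(pose)
        w_plane = plane_likelihood(pose)
        weights[i] *= (w_point ** alpha) * (w_plane ** (1.0 - alpha))
    weights /= np.sum(weights)                      # normalize
    n_eff = 1.0 / np.sum(weights ** 2)              # effective sample size
    if n_eff < 0.5 * len(weights):                  # multinomial resampling on degeneracy
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Dummy usage with placeholder likelihoods (purely illustrative).
rng = np.random.default_rng(0)
parts = rng.normal(size=(100, 3))
w = np.full(100, 1.0 / 100)
parts, w = update_particles(parts, w, lambda p: np.exp(-p[0] ** 2), lambda p: np.exp(-p[1] ** 2))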
2022
Authors
Leao, G; Costa, CM; Sousa, A; Reis, LP; Veiga, G;
Publication
ROBOTICS
Abstract
Bin picking is a challenging problem that involves using a robotic manipulator to remove, one by one, a set of objects randomly stacked in a container. In order to provide ground-truth data for evaluating heuristic or machine learning perception systems, this paper proposes using simulation to create bin picking environments in which a procedural generation method builds entangled tubes that can have curvatures throughout their length. The output of the simulation is an annotated point cloud, generated by a virtual 3D depth camera, in which the tubes are assigned unique colors. A general metric based on micro-recall is proposed to compare the accuracy of point cloud annotations with the ground truth. The synthetic data is representative of a high-quality 3D scanner, given that the performance of a tube modeling system on 640 simulated point clouds was similar to the results achieved with real sensor data. Therefore, simulation is a promising technique for the automated evaluation of solutions for bin picking tasks.
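As a hedged illustration of the metric mentioned above, the snippet below computes micro-recall over per-point annotations, assuming predicted tube labels have already been matched to ground-truth identifiers; it mirrors the general idea rather than the paper's exact matching procedure.

import numpy as np

def micro_recall(gt_labels, pred_labels):
    """gt_labels, pred_labels: integer arrays of shape (num_points,)."""
    tp = fn = 0
    for c in np.unique(gt_labels):
        mask = gt_labels == c
        tp += np.sum(pred_labels[mask] == c)   # points of instance c recovered
        fn += np.sum(pred_labels[mask] != c)   # points of instance c missed
    return tp / (tp + fn)

# Toy example with five annotated points and one mislabeled point.
print(micro_recall(np.array([0, 0, 1, 1, 2]), np.array([0, 1, 1, 1, 2])))  # 0.8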
2022
Authors
Coutinho, RM; Sousa, A; Santos, F; Cunha, M;
Publication
APPLIED SCIENCES-BASEL
Abstract
Soil Moisture (SM) is one of the most critical factors for a crop's growth, yield, and quality. Although Ground-Penetrating RADAR (GPR) is commonly used in satellite observation to analyze soil moisture, it is not cost-effective for agricultural applications. Automotive RADAR uses the concept of Frequency-Modulated Continuous Wave (FMCW) and is more competitive in terms of price. This paper evaluates the viability of using a cost-effective RADAR as a substitute for GPR for soil moisture content estimation. The research consisted of four experiments, and the results show that the RADAR's output signal and the readings of the SEN0193 soil moisture sensor are highly correlated, with values as high as 0.93 when the SM is below 15%. Such results show that the tested sensor (and its cost-effective working principle) is able to determine soil water content (with certain limitations) in a non-intrusive, proximal-sensing manner.
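Below is a minimal sketch of the kind of correlation analysis reported above: a Pearson correlation between a RADAR-derived signal feature and SEN0193 soil-moisture readings. The arrays are placeholders rather than the paper's data, and the choice of signal feature is an assumption.

import numpy as np

radar_feature = np.array([0.82, 0.78, 0.71, 0.65, 0.60])   # placeholder RADAR signal feature
sen0193_moisture = np.array([5.0, 7.5, 9.8, 12.4, 14.6])   # placeholder SM readings below 15%

# Pearson correlation coefficient between the two series.
r = np.corrcoef(radar_feature, sen0193_moisture)[0, 1]
print(f"Pearson r = {r:.2f}")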
2022
Authors
da Silva, DQ; dos Santos, FN; Filipe, V; Sousa, AJ; Oliveira, PM;
Publication
ROBOTICS
Abstract
Object identification, such as tree trunk detection, is fundamental for forest robotics. Intelligent vision systems are of paramount importance for improving robotic perception, thus enhancing the autonomy of forest robots. To that purpose, this paper presents three contributions: an open dataset of 5325 annotated forest images; a tree trunk detection Edge AI benchmark of 13 deep learning models evaluated on four edge devices (CPU, TPU, GPU, and VPU); and a tree trunk mapping experiment using an OAK-D as the sensing device. The results showed that YOLOR was the most reliable trunk detector, achieving a maximum F1 score of around 90% while maintaining high scores across different confidence levels; in terms of inference time, YOLOv4 Tiny was the fastest model, attaining 1.93 ms on the GPU. YOLOv7 Tiny presented the best trade-off between detection accuracy and speed, with average inference times under 4 ms on the GPU for different input resolutions while achieving an F1 score similar to YOLOR's. This work will enable the development of advanced artificial vision systems for robotics in forestry monitoring operations.
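For context, the snippet below shows how an F1 score can be computed per confidence threshold from detection counts, as is typical in benchmarks of this kind; the counts and thresholds are hypothetical and not taken from the paper.

def f1_score(tp, fp, fn):
    # Precision and recall from true positives, false positives, and false negatives.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical detection counts at three confidence thresholds.
for conf, (tp, fp, fn) in {0.25: (880, 140, 90), 0.50: (850, 90, 120), 0.75: (780, 40, 190)}.items():
    print(f"conf={conf:.2f}  F1={f1_score(tp, fp, fn):.3f}")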