2021
Authors
de Aguiar, ASP; de Oliveira, MAR; Pedrosa, EF; dos Santos, FBN;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
This paper proposes a camera-to-3D Light Detection and Ranging (LiDAR) calibration framework based on the optimization of atomic transformations. The system can simultaneously calibrate multiple cameras with LiDAR sensors, solving a problem analogous to bundle adjustment. In comparison with the state-of-the-art, this work presents several novelties: the ability to simultaneously calibrate multiple cameras and LiDARs; the support for multiple sensor modalities; the calibration through the optimization of atomic transformations, without changing the topology of the input transformation tree; and the integration of the calibration framework within the Robot Operating System (ROS). The software pipeline allows the user to interactively position the sensors to provide an initial estimate, to label and collect data, and to visualize the calibration procedure. To test this framework, an agricultural robot with a stereo camera and a 3D LiDAR sensor was used. Pairwise calibrations and a single calibration of all three sensors were tested and evaluated. Results show that the proposed approach produces accurate calibrations when compared to the state-of-the-art and is robust to harsh conditions such as inaccurate initial guesses or small amounts of calibration data. Experiments have shown that the optimization process can handle an angular error of approximately 20 degrees and a translation error of 0.5 meters for each sensor. Moreover, the proposed approach achieves state-of-the-art results even when calibrating the entire system simultaneously.
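As background, the core geometric step in any extrinsic calibration is recovering a rigid transform between sensor frames from matched 3D points. The sketch below uses the classic Kabsch/SVD solution on synthetic data; it is a minimal illustration of that building block, not the authors' atomic-transformation optimizer:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t minimizing sum ||R @ src_i + t - dst_i||^2 (Kabsch/SVD)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# synthetic check: recover a known rotation and translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
ang = np.deg2rad(20)                             # 20 deg, like the reported tolerance
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, 0.0, 0.0])               # 0.5 m offset
R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
```

The 20-degree rotation and 0.5 m translation mirror the error magnitudes the paper reports as recoverable; a transformation-tree optimizer would refine many such transforms jointly rather than one pair at a time.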
2021
Authors
Tinoco, V; Silva, MF; Santos, FN; Rocha, LF; Magalhaes, S; Santos, LC;
Publication
2021 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)
Abstract
The increase of the world population and the decrease in agricultural labour availability have motivated research in agricultural robotics. This paper analyzes the state of the art of manipulators used in the agricultural robotics field. Two pruning and seven harvesting manipulators were reviewed and analyzed. The pruning manipulators were used in two different scenarios: (i) grapevines and (ii) apple trees. These manipulators showed that a light-controlled environment can reduce visual errors and that prismatic joints on the manipulator are advantageous for achieving a higher reach. The harvesting manipulators were used for five different products: (i) strawberries, (ii) tomatoes, (iii) apples, (iv) sweet peppers and (v) iceberg lettuce. The harvesting manipulators showed that different end-effectors require different kinematic configurations, as some end-effectors only require horizontal movements while others require more degrees of freedom to reach and grasp the target. This work will support the development of novel solutions for agricultural robotic grasping and manipulation.
2021
Authors
Barroso, TG; Ribeiro, L; Gregorio, H; Santos, F; Martins, RC;
Publication
SENSORS AND ACTUATORS B-CHEMICAL
Abstract
Current chemometrics and artificial intelligence methods are unable to deal with the complex multi-scale interference of blood constituents in visible short-wave near-infrared spectroscopy point-of-care technologies. The major difficulty is accessing the rich information in the spectroscopy signal, unscrambling and interpreting spectral interference to provide analytical-quality quantifications. We present a new self-learning artificial intelligence method for spectral processing based on the search for covariance modes with a direct correspondence to the Beer-Lambert law. Dog and cat hemograms were analyzed by impedance flow cytometry and standard laboratory methods (erythrocyte counts, hemoglobin, and hematocrit). Spectral measurements were recorded for the same samples. The methodology was benchmarked against state-of-the-art chemometrics: a multivariate linear model of hemoglobin bands, similarity, partial least squares, local partial least squares, and artificial neural networks. The new method outperforms the state-of-the-art, providing analytical-quality quantifications according to the desired veterinary pathology guidelines (total errors of 1.69% to 7.14%), whereas the chemometric methods cannot. The method finds the relevant samples and spectral information that hold the quantitative information for a particular interference mode, in contrast to current methods that do not hold a relationship with the Beer-Lambert law. It allows the interpretation of the interference bands used in quantification, providing the capacity to determine whether the composition of an unknown sample is predictable. This research is especially relevant for improving current optical point-of-care technologies that are affected by spectral interference, moving towards micro-sampling and reagent-less technologies in healthcare and veterinary medicine diagnosis.
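For context, the Beer-Lambert law states that absorbance is linear in the concentrations of the absorbing constituents: A(lambda) = l * sum_k eps_k(lambda) * c_k. Under that idealized linearity (and ignoring the multi-scale interference the paper addresses), concentrations can be recovered by ordinary least squares. A minimal numpy sketch with made-up absorptivity spectra, not the authors' covariance-mode method:

```python
import numpy as np

# Beer-Lambert: A(lambda) = l * sum_k eps_k(lambda) * c_k
# With known absorptivity spectra E (wavelengths x constituents),
# concentrations follow from linear least squares.
rng = np.random.default_rng(1)
E = rng.uniform(0.1, 1.0, size=(50, 3))   # hypothetical absorptivity spectra
l = 1.0                                   # optical path length in cm
c_true = np.array([2.0, 0.5, 1.2])        # hypothetical concentrations
A = l * E @ c_true                        # noiseless absorbance spectrum
c_est, *_ = np.linalg.lstsq(l * E, A, rcond=None)
```

With noiseless synthetic data the recovery is exact; real blood spectra violate this linearity through constituent interference, which is precisely the failure mode the paper's method targets.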
2021
Authors
Magalhaes, SA; Castro, L; Moreira, G; dos Santos, FN; Cunha, M; Dias, J; Moreira, AP;
Publication
SENSORS
Abstract
The development of robotic solutions for agriculture requires advanced perception capabilities that can work reliably in any crop stage. For example, to automate the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato in any life-cycle stage (from flower to ripe tomato). The state-of-the-art in visual tomato detection focuses mainly on ripe tomatoes, which have a distinctive colour against the background. This paper contributes an annotated visual dataset of green and reddish tomatoes. This kind of dataset is uncommon and was not previously available for research purposes. It will enable further developments in edge artificial intelligence for the in situ, real-time visual tomato detection required by harvesting robots. Using this dataset, five deep learning models were selected, trained and benchmarked to detect green and reddish tomatoes grown in greenhouses. Considering our robotic platform specifications, only the Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance when compared against SSD Inception v2, SSD ResNet 50, SSD ResNet 101 and YOLOv4 Tiny, reaching an F1-score of 66.15%, an mAP of 51.46% and an inference time of 16.44 ms on an NVIDIA Turing architecture platform (an NVIDIA Tesla T4 with 12 GB). YOLOv4 Tiny also had impressive results, mainly concerning its inference time of about 5 ms.
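The F1-score reported above is the harmonic mean of precision and recall, which reduces to 2*TP / (2*TP + FP + FN). A small sketch with hypothetical detection counts chosen only for illustration:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical counts: 80 correct detections, 30 false alarms, 50 misses
print(round(f1_score(tp=80, fp=30, fn=50), 4))  # → 0.6667
```

A detector can therefore trade false alarms against misses (e.g. via its confidence threshold) while keeping the same F1, which is why papers typically report mAP alongside it.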
2021
Authors
Santos, LC; Santos, A; Santos, FN; Valente, A;
Publication
ROBOTICS
Abstract
Software for robotic systems is becoming progressively more complex despite the existence of established software ecosystems like ROS, as the problems we delegate to robots become more and more challenging. Ensuring that the software works as intended is a crucial (but not trivial) task, although proper quality assurance processes are rarely seen in the open-source robotics community. This paper explains how we analyzed and improved a specialized path planner for steep-slope vineyards regarding its software dependability. The analysis revealed previously unknown bugs in the system, with a relatively low property specification effort. We argue that the benefits of similar quality assurance processes far outweigh the costs and should be more widespread in the robotics domain.
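The kind of low-effort property specification the paper advocates can be illustrated with plain runtime assertions over a planner's output: every returned path must start at the start cell, end at the goal, and move only between adjacent cells. The sketch below checks these properties against a toy BFS grid planner, a hypothetical stand-in, not the vineyard planner itself:

```python
from collections import deque

def plan(grid, start, goal):
    """Toy BFS grid planner (hypothetical stand-in for the vineyard planner)."""
    queue, prev = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:                      # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None

def check_path_properties(path, start, goal):
    """Properties every valid path must satisfy, regardless of planner internals."""
    assert path[0] == start and path[-1] == goal
    for a, b in zip(path, path[1:]):         # consecutive cells must be adjacent
        assert abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

# 5x5 grid with a partial wall; the checker validates whatever the planner returns
grid = {(x, y) for x in range(5) for y in range(5)} - {(2, 1), (2, 2), (2, 3)}
path = plan(grid, (0, 0), (4, 4))
check_path_properties(path, (0, 0), (4, 4))
```

Such checks capture the intent of a planner independently of its implementation, which is how specification-based analysis surfaces bugs that unit tests tied to specific outputs can miss.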
2021
Authors
da Silva, DQ; dos Santos, FN; Sousa, AJ; Filipe, V;
Publication
JOURNAL OF IMAGING
Abstract
Mobile robotics in forests is currently a highly relevant topic due to the recurrence of forest wildfires, which makes on-site management of forest inventory and biomass necessary. To tackle this issue, this work presents a study on ground-level detection of forest tree trunks in visible and thermal images using deep learning-based object detection methods. For this purpose, a forestry dataset composed of 2895 images was built and made publicly available. Using this dataset, five models were trained and benchmarked to detect the tree trunks: SSD MobileNetV2, SSD Inception-v2, SSD ResNet50, SSDLite MobileDet and YOLOv4 Tiny. Promising results were obtained; for instance, YOLOv4 Tiny achieved the highest AP (90%) and F1 score (89%). The inference time of these models was also evaluated on CPU and GPU. The results showed that YOLOv4 Tiny was the fastest detector on GPU (8 ms). This work will enhance the development of vision perception systems for smarter forestry robots.
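The AP metric reported here is the area under the precision-recall curve of the score-ranked detections. A simplified rectangle-rule sketch (VOC/COCO evaluations additionally interpolate the precision envelope), with hypothetical confidence scores and match flags:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """Area under the precision-recall curve of score-ranked detections."""
    order = np.argsort(-np.asarray(scores))          # highest confidence first
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_gt
    # rectangle rule: precision weighted by each recall increment
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# hypothetical detections: confidence scores, whether each matched a real trunk,
# and 4 ground-truth trunks in total
ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1], n_gt=4)  # → 0.6875
```

Undetected ground-truth trunks cap the maximum recall, so AP penalizes misses as well as false alarms, unlike a single precision value at one threshold.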