2015
Authors
Costa, CM; Sobreira, HM; Sousa, AJ; Veiga, G;
Publication
Cutting Edge Research in Technologies
Abstract
2016
Authors
Duarte, M; dos Santos, FN; Sousa, A; Morais, R;
Publication
ROBOT 2015: SECOND IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1
Abstract
Crop monitoring and harvesting by ground robots in steep slope vineyards is an intrinsically complex challenge, due to two main reasons: the harsh terrain conditions, and the reduced availability and unstable localization accuracy of the Global Positioning System (GPS). In this paper, the use of agricultural wireless sensors as artificial landmarks for robot localization is explored. The Received Signal Strength Indication (RSSI) of Bluetooth (BT) based sensors has been characterized for distance estimation. Based on this characterization, a mapping procedure built on the Histogram Mapping concept was evaluated. The results allow us to conclude that agricultural wireless sensors can be used to support robot localization procedures in critical moments (GPS blockage) and to provide redundant localization information.
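As an illustration of the RSSI ranging step described above, the sketch below converts a Bluetooth RSSI reading into a distance estimate using the standard log-distance path loss model; the calibration values (RSSI at 1 m, path loss exponent) are assumed placeholders, not the characterization reported in the paper.

def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exponent=2.0):
    # Log-distance path loss model: rssi = rssi_at_1m - 10 * n * log10(d).
    # Solving for d gives the distance estimate in meters.
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

if __name__ == "__main__":
    for rssi in (-59, -65, -75):
        print(f"RSSI {rssi} dBm -> ~{rssi_to_distance(rssi):.2f} m")

In practice the two calibration constants would be fitted per sensor from the distance/RSSI characterization, since BT signal propagation varies with antenna, terrain and foliage.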
2016
Authors
Costa, CM; Veiga, G; Sousa, A;
Publication
2016 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2016)
Abstract
Reliable banknote recognition is critical for detecting counterfeit banknotes in ATMs and for helping visually impaired people. To solve this problem, a computer vision system was implemented that can recognize multiple banknotes in different perspective views and scales, even within cluttered environments where the lighting conditions may vary considerably. The system is also able to recognize banknotes that are partially visible, folded, wrinkled or even worn by usage. To accomplish this task, the system relies on computer vision algorithms such as image preprocessing, feature detection, description and matching. To improve the confidence of the banknote recognition, the feature matching results are used to compute the contour of each banknote using a homography, which is later validated with shape analysis algorithms. The system successfully recognized all Euro banknotes in 80 test images, even when several overlapping banknotes appeared in the same test image.
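A minimal sketch of the matching-plus-homography idea described above, assuming OpenCV with ORB features; the image file names and parameter values are placeholders, and the final shape validation step is only indicated in a comment.

import cv2
import numpy as np

template = cv2.imread("banknote_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)[:100]

src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Project the template outline into the scene; a full system would then
# validate this contour with shape analysis (area, convexity, aspect ratio).
h, w = template.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
contour = cv2.perspectiveTransform(corners, H)
print("Projected banknote contour:", contour.reshape(-1, 2))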
2017
Authors
Costa, CM; Veiga, G; Sousa, A; Nunes, S;
Publication
2017 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2017, Coimbra, Portugal, April 26-28, 2017
Abstract
Teaching industrial robots by demonstration can significantly decrease the repurposing costs of assembly lines worldwide. To achieve this goal, the robot needs to detect and track each component with high accuracy. To speed up the initial object recognition phase, the learning system can gather information from assembly manuals in order to identify which parts and tools are required for assembling a new product (avoiding an exhaustive search in a large model database) and, if possible, also extract the assembly order and the spatial relations between components. This paper presents a detailed analysis of the fine-tuning of the Stanford Named Entity Recognizer for this text tagging task. Starting from the recommended configuration, 91 tests targeting the main features / parameters were performed. Each test changed a single parameter relative to the recommended configuration, and its goal was to measure the impact of the new configuration on the precision, recall and F1 metrics. This analysis allowed us to fine-tune the Stanford NER system, achieving a precision of 89.91%, a recall of 83.51% and an F1 of 84.69%. These results were obtained with our new manually annotated dataset containing text with assembly operations for alternators, gearboxes and engines, written in a language discourse that ranges from professional to informal. The dataset can also be used to evaluate other information extraction and computer vision systems, since most assembly operations have pictures and diagrams showing the necessary product parts, their assembly order and relative spatial disposition.
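A sketch of the evaluation harness implied by this abstract: one run per parameter change relative to a baseline configuration, scored with precision, recall and F1. The property names mirror common Stanford NER flags, but the baseline values and the fake_evaluate counts are purely illustrative.

def prf1(tp, fp, fn):
    """Precision, recall and F1 from true positive, false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

BASELINE = {"useNGrams": True, "maxNGramLeng": 6, "usePrev": True, "useNext": True}

def sweep(param, values, evaluate):
    # Change exactly one parameter per test, as in the 91-test analysis.
    for value in values:
        config = dict(BASELINE, **{param: value})
        p, r, f1 = prf1(*evaluate(config))
        print(f"{param}={value}: P={p:.2%} R={r:.2%} F1={f1:.2%}")

if __name__ == "__main__":
    def fake_evaluate(config):
        # Stand-in for actually training/testing the NER model with this config.
        return 850, 95, 168  # (tp, fp, fn)
    sweep("maxNGramLeng", [4, 6, 8], fake_evaluate)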
2017
Authors
Costa, CM; Sousa, A; Veiga, G;
Publication
ROBOT 2017: Third Iberian Robotics Conference - Volume 2, Seville, Spain, November 22-24, 2017.
Abstract
Pose invariant object detection and classification plays a critical role in robust image recognition systems and can be applied in a multitude of applications, ranging from simple monitoring to advanced tracking. This paper analyzes the usage of the Bag of Words model for recognizing objects in different scales, orientations and perspective views within cluttered environments. The recognition system relies on image analysis techniques, such as feature detection, description and clustering, along with machine learning classifiers. For pinpointing the location of the target object, a multiscale sliding window approach followed by dynamic thresholding segmentation is proposed. The recognition system was tested with several configurations of feature detectors, descriptors and classifiers, and achieved an accuracy of 87% when recognizing cars from an annotated dataset with 177 training images and 177 testing images.
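A compact sketch of the Bag of Words plus multiscale sliding window idea, assuming OpenCV SIFT and scikit-learn KMeans; the vocabulary size, window geometry and image file name are illustrative, and the classifier that would score each window histogram is left out.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def bow_histogram(descriptors, vocabulary):
    """Quantize local descriptors into a normalized visual-word histogram."""
    words = vocabulary.predict(descriptors.astype(np.float64))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def sliding_windows(image, size=96, stride=48, scales=(1.0, 0.75, 0.5)):
    """Yield (x, y, scale, crop) windows over a multiscale image pyramid."""
    for s in scales:
        resized = cv2.resize(image, None, fx=s, fy=s)
        for y in range(0, resized.shape[0] - size + 1, stride):
            for x in range(0, resized.shape[1] - size + 1, stride):
                yield x, y, s, resized[y:y + size, x:x + size]

if __name__ == "__main__":
    sift = cv2.SIFT_create()
    image = cv2.imread("car.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
    _, train_descs = sift.detectAndCompute(image, None)
    # Build the visual vocabulary by clustering training descriptors.
    vocabulary = KMeans(n_clusters=64, n_init=10).fit(train_descs.astype(np.float64))
    for x, y, s, crop in sliding_windows(image):
        _, descs = sift.detectAndCompute(crop, None)
        if descs is not None:
            hist = bow_histogram(descs, vocabulary)  # feed to a trained classifier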
2016
Authors
Costa, V; Cunha, T; Oliveira, M; Sobreira, H; Sousa, A;
Publication
ROBOT 2015: SECOND IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1
Abstract
In this article, a course that explores the potential of learning ROS in a collaborative game world is presented. The competitive mindset and its origins are explored, and an analysis of a collaborative game is presented in detail, showing how some key design features lead participants to overcome the proposed challenges through cooperation and collaboration. The data analysis is supported by observation of two different game simulations: in the first, all competitors played solo; in the second, the players were divided into groups of three. Lastly, the authors reflect on the potential of this course as a tool for learning ROS.