Publications

Publications by Germano Veiga

2016

Recognition of Banknotes in Multiple Perspectives Using Selective Feature Matching and Shape Analysis

Authors
Costa, CM; Veiga, G; Sousa, A;

Publication
2016 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2016)

Abstract
Reliable banknote recognition is critical for detecting counterfeit banknotes in ATMs and for helping visually impaired people. To solve this problem, a computer vision system was implemented that can recognize multiple banknotes in different perspective views and scales, even within cluttered environments where the lighting conditions may vary considerably. The system is also able to recognize banknotes that are partially visible, folded, wrinkled or worn by usage. To accomplish this task, the system relies on computer vision algorithms such as image preprocessing, feature detection, description and matching. To improve the confidence of the banknote recognition, the feature matching results are used to compute the contour of each banknote through a homography, which is later validated using shape analysis algorithms. The system successfully recognized all Euro banknotes in 80 test images, even when several overlapping banknotes appeared in the same test image.
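
The pipeline outlined in this abstract (feature detection and matching, homography estimation, contour projection and shape validation) can be illustrated with a short OpenCV sketch. The detector choice (ORB), the ratio-test threshold and the convexity/area checks below are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a feature-matching + homography + shape-validation pipeline.
# Detector, thresholds and shape checks are assumptions, not the authors' setup.
import cv2
import numpy as np

def recognize_banknote(template_gray, scene_gray, min_matches=15):
    # Detect and describe local features in the banknote template and the scene.
    detector = cv2.ORB_create(nfeatures=2000)
    kp_t, des_t = detector.detectAndCompute(template_gray, None)
    kp_s, des_s = detector.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return None

    # Match descriptors and keep only matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_t, des_s, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < min_matches:
        return None

    # Estimate the template-to-scene homography from the matched keypoints.
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Project the template outline into the scene and validate its shape:
    # a plausible banknote contour should remain convex and keep a sensible area.
    h, w = template_gray.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    contour = np.int32(cv2.perspectiveTransform(corners, H))
    if not cv2.isContourConvex(contour) or cv2.contourArea(contour) < 0.01 * scene_gray.size:
        return None
    return contour  # detected banknote outline in scene coordinates
```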

2017

Evaluation of Stanford NER for extraction of assembly information from instruction manuals

Authors
Costa, CM; Veiga, G; Sousa, A; Nunes, S;

Publication
2017 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2017, Coimbra, Portugal, April 26-28, 2017

Abstract
Teaching industrial robots by demonstration can significantly decrease the repurposing costs of assembly lines worldwide. To achieve this goal, the robot needs to detect and track each component with high accuracy. To speedup the initial object recognition phase, the learning system can gather information from assembly manuals in order to identify which parts and tools are required for assembling a new product (avoiding exhaustive search in a large model database) and if possible also extract the assembly order and spatial relation between them. This paper presents a detailed analysis of the fine tuning of the Stanford Named Entity Recognizer for this text tagging task. Starting from the recommended configuration, it was performed 91 tests targeting the main features / parameters. Each test only changed a single parameter in relation to the recommend configuration, and its goal was to see the impact of the new configuration in the precision, recall and F1 metrics. This analysis allowed to fine tune the Stanford NER system, achieving a precision of 89.91%, recall of 83.51% and F1 of 84.69%. These results were retrieved with our new manually annotated dataset containing text with assembly operations for alternators, gearboxes and engines, which were written in a language discourse that ranges from professional to informal. The dataset can also be used to evaluate other information extraction and computer vision systems, since most assembly operations have pictures and diagrams showing the necessary product parts, their assembly order and relative spatial disposition. © 2017 IEEE.
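
The precision, recall and F1 metrics mentioned in the abstract can be computed per entity class and then averaged. The sketch below shows that computation from true-positive / false-positive / false-negative counts; the class names and counts are made-up placeholders, and the averaging scheme is an assumption rather than the paper's exact scoring procedure.

```python
# Hedged sketch: per-class precision / recall / F1 plus a macro average,
# computed from hypothetical true-positive, false-positive, false-negative counts.
def prf(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

counts = {            # placeholder counts per entity class (e.g. parts, tools)
    "PART": (120, 10, 25),
    "TOOL": (40, 8, 6),
}
scores = {cls: prf(*c) for cls, c in counts.items()}
macro = tuple(sum(s[i] for s in scores.values()) / len(scores) for i in range(3))
print(scores)
print("macro precision/recall/F1:", macro)
```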

2016

A Vertical and Cyber-Physical Integration of Cognitive Robots in Manufacturing

Authors
Krueger, V; Chazoule, A; Crosby, M; Lasnier, A; Pedersen, MR; Rovida, F; Nalpantidis, L; Petrick, R; Toscano, C; Veiga, G;

Publication
PROCEEDINGS OF THE IEEE

Abstract
Cognitive robots, able to adapt their actions based on sensory information and the management of uncertainty, have begun to find their way into manufacturing settings. However, the potential of these robots has not been fully exploited, largely due to the lack of vertical integration with existing IT infrastructures, such as the manufacturing execution system (MES), as part of a large-scale cyber-physical entity. This paper reports on considerations and findings from the research project STAMINA, which is developing such a cognitive cyber-physical system and applying it to a concrete and well-known use case from the automotive industry. Our approach allows manufacturing tasks to be performed without human intervention, even if the available description of the environment (the world model) suffers from large uncertainties. Thus, the robot becomes an integral part of the MES, resulting in a highly flexible overall system.

2017

Beam for the steel fabrication industry robotic systems

Authors
Rocha, LF; Tavares, P; Malaca, P; Costa, C; Silva, J; Veiga, G;

Publication
ISARC 2017 - Proceedings of the 34th International Symposium on Automation and Robotics in Construction

Abstract
In this paper, we present a comparison between the older DSTV file format and the newer version of the IFC standard, paying special attention to their impact on the robotization of welding and cutting processes in the steel structure fabrication industry. In the last decade, this industry has seen a significant increase in the demand for automation. These new requirements are imposed by a market focused on productivity enhancement through automation. Because of this paradigm change, the information structure and workflow provided by the DSTV format needed to be revised, namely the part related to the planning and management of steel fabrication processes. Therefore, with this work we highlight the importance of the increased digitalization of information that the newer version of the IFC standard provides, by showing how this information can be used to develop advanced robotic cells. In more detail, we focus on the automatic generation of robot welding and cutting trajectories, and on automatic part assembly planning during component fabrication. Despite these advantages, since this information is normally described with reference to a perfect CAD model of the metallic structure, the resulting robot trajectories will have some dimensional error when fitted to the real physical component. Hence, we also present some automatic approaches, based on a laser scanner and simple heuristics, to overcome these limitations.
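
As a rough illustration of how IFC data can feed a robotic cell, the sketch below lists structural members from an IFC file using the ifcopenshell library. The library choice and the file name are assumptions; the paper does not name its tooling, and real trajectory generation would additionally need the members' geometry and placements.

```python
# Hedged sketch: enumerating structural members from an IFC model with ifcopenshell.
# "structure.ifc" is a placeholder path; library choice is an assumption.
import ifcopenshell

model = ifcopenshell.open("structure.ifc")

# Each entity carries a stable identity and typed product data, which is the kind
# of structured information a planner could turn into welding/cutting trajectories.
for beam in model.by_type("IfcBeam"):
    print("beam:", beam.GlobalId, beam.Name)
for plate in model.by_type("IfcPlate"):
    print("plate:", plate.GlobalId, plate.Name)
```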

2016

The SPIDERobot: A Cable-Robot System for On-site Construction in Architecture

Authors
Sousa, JP; Palop, CG; Moreira, E; Pinto, AM; Lima, J; Costa, P; Costa, P; Veiga, G; Paulo Moreira, A;

Publication
Robotic Fabrication in Architecture, Art and Design 2016

Abstract

2017

Pose Invariant Object Recognition Using a Bag of Words Approach

Authors
Costa, CM; Sousa, A; Veiga, G;

Publication
ROBOT 2017: Third Iberian Robotics Conference - Volume 2, Seville, Spain, November 22-24, 2017.

Abstract
Pose invariant object detection and classification plays a critical role in robust image recognition systems and can be applied in a multitude of applications, ranging from simple monitoring to advanced tracking. This paper analyzes the usage of the Bag of Words model for recognizing objects in different scales, orientations and perspective views within cluttered environments. The recognition system relies on image analysis techniques, such as feature detection, description and clustering, along with machine learning classifiers. For pinpointing the location of the target object, a multiscale sliding window approach followed by dynamic thresholding segmentation is proposed. The recognition system was tested with several configurations of feature detectors, descriptors and classifiers and achieved an accuracy of 87% when recognizing cars from an annotated dataset with 177 training images and 177 testing images. © Springer International Publishing AG 2018.
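
The Bag of Words pipeline the abstract outlines (local features, vocabulary clustering, histogram encoding, a trained classifier) can be sketched with OpenCV and scikit-learn. SIFT, the vocabulary size and the linear SVM below are assumptions, not the configuration that produced the reported 87% accuracy, and the multiscale sliding window / dynamic thresholding stage is omitted.

```python
# Hedged sketch of a Bag of Visual Words classifier: SIFT descriptors, k-means
# vocabulary, normalised per-image histograms, linear SVM. Parameters are
# illustrative assumptions, not the paper's tested configuration.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_vocabulary(images, k=200):
    # Cluster all training descriptors into k "visual words".
    sift = cv2.SIFT_create()
    descriptors = [sift.detectAndCompute(img, None)[1] for img in images]
    descriptors = np.vstack([d for d in descriptors if d is not None])
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(descriptors)

def bow_histogram(img, vocab):
    # Encode one image as a normalised histogram of visual-word occurrences.
    sift = cv2.SIFT_create()
    _, des = sift.detectAndCompute(img, None)
    hist = np.zeros(vocab.n_clusters, dtype=np.float32)
    if des is not None:
        for word in vocab.predict(des):
            hist[word] += 1
        hist /= max(hist.sum(), 1.0)
    return hist

def train_bow_classifier(train_images, labels):
    vocab = build_vocabulary(train_images)
    X = np.array([bow_histogram(img, vocab) for img in train_images])
    clf = LinearSVC().fit(X, labels)
    return vocab, clf
```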
