Publications

Publications by Luís Freitas Rocha

2019

Online inspection system based on machine learning techniques: real case study of fabric textures classification for the automotive industry

Authors
Malaca, P; Rocha, LF; Gomes, D; Silva, J; Veiga, G;

Publication
JOURNAL OF INTELLIGENT MANUFACTURING

Abstract
This paper focuses on the real-time classification of fabric textures for the automotive industry under uncontrolled lighting. Many industrial processes have spatial constraints that limit effective control of the illumination of their vision-based systems, hindering their effectiveness. The ability to overcome these problems using robust classification methods, with suitable pre-processing techniques and choice of features, increases the efficiency of this type of solution, with obvious production and therefore economic gains. For this purpose, this paper studies and analyzes various pre-processing techniques and selects the most appropriate fabric features for the considered industrial scenario. The methodology followed is based on the comparison of two machine learning classifiers, ANN and SVM, using a large set of samples with high variability in lighting conditions to faithfully simulate the industrial environment. The obtained solution shows the sensitivity of the ANN relative to the SVM with respect to the number of features and the size of the training set, demonstrating the better effectiveness and robustness of the latter. The feature vector uses histogram equalization, Laws filters, Sobel filters, and multi-scale analysis. By using a correlation-based method it was possible to reduce the number of features, achieving a better balance between processing time and classification rate.
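
A minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn, SciPy, and synthetic image patches in place of the paper's fabric dataset; the specific filters, feature statistics, and classifier settings below are illustrative and are not taken from the paper:

```python
# Sketch only (not the authors' implementation): extract simple texture features
# with histogram equalization, a Laws-style filter and a Sobel filter, then
# compare an SVM and a small ANN on synthetic patches.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def equalize(img):
    # Histogram equalization on an 8-bit grayscale patch.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / img.size
    return (cdf[img.astype(np.uint8)] * 255).astype(np.uint8)

def features(img):
    eq = equalize(img).astype(float)
    # Laws L5E5 kernel (level x edge) and Sobel gradient magnitude.
    l5 = np.array([1, 4, 6, 4, 1], float)
    e5 = np.array([-1, -2, 0, 2, 1], float)
    laws = ndimage.convolve(eq, np.outer(l5, e5))
    sobel = np.hypot(ndimage.sobel(eq, 0), ndimage.sobel(eq, 1))
    # Simple energy statistics as the feature vector.
    return [np.abs(laws).mean(), laws.std(), sobel.mean(), sobel.std()]

rng = np.random.default_rng(0)
# Two synthetic "fabric" classes: smooth vs. striped patches under varying brightness.
X, y = [], []
for label in (0, 1):
    for _ in range(200):
        base = rng.uniform(60, 200)
        patch = rng.normal(base, 10, (32, 32))
        if label == 1:
            patch += 40 * np.sin(np.arange(32) / 2.0)   # add a stripe pattern
        X.append(features(np.clip(patch, 0, 255)))
        y.append(label)
Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))]:
    clf.fit(Xtr, ytr)
    print(name, "accuracy:", clf.score(Xte, yte))
```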

2019

Map-Matching Algorithms for Robot Self-Localization: A Comparison Between Perfect Match, Iterative Closest Point and Normal Distributions Transform

Authors
Sobreira, H; Costa, CM; Sousa, I; Rocha, L; Lima, J; Farias, PCMA; Costa, P; Paulo Moreira, AP;

Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS

Abstract
The self-localization of mobile robots in the environment is one of the most fundamental problems in the robotics navigation field. It is a complex and challenging problem due to the high requirements of autonomous mobile vehicles, particularly with regard to the algorithms' accuracy, robustness and computational efficiency. In this paper, we present a comparison of three of the most used map-matching algorithms applied in localization based on natural landmarks: our implementation of the Perfect Match (PM) and the Point Cloud Library (PCL) implementations of the Iterative Closest Point (ICP) and the Normal Distributions Transform (NDT). For the purpose of this comparison we have considered a set of representative metrics, such as pose estimation accuracy, computational efficiency, convergence speed, maximum admissible initialization error and robustness to the presence of outliers in the robots' sensor data. The test results were retrieved using our ROS natural landmark public dataset, containing several tests with simulated and real sensor data. The performance and robustness of the Perfect Match are highlighted throughout this article and are of paramount importance for real-time embedded systems with limited computing power that require accurate pose estimation and fast reaction times for high speed navigation. Moreover, we added to PCL a new algorithm for performing correspondence estimation using lookup tables that was inspired by the PM approach to solve this problem. This new method for computing the closest map point to a given sensor reading proved to be 40 to 60 times faster than the existing k-d tree approach in PCL and allowed the Iterative Closest Point algorithm to perform point cloud registration 5 to 9 times faster.
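
The lookup-table correspondence idea can be illustrated with a short sketch. This is not the PCL or Perfect Match code; the grid resolution, map points, and coordinate handling below are assumptions for demonstration only:

```python
# Sketch: precompute a 2D lookup grid mapping every cell to its nearest map
# point, so that finding the correspondence for each laser reading is a single
# array access instead of a k-d tree query.
import numpy as np
from scipy import ndimage

resolution = 0.05          # grid cell size in meters (assumed)
size = 200                 # 10 m x 10 m map

# Occupancy grid with a few map points (obstacles).
grid = np.ones((size, size), dtype=bool)            # True = free cell
map_points = np.array([[2.0, 3.0], [7.5, 1.2], [4.4, 8.8]])
idx = (map_points / resolution).astype(int)
grid[idx[:, 0], idx[:, 1]] = False                  # mark occupied cells

# distance_transform_edt returns, for every free cell, the indices of the
# closest occupied cell -> this array is the lookup table.
_, nearest = ndimage.distance_transform_edt(grid, return_indices=True)

def closest_map_point(reading_xy):
    """O(1) correspondence: convert the sensor reading to a cell and look it up."""
    i, j = np.clip((np.asarray(reading_xy) / resolution).astype(int), 0, size - 1)
    return nearest[:, i, j] * resolution             # back to metric coordinates

print(closest_map_point([2.1, 3.2]))   # ~ [2.0, 3.0]
```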

2019

AdaptPack Studio: Automatic Offline Robot Programming Framework for Factory Environments

Authors
Castro, A; Souza, JP; Rocha, L; Silva, MF;

Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)

Abstract
The brisk and dynamic environment that factories face, at both an internal and an external level, requires a collection of handy tools to solve emerging issues in the Industry 4.0 context. Many of the common challenges are related to the increasing demand for high adaptability in organizations' production lines. Mechanical processes are becoming faster and more adjustable to the production diversity of the Fast Moving Consumer Goods (FMCG) sector. Given these characteristics, future factories can only remain competitive and profitable if they are able to quickly adapt all of their production resources in response to shifting market demands. With these concerns in focus, this paper presents a fast and adaptive framework for automated cell modeling, simulation, and offline robot programming, focused on palletizing operations. Built as an add-on for the Visual Components (VC) 3D manufacturing simulation software, the proposed application enables fast layout modeling and automatic offline generation of robot programs. Furthermore, A*-based algorithms are used to generate collision-free trajectories, discretized both in the robot joint space and in Cartesian space. The software was evaluated inside the VC simulation world and in a real-world scenario. The results proved to be consistent and accurate, with minor displacement inaccuracies due to differences between the virtual model and the real world.
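
As an illustration of the discretized, collision-free planning mentioned in the abstract, here is a generic grid-based A* sketch; it is not the AdaptPack Studio planner, whose search space and cost terms are not described here:

```python
# Sketch: A* search on a 2D occupancy grid as a stand-in for planning a
# collision-free path between two discretized configurations.
import heapq, itertools

def astar(grid, start, goal):
    """grid[r][c] == 1 means obstacle; 4-connected moves; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])    # Manhattan heuristic
    tie = itertools.count()                                    # breaks ties in the heap
    open_set = [(h(start), next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                                           # already expanded
        came_from[node] = parent
        if node == goal:                                       # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    heapq.heappush(open_set, (ng + h(nb), next(tie), nb, node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))     # path that goes around the obstacle row
```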

2019

Collaborative Welding System using BIM for Robotic Reprogramming and Spatial Augmented Reality

Authors
Tavares, P; Costa, CM; Rocha, L; Malaca, P; Costa, P; Moreira, AP; Sousa, A; Veiga, G;

Publication
AUTOMATION IN CONSTRUCTION

Abstract
The optimization of the information flow from the initial design through the several production stages plays a critical role in ensuring product quality while also reducing manufacturing costs. As such, in this article we present a cooperative welding cell for structural steel fabrication that is capable of leveraging the Building Information Modeling (BIM) standards to automatically orchestrate the tasks to be allocated to a human operator and a welding robot moving on a linear track. We propose a spatial augmented reality system that projects alignment information into the environment, helping the operator tack weld the beam attachments that will later be seam welded by the industrial robot. This way we ensure maximum flexibility during the beam assembly stage while also improving overall productivity and product quality, since the operator no longer needs to rely on error-prone measurement procedures and receives his tasks through an immersive interface, relieving him from the burden of analyzing complex manufacturing design specifications. Moreover, no expert robotics knowledge is required to operate our welding cell, because all the necessary information is extracted from the Industry Foundation Classes (IFC), namely the CAD models and welding sections. This allows our 3D beam perception systems to correct placement errors or beam bending, which, coupled with our motion planning and welding pose optimization system, ensures that the robot performs its tasks without collisions and as efficiently as possible while maximizing the welding quality.
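
A minimal sketch of pulling structural elements and their connections out of an IFC file, assuming the ifcopenshell library and a hypothetical file name; the paper does not state which IFC toolkit its cell uses:

```python
# Sketch (assumptions labeled): read an IFC model and list the beams, plates and
# element connections that could drive task allocation in a welding cell.
import ifcopenshell

model = ifcopenshell.open("structure.ifc")           # hypothetical file name

for beam in model.by_type("IfcBeam"):
    print("beam:", beam.GlobalId, beam.Name)

for plate in model.by_type("IfcPlate"):
    print("attachment plate:", plate.GlobalId, plate.Name)

# Connections between elements (e.g. where attachments meet beams) are modelled
# with IfcRelConnectsElements; these are candidate weld locations.
for rel in model.by_type("IfcRelConnectsElements"):
    print("connection:", rel.RelatingElement.Name, "->", rel.RelatedElement.Name)
```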

2019

Converting Robot Offline Programs to Native Code Using the AdaptPack Studio Translators

Authors
Souza, JP; Castro, A; Rocha, L; Relvas, P; Silva, MF;

Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)

Abstract
Increased productivity is a demand of modern industries that need to remain competitive in the current business landscape. To face these challenges, companies are increasingly using robotic systems for end-of-line production tasks, such as wrapping and palletizing, as a means to enhance production line efficiency and product traceability, while allowing human operators to be moved to higher added-value operations. Despite this growing use of robotic systems, such equipment still presents some drawbacks regarding the programming procedure, as the time it requires does not meet current industrial needs. To address this drawback, offline robot programming methods are gaining visibility, as their flexibility and programming speed allow companies to cope with successive changes in the production line set-up. However, even with the great number of robots and simulators available on the market, the efforts to support several robot brands in a single software package have not met the needs of engineers. Therefore, this paper proposes a translation library named AdaptPack Studio Translator, which is capable of exporting proprietary code for the ABB, Fanuc, Kuka, and Yaskawa robot brands after offline programming has been performed in the Visual Components software. The results presented in this paper are evaluated in simulated and real scenarios.
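
As a rough illustration of what such a brand translator does, here is a hypothetical sketch that emits one joint-move instruction in two native syntaxes; the command names, program templates, and speed mapping are illustrative and are not taken from the AdaptPack Studio Translator:

```python
# Sketch: translate a generic joint-move command into brand-specific robot code.
from dataclasses import dataclass

@dataclass
class MoveJoint:
    joints: list          # six joint angles in degrees
    speed: int            # percentage of maximum speed

def to_abb_rapid(cmd: MoveJoint) -> str:
    # ABB RAPID-style absolute joint move (illustrative speed mapping).
    j = cmd.joints
    return (f"MoveAbsJ [[{j[0]},{j[1]},{j[2]},{j[3]},{j[4]},{j[5]}],"
            f"[9E9,9E9,9E9,9E9,9E9,9E9]], v{cmd.speed * 10}, fine, tool0;")

def to_fanuc_tp(cmd: MoveJoint, line: int = 1) -> str:
    # Fanuc TP-style joint move referencing a taught position register.
    return f"{line}: J P[{line}] {cmd.speed}% FINE ;"

move = MoveJoint(joints=[0, -30, 45, 0, 60, 0], speed=50)
print(to_abb_rapid(move))
print(to_fanuc_tp(move))
```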

2019

Modeling of video projectors in OpenGL for implementing a spatial augmented reality teaching system for assembly operations

Authors
Costa, CM; Veiga, G; Sousa, A; Rocha, L; Augusto Sousa, AA; Rodrigues, R; Thomas, U;

Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)

Abstract
Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task, while also allowing the operator to navigate between the teaching steps and control the video playback using a bare-hands natural interaction interface projected into the workspace. Moreover, to help the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading the traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches that were proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware, along with the 6 DoF pose estimation of objects for achieving precise overlap between the 3D rendered content and the physical world. On the other hand, the conceptualization of the information flow and how it can be conveyed on-demand was also of critical importance for ensuring a smooth and intuitive experience for the operator.
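
One of the calibration steps mentioned above can be sketched under assumed conventions: building an OpenGL projection matrix from pinhole-style projector intrinsics (image origin at the top-left, OpenGL clip space with y pointing up); the intrinsic values below are made up for illustration:

```python
# Sketch: make an OpenGL "camera" behave like a calibrated video projector, so
# rendered content lands where the projector will shine it.
import numpy as np

def projector_projection(fx, fy, cx, cy, width, height, near=0.1, far=10.0):
    # Off-center perspective frustum equivalent to the pinhole intrinsics,
    # following the standard glFrustum layout (column 3 holds the offsets).
    return np.array([
        [2 * fx / width, 0.0,             (width - 2 * cx) / width,     0.0],
        [0.0,            2 * fy / height, -(height - 2 * cy) / height,  0.0],
        [0.0,            0.0,             -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0,            0.0,             -1.0,                         0.0],
    ])

# Example with made-up projector intrinsics (e.g. from a checkerboard calibration).
P = projector_projection(fx=1400.0, fy=1400.0, cx=640.0, cy=360.0, width=1280, height=720)
print(P)
```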
