Details

  • Name

    Germano Veiga
  • Position

    Senior Researcher
  • Since

    01 February 2012
31
Publications

2024

Inspection of Part Placement Within Containers Using Point Cloud Overlap Analysis for an Automotive Production Line

Authors
Costa C.M.; Dias J.; Nascimento R.; Rocha C.; Veiga G.; Sousa A.; Thomas U.; Rocha L.;

Publication
Lecture Notes in Mechanical Engineering

Abstract
Reliable operation of production lines without unscheduled disruptions is of paramount importance for ensuring the proper operation of automated working cells involving robotic systems. This article addresses the issue of preventing disruptions to an automotive production line that can arise from incorrect placement of aluminum car parts by a human operator in a feeding container with 4 indexing pins for each part. The detection of misplaced parts is critical for avoiding collisions between the containers and a high-pressure washing machine, as well as collisions between the parts and a robotic arm that feeds parts to an air leakage inspection machine. The proposed inspection system relies on a 3D sensor for scanning the parts inside a container, then estimates the 6 DoF pose of the container and analyzes the overlap percentage between each part's reference point cloud and the 3D sensor data. When the overlap percentage is below a given threshold, the part is considered misplaced and the operator is alerted to fix the part placement in the container. The deployment of the inspection system on an automotive production line for 22 weeks showed promising results, avoiding 18 hours of disruptions: it detected 407 containers with misplaced parts in 4524 inspections, of which 12 were false negatives, while no false positives were reported. This eliminated disruptions to the production line at the cost of manual reinspection by the operator of the 0.27% of containers that were false negatives.
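
A minimal sketch of the overlap test described in the abstract (an illustration, not the authors' implementation), assuming the container pose has already been estimated and both clouds are expressed in the same frame; the 2 mm correspondence distance and the 80% threshold are placeholder values, not figures from the paper:

    import numpy as np
    from scipy.spatial import cKDTree

    def overlap_percentage(reference_cloud, sensor_cloud, max_dist=0.002):
        """Fraction of reference points with a sensor point within max_dist (metres)."""
        tree = cKDTree(sensor_cloud)            # index the 3D sensor scan
        dists, _ = tree.query(reference_cloud)  # nearest sensor point for each reference point
        return float(np.mean(dists <= max_dist))

    def part_is_misplaced(reference_cloud, sensor_cloud, threshold=0.80):
        # A part is flagged as misplaced when too few of its reference points
        # are explained by the sensor data, triggering an alert to the operator.
        return overlap_percentage(reference_cloud, sensor_cloud) < threshold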

2023

Deep learning-based human action recognition to leverage context awareness in collaborative assembly

Authors
Moutinho, D; Rocha, LF; Costa, CM; Teixeira, LF; Veiga, G;

Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING

Abstract
Human-Robot Collaboration is a critical component of Industry 4.0, contributing to a transition towards more flexible production systems that are quickly adjustable to changing production requirements. This paper aims to increase the natural collaboration level of a robotic engine assembly station by proposing a cognitive system powered by computer vision and deep learning to interpret implicit communication cues of the operator. The proposed system, based on a 34-layer residual convolutional neural network combined with a long short-term memory recurrent neural network (ResNet-34 + LSTM), obtains assembly context through action recognition of the tasks performed by the operator. The assembly context was then integrated into a collaborative assembly plan capable of autonomously commanding the robot tasks. The proposed model performed well, achieving an accuracy of 96.65% and a temporal mean intersection over union (mIoU) of 94.11% for the action recognition of the considered assembly. Moreover, a task-oriented evaluation showed that the proposed cognitive system was able to leverage the recognized human actions to command the adequate robot actions with near-perfect accuracy. As such, the proposed system was considered successful at increasing the natural collaboration level of the considered assembly station.
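
A rough sketch of the ResNet-34 + LSTM architecture family named in the abstract, written with PyTorch/torchvision as an assumption; the number of action classes, hidden size and input clip length below are placeholders, not values from the paper:

    import torch.nn as nn
    from torchvision.models import resnet34

    class FrameSequenceClassifier(nn.Module):
        """Per-frame ResNet-34 features fed to an LSTM for action recognition."""
        def __init__(self, num_actions=10, hidden_size=256):
            super().__init__()
            self.backbone = resnet34(weights=None)  # CNN feature extractor (load weights in practice)
            self.backbone.fc = nn.Identity()        # expose the 512-d frame embedding
            self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, num_actions)

        def forward(self, clips):                   # clips: (batch, time, 3, H, W)
            b, t = clips.shape[:2]
            feats = self.backbone(clips.flatten(0, 1))  # (b*t, 512) per-frame features
            feats = feats.view(b, t, -1)                # regroup into sequences
            out, _ = self.lstm(feats)                   # temporal modelling of the assembly task
            return self.head(out[:, -1])                # action logits for the clip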

2023

Comparison of 3D Sensors for Automating Bolt-Tightening Operations in the Automotive Industry

Authors
Dias, J; Simoes, P; Soares, N; Costa, CM; Petry, MR; Veiga, G; Rocha, LF;

Publication
SENSORS

Abstract
Machine vision systems are widely used in assembly lines to provide sensing abilities to robots so that they can handle dynamic environments. This paper presents a comparison of 3D sensors to evaluate which one is best suited for a machine vision system for robotic fastening operations within an automotive assembly line. The perception system is necessary to account for the position uncertainty that arises from the vehicles being transported on an aerial conveyor. Three sensors with different working principles were compared, namely laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S) and structured light with infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds, namely CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans, with the SICK TriSpector1030 achieving the best results (0.25 mm +/- 0.03 mm), the Photoneo PhoXi S showing intermediate performance (0.49 mm +/- 0.14 mm) and the Asus Xtion Pro Live obtaining the highest RMSE (1.01 mm +/- 0.11 mm). Considering the use case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving 94% success in 53,400 screwing operations.
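
A minimal sketch of the registration RMSE metric used for the comparison (illustrative only, not the paper's code), assuming the scan has already been aligned to the reference cloud and both are plain N x 3 arrays:

    import numpy as np
    from scipy.spatial import cKDTree

    def registration_rmse(aligned_scan, reference_cloud):
        """Root mean square of nearest-neighbour distances from the scan to the reference."""
        dists, _ = cKDTree(reference_cloud).query(aligned_scan)
        return float(np.sqrt(np.mean(dists ** 2)))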

2022

A kinesthetic teaching approach for automating micropipetting repetitive tasks

Authors
Rocha, C; Dias, J; Moreira, AP; Veiga, G; Costa, P;

Publication
INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY

Abstract
Nowadays, a laboratory operator in the areas of chemistry, biology or medicine spends considerable time performing micropipetting procedures, a common, monotonous and repetitive task that compromises ergonomics, in particular through wrist musculoskeletal disorders. In this work, the design of a kinesthetic teaching approach for automating the micropipetting technique is presented, making it possible to redirect the operator to other, non-repetitive tasks and thereby reduce exposure to ergonomic risks. The proposed robotic solution has an innovative gripping system capable of supporting, actuating and regulating the volume of a manual micropipette. The system is able to configure the position of diverse laboratory materials, such as lab containers and plates, on the workbench through a collaborative robotic arm, providing flexibility to adapt to different procedures. A projected human-machine interface, which combines the display of information on the workbench with an infrared-based interaction device, was developed, providing a more intuitive interaction between the operator and the system during the configuration and operation phases. In contrast to the majority of existing liquid handling systems, the proposed system allows the operator to place the materials freely on the workbench and to use different material variants, facilitating the implementation of the system in any laboratory. The attained performance and ease of use of the system were very encouraging, since all the tasks defined in the conducted experiments were successfully performed by users with minimal training, highlighting its potential inclusion in the laboratory routine.
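
A toy sketch of the kinesthetic teaching idea behind this work (record waypoints while the arm is hand-guided, then replay them); the Robot class below is hypothetical and stands in for whatever vendor SDK or ROS driver the real system uses:

    import time

    class Robot:
        """Hypothetical arm driver; a real system wraps the robot's own API."""
        def __init__(self):
            self._joints = [0.0] * 6
        def set_free_drive(self, enabled):
            self.free_drive = enabled       # gravity-compensation / hand-guiding mode
        def read_joints(self):
            return list(self._joints)       # current joint angles
        def move_to_joints(self, joints):
            self._joints = list(joints)     # command a joint-space motion

    def teach_waypoints(robot, n_points, pause_s=2.0):
        # Operator hand-guides the arm; a joint snapshot is stored per waypoint.
        robot.set_free_drive(True)
        waypoints = []
        for _ in range(n_points):
            time.sleep(pause_s)             # time for the operator to position the arm
            waypoints.append(robot.read_joints())
        robot.set_free_drive(False)
        return waypoints

    def replay(robot, waypoints):
        # Re-execute the taught motion autonomously.
        for joints in waypoints:
            robot.move_to_joints(joints)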

2022

Using Simulation to Evaluate a Tube Perception Algorithm for Bin Picking

Authors
Leao, G; Costa, CM; Sousa, A; Reis, LP; Veiga, G;

Publication
ROBOTICS

Abstract
Bin picking is a challenging problem that involves using a robotic manipulator to remove, one by one, a set of objects randomly stacked in a container. In order to provide ground truth data for evaluating heuristic or machine learning perception systems, this paper proposes using simulation to create bin picking environments in which a procedural generation method builds entangled tubes that can have curvatures throughout their length. The output of the simulation is an annotated point cloud, generated by a virtual 3D depth camera, in which the tubes are assigned unique colors. A general metric based on micro-recall is proposed to compare the accuracy of point cloud annotations with the ground truth. The synthetic data is representative of a high-quality 3D scanner, given that the performance of a tube modeling system on 640 simulated point clouds was similar to the results achieved with real sensor data. Therefore, simulation is a promising technique for the automated evaluation of solutions for bin picking tasks.
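
A small sketch of a micro-averaged recall over per-point labels, in the spirit of the metric mentioned above (illustrative only; the paper's metric also has to match predicted tube identities to ground-truth ones, which is omitted here):

    import numpy as np

    def micro_recall(ground_truth_labels, predicted_labels):
        """Pooled recall: correctly labelled points over all annotated points."""
        gt = np.asarray(ground_truth_labels)
        pred = np.asarray(predicted_labels)
        return float(np.mean(pred == gt))   # every point counts once, regardless of its tube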

Supervised theses

2022

Open Scalable Production System: An Industry 4.0 Framework for Cyber-Physical Systems

Author
Rafael Lírio Arrais

Institution
UP-FEUP

2021

Adaptation of a collaborative robot to support image acquisition in a dental clinic

Author
Bruno da Costa Rocha

Institution
UP-FEUP

2021

Inspection and control system during series-production life

Author
José Luís da Cruz Almeida

Institution
UP-FEUP

2021

Conception and simulation of a robotic cell based on the digital twin concept for industrial manufacturing

Author
Tomás Alexandrino Oliveira Flores da Cunha

Institution
UP-FEUP

2020

Open Scalable Production System: An Industry 4.0 Framework for Cyber-Physical Systems

Author
Rafael Lírio Arrais

Institution
UP-FEUP