
Publications by Luís Freitas Rocha

2023

Bin Picking for Ship-Building Logistics Using Perception and Grasping Systems

Authors
Cordeiro, A; Souza, JP; Costa, CM; Filipe, V; Rocha, LF; Silva, MF;

Publication
ROBOTICS

Abstract
Bin picking is a challenging task involving many research domains within the perception and grasping fields, for which there are no perfect and reliable solutions applicable to the wide range of unstructured and cluttered environments present in industrial factories and logistics centers. This paper contributes research on object segmentation in cluttered scenarios, independent of prior knowledge of object shape, for textured and textureless objects. In addition, it addresses the demand for extended datasets in deep learning tasks with realistic data. We propose a solution using a Mask R-CNN for 2D object segmentation, trained with real data acquired from an RGB-D sensor and synthetic data generated in Blender, combined with 3D point-cloud segmentation to extract a segmented point cloud belonging to a single object from the bin. Next, a reconfigurable pipeline for 6-DoF object pose estimation is employed, followed by a grasp planner that selects a feasible grasp pose. The experimental results show that the object segmentation approach is efficient and accurate in cluttered scenarios with several occlusions. The neural network model, trained with both real and simulated data, improved the success rate over the previous classical segmentation, yielding an overall grasping success rate of 87.5%.
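
As a rough illustration of how a 2D segmentation mask and an RGB-D depth image can be combined into a single-object point cloud, the short Python sketch below back-projects the masked depth pixels with a pinhole camera model; the mask_to_point_cloud helper, the camera intrinsics, and the synthetic depth image are illustrative assumptions rather than the paper's implementation.

import numpy as np

def mask_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the depth pixels selected by a 2D segmentation mask
    into an (N, 3) object point cloud using a pinhole camera model."""
    v, u = np.nonzero(mask)             # pixel coordinates inside the mask
    z = depth[v, u]
    valid = z > 0                       # discard missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Synthetic example: a 480x640 depth image (metres) and a rectangular object mask.
depth = np.full((480, 640), 0.8, dtype=np.float32)
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:380] = True
cloud = mask_to_point_cloud(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                      # (6400, 3)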

2023

Deep learning-based human action recognition to leverage context awareness in collaborative assembly

Authors
Moutinho, D; Rocha, LF; Costa, CM; Teixeira, LF; Veiga, G;

Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING

Abstract
Human-Robot Collaboration is a critical component of Industry 4.0, contributing to a transition towards more flexible production systems that are quickly adjustable to changing production requirements. This paper aims to increase the natural collaboration level of a robotic engine assembly station by proposing a cognitive system powered by computer vision and deep learning that interprets implicit communication cues of the operator. The proposed system, based on a 34-layer residual convolutional neural network combined with a long short-term memory recurrent neural network (ResNet-34 + LSTM), obtains assembly context through action recognition of the tasks performed by the operator. The assembly context was then integrated into a collaborative assembly plan capable of autonomously commanding the robot tasks. The proposed model showed strong performance, achieving an accuracy of 96.65% and a temporal mean intersection over union (mIoU) of 94.11% for the action recognition of the considered assembly. Moreover, a task-oriented evaluation showed that the proposed cognitive system was able to leverage the recognized human actions to command the adequate robot actions with near-perfect accuracy. As such, the proposed system was considered successful at increasing the natural collaboration level of the considered assembly station.
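
The short PyTorch sketch below illustrates the general ResNet-34 + LSTM pattern described above, where per-frame CNN features feed a recurrent classifier over a clip of frames; the ActionRecognizer class, clip length, hidden size, and number of actions are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn
from torchvision.models import resnet34

class ActionRecognizer(nn.Module):
    """Per-frame ResNet-34 features fed to an LSTM that classifies the
    action performed over a clip of consecutive frames."""
    def __init__(self, num_actions, hidden_size=256):
        super().__init__()
        backbone = resnet34(weights=None)                                # frame encoder
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop FC head
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_actions)

    def forward(self, clip):                          # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.features(clip.flatten(0, 1))     # (B*T, 512, 1, 1)
        feats = feats.flatten(1).view(b, t, 512)      # (B, T, 512)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])            # logits from the last time step

model = ActionRecognizer(num_actions=10)
logits = model(torch.randn(2, 8, 3, 224, 224))        # 2 clips of 8 frames each
print(logits.shape)                                   # torch.Size([2, 10])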

2023

Comparison of 3D Sensors for Automating Bolt-Tightening Operations in the Automotive Industry

Authors
Dias, J; Simoes, P; Soares, N; Costa, CM; Petry, MR; Veiga, G; Rocha, LF;

Publication
SENSORS

Abstract
Machine vision systems are widely used in assembly lines to provide robots with the sensing abilities needed to handle dynamic environments. This paper presents a comparison of 3D sensors to evaluate which one is best suited for a machine vision system for robotic fastening operations within an automotive assembly line. The perception system is necessary to account for the position uncertainty that arises from the vehicles being transported in an aerial conveyor. Three sensors with different working principles were compared, namely laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S), and structured light with an infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds, namely CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans, with the SICK TriSpector1030 achieving the best results (0.25 mm +/- 0.03 mm), the Photoneo PhoXi S showing intermediate performance (0.49 mm +/- 0.14 mm), and the Asus Xtion Pro Live obtaining the highest RMSE (1.01 mm +/- 0.11 mm). Considering the use-case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving 94% success in 53,400 screwing operations.
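
A minimal sketch of the kind of registration-RMSE measurement described above, using point-to-point ICP from the Open3D library; the registration_rmse helper, the 2 mm correspondence threshold, and the synthetic clouds are illustrative assumptions, not the paper's evaluation pipeline.

import numpy as np
import open3d as o3d

def registration_rmse(scan, reference, max_distance=0.002):
    """Align a sensor scan to a reference cloud with point-to-point ICP
    and return the inlier RMSE of the registration (in metres)."""
    result = o3d.pipelines.registration.registration_icp(
        scan, reference, max_distance, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.inlier_rmse

# Toy example: a noisy copy of a random cloud registered against the original.
points = np.random.rand(2000, 3)
reference = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
scan = o3d.geometry.PointCloud(
    o3d.utility.Vector3dVector(points + np.random.normal(0, 0.0005, points.shape)))
print(f"RMSE: {registration_rmse(scan, reference) * 1000:.3f} mm")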

2024

Inspection of Part Placement Within Containers Using Point Cloud Overlap Analysis for an Automotive Production Line

Authors
Costa C.M.; Dias J.; Nascimento R.; Rocha C.; Veiga G.; Sousa A.; Thomas U.; Rocha L.;

Publication
Lecture Notes in Mechanical Engineering

Abstract
Reliable operation of production lines without unscheduled disruptions is of paramount importance for ensuring the proper operation of automated working cells involving robotic systems. This article addresses the prevention of disruptions to an automotive production line that can arise from incorrect placement of aluminum car parts by a human operator in a feeding container with 4 indexing pins for each part. The detection of misplaced parts is critical for avoiding collisions between the containers and a high-pressure washing machine, and between the parts and a robotic arm that feeds parts to an air leakage inspection machine. The proposed inspection system relies on a 3D sensor to scan the parts inside a container, then estimates the 6-DoF pose of the container and analyzes the overlap percentage between each part's reference point cloud and the 3D sensor data. When the overlap percentage is below a given threshold, the part is considered misplaced and the operator is alerted to fix the part placement in the container. The deployment of the inspection system on an automotive production line for 22 weeks showed promising results, avoiding 18 hours of disruptions: it detected 407 containers with misplaced parts in 4524 inspections, with 12 false negatives and no false positives, eliminating disruptions to the production line at the cost of manual reinspection by the operator of the 0.27% of containers corresponding to false negatives.
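
The short Python sketch below illustrates the overlap-analysis idea: it computes the percentage of reference-part points with a sensor point within a distance threshold and flags the part when the overlap falls below a limit; the overlap_percentage helper, the 3 mm threshold, and the 90% limit are illustrative assumptions, not the deployed system's parameters.

import numpy as np
from scipy.spatial import cKDTree

def overlap_percentage(reference_part, sensor_cloud, max_distance=0.003):
    """Percentage of the reference part's points that have a sensor point
    within max_distance (in metres)."""
    distances, _ = cKDTree(sensor_cloud).query(reference_part, k=1)
    return 100.0 * np.mean(distances <= max_distance)

# Toy example: a part observed with small sensor noise is almost fully overlapped.
part = np.random.rand(5000, 3) * 0.1
scan = part + np.random.normal(0, 0.0005, part.shape)
pct = overlap_percentage(part, scan)
print(f"overlap: {pct:.1f}%")
if pct < 90.0:                        # illustrative threshold for a misplaced part
    print("part flagged as misplaced")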

2023

Quality Control of Casting Aluminum Parts: A Comparison of Deep Learning Models for Filings Detection

Authors
Nascimento, R; Ferreira, T; Rocha, C; Filipe, V; Silva, MF; Veiga, G; Rocha, L;

Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC

Abstract
Quality control inspection systems are crucial and a key factor in maintaining and ensuring the integrity of any product. The quality inspection task is repetitive and, when performed by operators alone, can be slow and susceptible to failures due to lack of attention and fatigue. This work focuses on the inspection of parts made of high-pressure die-cast aluminum for components of the automotive industry. In the present case study, 18240 parts needed to be reinspected last year, requiring approximately 96 hours, time that could have been spent on other tasks. This article compares four deep learning models, Faster R-CNN, RetinaNet, YOLOv7, and YOLOv7-tiny, to determine which one is best suited to perform the quality inspection task of detecting metal filings on casting aluminum parts. Since this use case requires the prototype to be highly intolerant to false negatives, that is, a defective part passing undetected, Faster R-CNN was considered the best-performing model based on a recall value of 96.00%.
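
As a brief reminder of why recall drives the model choice in this use case, the sketch below computes recall from true-positive and false-negative counts; the counts used are illustrative only, not the paper's results.

def recall(true_positives, false_negatives):
    """Recall = TP / (TP + FN): the fraction of truly defective parts that
    the detector actually flags; missed defects are false negatives."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts only.
print(f"recall = {recall(190, 10):.2%}")   # 95.00%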

2011

Shop Floor Scheduling in a Mobile Robotic Environment

Authors
Pinto, AM; Rocha, LF; Moreira, AP; Costa, PG;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE

Abstract
Nowadays, it is far more common to see mobile robotics working in the industrial sphere due to the mandatory need to achieve a new level of productivity and increase profits by reducing production costs. Management scheduling and task scheduling are crucial for companies that incessantly seek to improve their processes, increase their efficiency, reduce their production time, and capitalize on their infrastructure by increasing and improving production. However, when faced with the constant decrease in production cycles, management algorithms can no longer focus solely on managing the available resources; they must attempt to optimize every interaction between them to achieve maximum efficiency for each production resource. In this paper we present the new competition called Robot Factory, its environment, and its main objectives, paying special attention to the scheduling algorithm developed for this specific case study. The findings from the simulation approach have allowed us to conclude that mobile robotic path planning and the scheduling of the associated tasks represent a complex problem that has a strong impact on the efficiency of the entire production process.
