2023
Authors
Conde, M; Rodríguez Sedano, J; Gonçalves, J; García Peñalvo, FJ;
Publication
CEUR Workshop Proceedings
Abstract
In contemporary society, there is a growing demand for professionals with the essential skills required in the 21st century. The STEAM (Science, Technology, Engineering, Arts, and Mathematics) disciplines have emerged as pivotal in the acquisition of these skills. Indeed, these disciplines have demonstrated their capacity to enhance workforce performance and strengthen a nation's innovation potential, underscoring the need to promote STEAM education among students and to integrate it into existing curricula. Nonetheless, the inclusion of students with intellectual or developmental disabilities (IDD) in these disciplines presents formidable challenges, attributable to prevailing low expectations regarding the potential of disabled individuals to excel in STEAM fields, the inaccessibility of STEAM curricula, and the limitations that educators face in fully supporting the integration of students with disabilities. In response to these challenges, we introduce the RoboSTEAMSEN project. Its principal objective is to strengthen educational processes by equipping teachers who work with students with IDD with methodologies and tools that employ robotics and active learning methodologies to promote STEAM education. The project's overarching goals are: understanding the specific needs of disabled students and adapting robotics and active learning techniques to accommodate various disabilities; designing comprehensive training programs that enable teachers to individualize the learning experiences of students with IDD; and establishing a community of practice, supported by a technological ecosystem, that serves as a central hub for educators and decision-makers to discuss how to achieve success in STEAM education for IDD students. The primary outcome of this project will be the enhancement of STEAM education for students with IDD. To achieve this objective, we will develop a taxonomy for categorizing resources tailored to this demographic, institute a user model for personalized learning, produce guides, resources, and courses for teachers, formulate workshop models for the wider dissemination of project findings, and establish a technological ecosystem to sustain a thriving community of practice dedicated to this educational domain.
2023
Authors
Dias, J; Simoes, P; Soares, N; Costa, CM; Petry, MR; Veiga, G; Rocha, LF;
Publication
SENSORS
Abstract
Machine vision systems are widely used in assembly lines to provide robots with the sensing abilities needed to handle dynamic environments. This paper presents a comparison of 3D sensors to evaluate which is best suited for a machine vision system for robotic fastening operations within an automotive assembly line. The perception system is necessary to account for the position uncertainty that arises from the vehicles being transported on an aerial conveyor. Three sensors with different working principles were compared: laser triangulation (SICK TriSpector1030), structured light with sequential stripe patterns (Photoneo PhoXi S), and structured light with an infrared speckle pattern (Asus Xtion Pro Live). The accuracy of the sensors was measured by computing the root mean square error (RMSE) of the point cloud registrations between their scans and two types of reference point clouds: CAD files and 3D sensor scans. Overall, the RMSE was lower when using sensor scans, with the SICK TriSpector1030 achieving the best results (0.25 mm +/- 0.03 mm), the Photoneo PhoXi S an intermediate performance (0.49 mm +/- 0.14 mm), and the Asus Xtion Pro Live the highest RMSE (1.01 mm +/- 0.11 mm). Considering the use-case requirements, the final machine vision system relied on the SICK TriSpector1030 sensor and was integrated with a collaborative robot, which was successfully deployed in a vehicle assembly line, achieving 94% success in 53,400 screwing operations.
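The registration RMSE used above can be illustrated with a short sketch. This is a minimal, hypothetical example (the function name and the use of SciPy's cKDTree are our assumptions, not the paper's implementation): after a scan has been aligned to the reference cloud, the RMSE is the root mean square of each scan point's distance to its nearest reference point.

```python
# Minimal sketch (assumed implementation, not the paper's code): RMSE of an
# aligned scan against a reference point cloud, e.g. a sampled CAD model.
import numpy as np
from scipy.spatial import cKDTree

def registration_rmse(scan: np.ndarray, reference: np.ndarray) -> float:
    """scan: (N, 3) XYZ points already registered to the reference frame.
    reference: (M, 3) XYZ points from a CAD file or another sensor scan."""
    tree = cKDTree(reference)
    distances, _ = tree.query(scan)               # nearest-neighbor distance per point
    return float(np.sqrt(np.mean(distances ** 2)))
```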
2023
Authors
Cordeiro, A; Rocha, LF; Costa, C; Silva, MF;
Publication
ROBOT2022: FIFTH IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 2
Abstract
Bin picking based on deep learning techniques is a promising approach that can overcome several problems of analytical methods. Such systems can provide accurate solutions to bin picking in cluttered environments, where the scenario is constantly changing. This article proposes a robust and accurate system for segmenting bin-picking objects, employing an easy configuration procedure to adjust the framework to a specific object. The framework is implemented in the Robot Operating System (ROS) and is divided into a detection system and a segmentation system. The detection system employs a Mask R-CNN instance segmentation neural network to identify several objects in two-dimensional (2D) grayscale images. The segmentation system relies on the Point Cloud Library (PCL), manipulating 3D point cloud data according to the detection results to select particular points of the original point cloud and generate a partial point cloud as a result. Furthermore, to complete the bin-picking system, a pose estimation approach based on matching algorithms, such as the Iterative Closest Point (ICP), is employed. The system was evaluated for two types of objects, a knee tube and a triangular wall support, in cluttered environments. It displayed an average precision of 79% for both models, an average recall of 92%, and an average IoU of 89%. As shown throughout the article, the system achieves high accuracy in cluttered environments with several occlusions for different types of objects.
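To make the detection/segmentation split concrete, here is a minimal sketch of how a 2D instance mask can carve a partial point cloud out of an organized cloud. The function name and the NumPy-based approach are our assumptions (the paper uses PCL), under the assumption that the cloud is pixel-aligned with the 2D image.

```python
# Hypothetical sketch of the 2D-mask -> partial point cloud step (the paper
# uses PCL; this NumPy version only illustrates the idea).
import numpy as np

def mask_to_partial_cloud(cloud: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """cloud: (H, W, 3) organized XYZ cloud aligned with the 2D image.
    mask: (H, W) boolean instance mask from the Mask R-CNN detector.
    Returns the (N, 3) partial point cloud of the detected object."""
    points = cloud[mask]                       # keep points inside the mask
    valid = ~np.isnan(points).any(axis=1)      # drop invalid depth readings
    return points[valid]
```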
2023
Authors
Cordeiro, A; Souza, JP; Costa, CM; Filipe, V; Rocha, LF; Silva, MF;
Publication
ROBOTICS
Abstract
Bin picking is a challenging task involving many research domains within the perception and grasping fields, for which there are no perfect and reliable solutions applicable to the wide range of unstructured and cluttered environments present in industrial factories and logistics centers. This paper contributes research on object segmentation in cluttered scenarios, independent of prior knowledge of object shape, for both textured and textureless objects. In addition, it addresses the demand for extended datasets in deep learning tasks with realistic data. We propose a solution using a Mask R-CNN for 2D object segmentation, trained with real data acquired from an RGB-D sensor and synthetic data generated in Blender, combined with 3D point-cloud segmentation to extract a segmented point cloud belonging to a single object from the bin. Next, a re-configurable pipeline for 6-DoF object pose estimation is employed, followed by a grasp planner that selects a feasible grasp pose. The experimental results show that the object segmentation approach is efficient and accurate in cluttered scenarios with several occlusions. The neural network model was trained with both real and simulated data, improving the success rate over the previous classical segmentation and yielding an overall grasping success rate of 87.5%.
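As an illustration of an ICP-style matching stage like the one mentioned above, the following is a minimal sketch using Open3D. The library choice, function name, and the 1 cm correspondence threshold are our assumptions, not the paper's pipeline: the object model is aligned to the segmented partial cloud, and the resulting 4x4 transform is the object's 6-DoF pose in the sensor frame.

```python
# Hypothetical sketch of ICP-based 6-DoF pose estimation with Open3D; the
# paper's re-configurable pipeline is not limited to this single step.
import numpy as np
import open3d as o3d

def estimate_pose(model_pts: np.ndarray, segmented_pts: np.ndarray) -> np.ndarray:
    """Align the object model (N, 3) to the segmented partial cloud (M, 3)
    and return the object's pose as a 4x4 homogeneous transform."""
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_pts))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(segmented_pts))
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.01,   # assumed 1 cm threshold; tune per object
        init=np.eye(4),                     # ICP needs a rough initial alignment
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return np.asarray(result.transformation)
```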
2023
Authors
Moutinho, D; Rocha, LF; Costa, CM; Teixeira, LF; Veiga, G;
Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING
Abstract
Human-Robot Collaboration is a critical component of Industry 4.0, contributing to a transition towards more flexible production systems that can be quickly adjusted to changing production requirements. This paper aims to increase the natural collaboration level of a robotic engine assembly station by proposing a cognitive system, powered by computer vision and deep learning, that interprets implicit communication cues of the operator. The proposed system, based on a residual convolutional neural network with 34 layers and a long short-term memory recurrent neural network (ResNet-34 + LSTM), obtains assembly context through action recognition of the tasks performed by the operator. The assembly context is then integrated into a collaborative assembly plan capable of autonomously commanding the robot tasks. The proposed model performed well, achieving an accuracy of 96.65% and a temporal mean intersection over union (mIoU) of 94.11% for the action recognition of the considered assembly. Moreover, a task-oriented evaluation showed that the proposed cognitive system was able to leverage the recognized human actions to command the adequate robot actions with near-perfect accuracy. As such, the proposed system was considered successful at increasing the natural collaboration level of the considered assembly station.
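The ResNet-34 + LSTM architecture described above can be sketched in a few lines of PyTorch. This is a minimal, assumed reconstruction: the hidden size, the use of the pooled 512-d backbone features, and classifying from the last timestep are our choices, not the paper's exact configuration.

```python
# Minimal PyTorch sketch of a ResNet-34 + LSTM action recognizer (assumed
# reconstruction; hyperparameters are illustrative, not the paper's).
import torch
import torch.nn as nn
from torchvision import models

class ActionRecognizer(nn.Module):
    def __init__(self, num_actions: int, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet34(weights=None)
        backbone.fc = nn.Identity()            # expose the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_actions)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t = clips.shape[:2]                 # clips: (batch, time, 3, H, W)
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)              # per-frame features over time
        return self.head(out[:, -1])           # class logits from last timestep
```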
2023
Authors
Nascimento, R; Ferreira, T; Rocha, C; Filipe, V; Silva, MF; Veiga, G; Rocha, L;
Publication
2023 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS, ICARSC
Abstract
Quality control inspection systems are crucial and a key factor in maintaining and ensuring the integrity of any product. Quality inspection is a repetitive task; when performed by operators alone, it can be slow and susceptible to failures due to lack of attention and fatigue. This work focuses on the inspection of parts made of high-pressure die-cast aluminum for components of the automotive industry. In the present case study, 18,240 parts needed to be re-inspected last year, requiring approximately 96 hours, time that could have been spent on other tasks. This article compares four deep learning models, Faster R-CNN, RetinaNet, YOLOv7, and YOLOv7-tiny, to find out which is best suited to the quality inspection task of detecting metal filings on cast aluminum parts. Since, for this use case, the prototype must be highly intolerant to false negatives, that is, a defective part passing undetected, Faster R-CNN was considered the best-performing model based on a recall value of 96.00%.
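Because the selection criterion here is intolerance to false negatives, recall is the deciding metric. The short sketch below illustrates the calculation; the helper name and the example counts are illustrative placeholders, not the paper's data.

```python
# Sketch of the selection criterion: recall penalizes false negatives, i.e.
# defective parts that pass undetected.
def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of truly defective parts that the detector catches."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts only (not the paper's data): 96 of 100 defects caught.
print(f"recall = {recall(96, 4):.2%}")  # -> recall = 96.00%
```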