2019
Authors
Reis, R; Diniz, F; Mizioka, L; Yamasaki, R; Lemos, G; Quintiães, M; Menezes, R; Caldas, N; Vita, R; Schultz, R; Arrais, R; Pereira, A;
Publication
MATEC Web of Conferences
Abstract
2019
Authors
Arrais, R; Veiga, G; Ribeiro, TT; Oliveira, D; Fernandes, R; Conceição, AGS; Farias, PCMA;
Publication
Progress in Artificial Intelligence, 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Proceedings, Part II.
Abstract
To support the full adoption of Cyber-Physical Systems (CPS) in modern production lines, effective solutions need to be extended to the technological domains of robotics and industrial automation. This paper addresses the description, application, and usage results of the Open Scalable Production System (OSPS) and its underlying skill-based robot programming ideology to support machine tending of additive manufacturing operations by a mobile manipulator. © 2019, Springer Nature Switzerland AG.
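The skill-based programming ideology mentioned above can be illustrated with a minimal sketch: a task is a sequence of named skills with parameters, dispatched against a skill registry. The skill names, registry decorator, and state dictionary are illustrative assumptions, not the OSPS API.

```python
# Hypothetical skill registry: skills register themselves by name and a
# task is executed as an ordered list of (skill, argument) pairs.
SKILLS = {}

def skill(name):
    """Decorator that registers a function as a named skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("move_to")
def move_to(state, target):
    state["pose"] = target          # pretend the robot moved
    return state

@skill("pick")
def pick(state, item):
    state["holding"] = item         # pretend the gripper closed on the item
    return state

def run_task(task, state=None):
    """Dispatch each (skill, argument) step against the registry."""
    state = state if state is not None else {}
    for name, arg in task:
        state = SKILLS[name](state, arg)
    return state

# Machine-tending snippet: approach the printer tray and remove the part.
state = run_task([("move_to", "printer_tray"), ("pick", "printed_part")])
```

The point of the pattern is that new skills can be added to the registry without changing the task executor, which is what makes the approach reconfigurable for different machine-tending operations.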
2020
Authors
Arrais, R; Ribeiro, P; Domingos, H; Veiga, G;
Publication
INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
Abstract
Motivated by the Fourth Industrial Revolution, there is an ever-increasing need to integrate Cyber-Physical Systems in industrial production environments. To address the demand for flexible robotics in contemporary industrial environments and the necessity to integrate robots and automation equipment in an efficient manner, an effective, bidirectional, reliable and structured data interchange mechanism is required. As an answer to these requirements, this article presents ROBIN, an open-source middleware for achieving interoperability between the Robot Operating System and CODESYS, a softPLC that can run on embedded devices and that supports a variety of fieldbuses and industrial network protocols. This middleware was successfully applied and tested in various industrial applications, such as battery management systems, control of motion, robotic manipulator and safety hardware, and horizontal integration between a mobile manipulator and a conveyor system.
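The bidirectional data interchange described above can be sketched as a variable-mapping bridge: ROS topic payloads are written into a shared variable table that the PLC side reads, and vice versa. The `SharedMemorySegment` stand-in, the mapping table, and all names below are illustrative assumptions, not ROBIN's actual implementation.

```python
# Toy sketch of a bidirectional ROS <-> softPLC variable bridge. A real
# middleware would use an actual shared-memory segment and ROS nodes;
# here a dict stands in so the idea is self-contained.

class SharedMemorySegment:
    """Stand-in for a shared-memory block holding PLC variables."""
    def __init__(self):
        self._vars = {}

    def write(self, name, value):
        self._vars[name] = value

    def read(self, name):
        return self._vars.get(name)


class RosPlcBridge:
    """Maps ROS topic payloads to PLC variables and back."""
    def __init__(self, segment, topic_to_var):
        self.segment = segment
        self.topic_to_var = topic_to_var    # e.g. {"/conveyor/speed": "ConveyorSpeed"}
        self.var_to_topic = {v: k for k, v in topic_to_var.items()}

    def on_ros_message(self, topic, value):
        # ROS -> PLC direction: forward the message into the variable table.
        self.segment.write(self.topic_to_var[topic], value)

    def poll_plc(self, var_name):
        # PLC -> ROS direction: read a PLC variable for republishing.
        return self.var_to_topic[var_name], self.segment.read(var_name)


segment = SharedMemorySegment()
bridge = RosPlcBridge(segment, {"/conveyor/speed": "ConveyorSpeed"})
bridge.on_ros_message("/conveyor/speed", 0.25)
topic, value = bridge.poll_plc("ConveyorSpeed")
```

The mapping table is the structured part of the interchange: both sides agree on variable names up front, so neither needs to know the other's internal representation.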
2020
Authors
Santos, J; Oliveira, M; Arrais, R; Veiga, G;
Publication
SENSORS
Abstract
Carrying out the task of the exploration of a scene by an autonomous robot entails a set of complex skills, such as the ability to create and update a representation of the scene, the knowledge of the regions of the scene which are yet unexplored, the ability to estimate the most efficient point of view from the perspective of an explorer agent and, finally, the ability to physically move the system to the selected Next Best View (NBV). This paper proposes an autonomous exploration system that makes use of a dual OcTree representation to encode the regions in the scene which are occupied, free, and unknown. The NBV is estimated through a discrete approach that samples and evaluates a set of view hypotheses that are created by a conditioned random process which ensures that the views have some chance of adding novel information to the scene. The algorithm uses ray-casting defined according to the characteristics of the RGB-D sensor, and a mechanism that sorts the voxels to be tested in a way that considerably speeds up the assessment. The sampled view that is estimated to provide the largest amount of novel information is selected, and the system moves to that location, where a new exploration step begins. The exploration session is terminated when there are no more unknown regions in the scene or when those that exist cannot be observed by the system. The experimental setup consisted of a robotic manipulator with an RGB-D sensor assembled on its end-effector, all managed by a Robot Operating System (ROS) based architecture. The manipulator provides movement, while the sensor collects information about the scene. Experimental results span over three test scenarios designed to evaluate the performance of the proposed system. In particular, the exploration performance of the proposed system is compared against that of human subjects. 
Results show that the proposed approach is able to carry out the exploration of a scene, even when it starts from scratch, building up knowledge as the exploration progresses. Furthermore, in these experiments, the system was able to complete the exploration of the scene in less time when compared to human subjects.
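The sample-and-evaluate loop described in this abstract can be reduced to a toy sketch: score each candidate view by how many unknown voxels its rays would reveal, then move to the highest-scoring one. The 1-D "ray" model, cell states, and scoring below are deliberate simplifications of the paper's OcTree ray-casting, not its actual code.

```python
# Toy Next Best View selection on a 1-D voxel map. A view is a pair
# (position, direction); its gain is the number of unknown cells a
# straight ray would observe before being blocked or leaving the map.

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def score_view(grid, view, max_range=5):
    """Count unknown cells a ray cast from `view` would reveal."""
    x, direction = view
    gain = 0
    for step in range(1, max_range + 1):
        cell = x + step * direction
        if cell not in grid or grid[cell] == OCCUPIED:
            break                      # ray blocked or leaves the map
        if grid[cell] == UNKNOWN:
            gain += 1                  # novel information
    return gain

def next_best_view(grid, candidates):
    """Pick the sampled view hypothesis with the largest estimated gain."""
    return max(candidates, key=lambda v: score_view(grid, v))

# 1-D map: cells 0-4 already explored, 5-9 unknown, a wall at cell 7.
grid = {i: FREE for i in range(5)}
grid.update({i: UNKNOWN for i in range(5, 10)})
grid[7] = OCCUPIED

candidates = [(0, 1), (4, 1), (9, -1)]
best = next_best_view(grid, candidates)   # the view at the frontier wins
```

The same structure carries over to 3-D: the grid becomes the dual OcTree, the ray becomes a bundle of rays matching the RGB-D sensor's frustum, and the loop terminates when no candidate view scores above zero.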
2021
Authors
de Souza, JPC; Costa, CM; Rocha, LF; Arrais, R; Moreira, AP; Pires, EJS; Boaventura Cunha, J;
Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING
Abstract
Several approaches with interesting results have been proposed over the years for robot grasp planning. However, the industry suffers from the lack of an intuitive and reliable system able to automatically estimate grasp poses while also allowing the integration of grasp information from the accumulated knowledge of the end user. This paper proposes a non-object-agnostic grasping pipeline motivated by picking use cases from the aerospace industry. The planning system extends the functionality of the simulated annealing optimization algorithm to allow its application within an industrial use case. Therefore, this paper addresses the first step of the design of a reconfigurable and modular grasping pipeline. The key idea is the creation of an intuitive and functional grasping framework to be used by factory-floor operators according to the task demands. This software pipeline is capable of generating grasp solutions in an offline phase and, later on, in the robot operation phase, can choose the best grasp pose by taking into consideration a set of heuristics that try to achieve a successful grasp while also requiring the least effort from the robotic arm. The results are presented in a simulated and a real factory environment, relying on a mobile platform developed for intralogistic tasks. With this architecture, new state-of-the-art methodologies can be integrated in the future to grow the grasping pipeline and make it more robust and applicable to a wider range of use cases.
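The online phase described above can be sketched as a filter-then-rank step over the grasp poses generated offline: discard candidates that are not reachable, then pick the one that balances grasp quality against arm effort. The field names, weights, and scoring function here are illustrative assumptions, not the paper's actual heuristics.

```python
from dataclasses import dataclass

# Hypothetical grasp candidate, as it might come out of an offline
# simulated-annealing planning phase.
@dataclass
class GraspCandidate:
    name: str
    quality: float        # offline grasp-quality score in [0, 1]
    joint_travel: float   # radians the arm must move to reach the pose
    reachable: bool       # kinematic feasibility for the current base pose

def select_grasp(candidates, w_quality=1.0, w_effort=0.2):
    """Filter infeasible grasps, then rank by quality minus weighted effort."""
    feasible = [g for g in candidates if g.reachable]
    if not feasible:
        return None
    # Higher quality is better; more joint travel (arm effort) is worse.
    return max(feasible, key=lambda g: w_quality * g.quality - w_effort * g.joint_travel)

grasps = [
    GraspCandidate("top",  quality=0.90, joint_travel=2.5, reachable=True),
    GraspCandidate("side", quality=0.80, joint_travel=0.5, reachable=True),
    GraspCandidate("rear", quality=0.95, joint_travel=0.1, reachable=False),
]
best = select_grasp(grasps)   # "side": slightly lower quality, far less effort
```

Separating the expensive pose generation (offline) from this cheap ranking (online) is what lets the pipeline react to the scene at operation time without re-planning from scratch.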
2021
Authors
Arrais, R; Costa, CM; Ribeiro, P; Rocha, LF; Silva, M; Veiga, G;
Publication
INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY
Abstract
To remain competitive in current industrial manufacturing markets, coating companies need to implement flexible production systems to deal with mass customization and mass production workflows. The introduction of robotic manipulators capable of accurately mimicking the motions executed by highly skilled technicians is an important factor in enabling coating companies to cope with high customization. However, there are some limitations associated with the usage of a fully automated system for coating applications, especially when considering customized products of large dimensions and complex geometry. This paper addresses the development of a collaborative coating cell to increase the flexibility and efficiency of coating processes. The robot trajectory is taught with an intuitive programming-by-demonstration system, in which an icosahedron marker with multicoloured LEDs is attached to the coating tool for tracking its trajectories using a stereoscopic vision system. To avoid the construction of fixtures and allow the operator to freely place products within the coating work cell, a modular 3D perception system was developed, relying on principal component analysis for performing the initial point cloud alignment and on the iterative closest point algorithm for 6 DoF pose estimation. Furthermore, to enable safe and intuitive human-robot collaboration, a non-intrusive zone monitoring safety system was employed to track the position of the operator in the cell.
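The PCA-based initial alignment step mentioned above can be illustrated in 2-D: translate the scanned cloud so its centroid matches the model's, then rotate it so its principal axis matches. A real implementation works in 3-D on full point clouds and refines the result with ICP; this minimal sketch is an assumption-laden illustration, not the paper's code (note also that PCA alignment is ambiguous up to a 180-degree flip, which the sketch ignores).

```python
import math

def centroid(pts):
    """Mean of a list of 2-D points."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def principal_angle(pts):
    """Orientation of the dominant eigenvector of the 2x2 covariance matrix."""
    cx, cy = centroid(pts)
    sxx = sum((x - cx) ** 2 for x, y in pts)
    syy = sum((y - cy) ** 2 for x, y in pts)
    sxy = sum((x - cx) * (y - cy) for x, y in pts)
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

def pca_align(scan, model):
    """Rigid transform of `scan` so its centroid and principal axis match `model`."""
    sc, mc = centroid(scan), centroid(model)
    theta = principal_angle(model) - principal_angle(scan)
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for x, y in scan:
        dx, dy = x - sc[0], y - sc[1]          # express point in scan centroid frame
        out.append((mc[0] + c * dx - s * dy,   # rotate, then translate onto model
                    mc[1] + s * dx + c * dy))
    return out

# Model: points along the x-axis. Scan: the same shape rotated 90 degrees.
model = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
scan = [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)]
aligned = pca_align(scan, model)   # scan points land on the model points
```

In the coating cell this coarse alignment is only the starting guess; ICP then iteratively minimises point-to-point distances to recover the precise 6 DoF pose of the freely placed product.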