2018
Authors
Cardoso, Â; Sousa, A; Ferreira, H;
Publication
ICERI2018 Proceedings
2019
Authors
Tavares, P; Costa, CM; Rocha, L; Malaca, P; Costa, P; Moreira, AP; Sousa, A; Veiga, G;
Publication
AUTOMATION IN CONSTRUCTION
Abstract
The optimization of the information flow from the initial design through the several production stages plays a critical role in ensuring product quality while also reducing manufacturing costs. As such, in this article we present a cooperative welding cell for structural steel fabrication that is capable of leveraging the Building Information Modeling (BIM) standards to automatically orchestrate the tasks to be allocated to a human operator and a welding robot moving on a linear track. We propose a spatial augmented reality system that projects alignment information into the environment to help the operator tack weld the beam attachments that will later be seam welded by the industrial robot. This ensures maximum flexibility during the beam assembly stage while also improving overall productivity and product quality, since the operator no longer needs to rely on error-prone measurement procedures and receives tasks through an immersive interface, which relieves him from the burden of analyzing complex manufacturing design specifications. Moreover, no expert robotics knowledge is required to operate our welding cell, because all the necessary information is extracted from the Industry Foundation Classes (IFC), namely the CAD models and welding sections. This allows our 3D beam perception systems to correct placement errors or beam bending, which, coupled with our motion planning and welding pose optimization system, ensures that the robot performs its tasks without collisions and as efficiently as possible while maximizing welding quality.
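As a rough illustration of the BIM extraction step this abstract describes, the sketch below reads structural members from an IFC file using the open-source ifcopenshell library. The file name is invented and queue_weld_task is a hypothetical placeholder; the cell's actual orchestration software is not public.

import ifcopenshell

# Load a design exported in the Industry Foundation Classes format
# (the file name here is an assumption for illustration).
model = ifcopenshell.open("steel_structure.ifc")

# Collect the structural members the welding cell would have to process.
beams = model.by_type("IfcBeam")
plates = model.by_type("IfcPlate")

for beam in beams:
    # GlobalId and Name are standard attributes of rooted IFC entities.
    print(f"Found beam {beam.GlobalId} ({beam.Name})")
    # queue_weld_task(beam)  # hypothetical hand-off to the task orchestrator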
2019
Authors
Costa, CM; Veiga, G; Sousa, A; Rocha, L; Augusto Sousa, AA; Rodrigues, R; Thomas, U;
Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)
Abstract
Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task while also allowing the operator to navigate between the teaching steps and control the video playback through a bare-hands natural-interaction interface that is projected into the workspace. Moreover, to help the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware, along with the 6 DoF pose estimation of objects for achieving precise overlap between the 3D rendered content and the physical world. Equally important was the conceptualization of the information flow and how it can be conveyed on demand, to ensure a smooth and intuitive experience for the operator.
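The 6 DoF pose estimation mentioned above can be illustrated with the standard Perspective-n-Point formulation. The sketch below uses OpenCV's solvePnP with made-up 3D-2D correspondences and assumed camera intrinsics; it is a generic stand-in, not the paper's actual pipeline.

import numpy as np
import cv2

# Four known 3D points on the object, in its own frame (metres);
# values are illustrative assumptions.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]])

# Their detected 2D projections in the camera image (pixels).
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [402.0, 310.0],
                         [318.0, 312.0]])

# Intrinsics from a prior camera calibration (assumed values),
# with lens distortion neglected.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Recover the rotation (as a Rodrigues vector) and translation of the
# object relative to the camera, i.e. its 6 DoF pose.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print(ok, rvec.ravel(), tvec.ravel())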
2019
Authors
Aguiar, A; Sousa, A; dos Santos, FN; Oliveira, M;
Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)
Abstract
Developing ground robots for crop monitoring and harvesting in steep slope vineyards is a complex challenge due to two main reasons: the harsh conditions of the terrain and the unstable localization accuracy obtained with the Global Navigation Satellite System (GNSS). In this context, a reliable localization system requires accurate information that is redundant to GNSS and wheel-odometry-based systems. To pursue this goal, we benchmark three well-known Visual Odometry methods on two datasets. Two of them are feature-based Visual Odometry algorithms, Libviso2 and SVO 2.0, while the third, DSO, is an appearance-based Visual Odometry algorithm. In monocular Visual Odometry, two main problems appear: pure rotations and scale estimation. In this paper, we focus on the first issue. To do so, we propose a Kalman Filter that fuses a single gyroscope with the output pose of monocular Visual Odometry while continuously estimating the gyroscope's bias. In this approach, we propose a non-linear noise variation that ensures that the bias estimation is not affected by the rotations produced by Visual Odometry. We compare and discuss the three unchanged methods and the three methods with the proposed additional Kalman Filter. For the tests, two datasets are used: the public KITTI dataset and another built in-house. Results show that our additional Kalman Filter greatly improves Visual Odometry performance in rotation movements.
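A minimal sketch of the sensor-fusion idea in this abstract, assuming a planar (single-axis) case: a two-state Kalman Filter tracks heading and gyroscope bias, predicts with the gyro rate, and corrects with the Visual Odometry heading. The rotation-dependent measurement-noise inflation below stands in for the paper's non-linear noise variation and is an assumption, not the authors' exact formula.

import numpy as np

# State x = [heading, gyro_bias]; P is its covariance.
x = np.zeros(2)
P = np.eye(2)

H = np.array([[1.0, 0.0]])    # VO measures heading only
Q = np.diag([1e-5, 1e-7])     # assumed process noise
R_BASE = 1e-3                 # assumed baseline VO measurement noise

def predict(x, P, gyro_rate, dt):
    # heading += dt * (gyro_rate - bias); bias follows a random walk.
    F = np.array([[1.0, -dt],
                  [0.0, 1.0]])
    x = F @ x + np.array([dt * gyro_rate, 0.0])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, vo_heading, vo_rotation_rate):
    # Inflate the measurement noise when VO reports fast rotation so that
    # bias estimation is not corrupted by rotation-induced VO error
    # (a stand-in for the paper's non-linear noise variation).
    R = R_BASE * (1.0 + 10.0 * abs(vo_rotation_rate))
    y = vo_heading - (H @ x)[0]      # innovation (scalar)
    S = (H @ P @ H.T)[0, 0] + R      # innovation variance (scalar)
    K = (P @ H.T).ravel() / S        # Kalman gain, shape (2,)
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P
    return x, P

# One cycle: predict at the IMU rate, then correct when a VO pose arrives.
x, P = predict(x, P, gyro_rate=0.05, dt=0.01)
x, P = update(x, P, vo_heading=0.02, vo_rotation_rate=0.05)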
2019
Authors
Abreu, M; Lau, N; Sousa, A; Reis, LP;
Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)
Abstract
Reinforcement learning algorithms are now more appealing than ever. Recent approaches bring power and tuning simplicity to everyday workstations. The possibilities are endless, and the idea of automating learning without domain knowledge is quite tempting for many researchers. However, in competitive environments such as the RoboCup 3D Soccer Simulation League, there is still much to be done regarding human-like behaviors. Current teams use many mechanical movements to perform basic skills, such as running and dribbling the ball. This paper aims to use the PPO algorithm to optimize those skills, achieving natural gaits without sacrificing performance. We use SimSpark to simulate a NAO humanoid robot, using visual and body sensors to control its actuators. Based on our results, we propose an indirect control approach and detailed parameter setups to obtain natural running and dribbling behaviors. The obtained performance is in some cases comparable to, or better than, that of the top RoboCup teams. However, some skills are not yet ready to be applied in competitive environments due to instability. This work contributes towards the improvement of RoboCup and some related technical challenges.
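To make the training setup concrete, here is a minimal PPO loop in the style the abstract describes, sketched with the stable-baselines3 library. Since the SimSpark NAO simulation has no public Gym wrapper reproduced here, a stock continuous-control task stands in for it, and the hyperparameters are assumptions rather than the paper's detailed setups.

import gymnasium as gym
from stable_baselines3 import PPO

# Stand-in environment: the paper trains a NAO humanoid in SimSpark; a
# standard continuous-control task is used here only to show the PPO setup.
env = gym.make("Pendulum-v1")

# Assumed hyperparameters, not the paper's tuned values.
model = PPO("MlpPolicy", env,
            learning_rate=3e-4,
            n_steps=2048,
            batch_size=64,
            gamma=0.99,
            verbose=1)

model.learn(total_timesteps=100_000)
model.save("ppo_skill")  # hypothetical output name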