
Publications by CRIIS

2020

Welcome Message

Authors
Lau, N; Silva, MF; Reis, LP; Cascalho, J;

Publication
2020 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2020


2020

ROBIN: An open-source middleware for plug'n'produce of Cyber-Physical Systems

Authors
Arrais, R; Ribeiro, P; Domingos, H; Veiga, G;

Publication
INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS

Abstract
Motivated by the Fourth Industrial Revolution, there is an ever-increasing need to integrate Cyber-Physical Systems in industrial production environments. To address the demand for flexible robotics in contemporary industrial environments and the necessity to integrate robots and automation equipment in an efficient manner, an effective, bidirectional, reliable and structured data interchange mechanism is required. As an answer to these requirements, this article presents ROBIN, an open-source middleware for achieving interoperability between the Robot Operating System and CODESYS, a softPLC that can run on embedded devices and that supports a variety of fieldbuses and industrial network protocols. The proposed middleware was successfully applied and tested in various industrial applications, such as battery management systems, motion, robotic manipulator and safety hardware control, and horizontal integration between a mobile manipulator and a conveyor system.
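The kind of structured, bidirectional data interchange the abstract describes can be sketched as a mapping table between ROS-side message fields and PLC variables, serialised into a fixed byte layout that both sides agree on. This is only an illustrative sketch: the variable names, types and layout below are hypothetical, not ROBIN's actual API or memory map.

```python
import struct

# Hypothetical mapping between ROS-side fields and softPLC variables.
# ROBIN itself bridges ROS and a CODESYS shared-memory area; the names
# and layout here are illustrative only.
VAR_LAYOUT = [
    ("conveyor_speed", "f"),   # REAL on the PLC side
    ("gripper_closed", "?"),   # BOOL
    ("target_station", "h"),   # INT
]

FMT = "<" + "".join(fmt for _, fmt in VAR_LAYOUT)

def pack_to_plc(values: dict) -> bytes:
    """Serialise ROS-side values into the byte layout the softPLC reads."""
    return struct.pack(FMT, *(values[name] for name, _ in VAR_LAYOUT))

def unpack_from_plc(buf: bytes) -> dict:
    """Deserialise the PLC's shared buffer back into named values."""
    fields = struct.unpack(FMT, buf)
    return {name: val for (name, _), val in zip(VAR_LAYOUT, fields)}
```

Because both directions share the same layout table, the exchange stays structured and bidirectional: a round trip through `pack_to_plc` and `unpack_from_plc` returns the original values.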

2020

Autonomous Scene Exploration for Robotics: A Conditional Random View-Sampling and Evaluation Using a Voxel-Sorting Mechanism for Efficient Ray Casting

Authors
Santos, J; Oliveira, M; Arrais, R; Veiga, G;

Publication
SENSORS

Abstract
Carrying out the task of exploring a scene with an autonomous robot entails a set of complex skills, such as the ability to create and update a representation of the scene, knowledge of the regions of the scene which are yet unexplored, the ability to estimate the most efficient point of view from the perspective of an explorer agent and, finally, the ability to physically move the system to the selected Next Best View (NBV). This paper proposes an autonomous exploration system that makes use of a dual OcTree representation to encode the regions in the scene which are occupied, free, and unknown. The NBV is estimated through a discrete approach that samples and evaluates a set of view hypotheses created by a conditioned random process which ensures that the views have some chance of adding novel information to the scene. The algorithm uses ray casting defined according to the characteristics of the RGB-D sensor, and a mechanism that sorts the voxels to be tested in a way that considerably speeds up the assessment. The sampled view that is estimated to provide the largest amount of novel information is selected, and the system moves to that location, where a new exploration step begins. The exploration session is terminated when there are no more unknown regions in the scene or when those that exist cannot be observed by the system. The experimental setup consisted of a robotic manipulator with an RGB-D sensor assembled on its end-effector, all managed by a Robot Operating System (ROS) based architecture. The manipulator provides movement, while the sensor collects information about the scene. Experimental results span three test scenarios designed to evaluate the performance of the proposed system. In particular, the exploration performance of the proposed system is compared against that of human subjects. Results show that the proposed approach is able to carry out the exploration of a scene, even when starting from scratch, building up knowledge as the exploration progresses. Furthermore, in these experiments, the system was able to complete the exploration of the scene in less time than the human subjects.
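The core NBV loop the abstract describes — sample view hypotheses, cast rays, count the unknown cells each view would reveal, and pick the best — can be sketched in miniature. This is a toy 2-D stand-in, not the paper's 3-D OcTree implementation: the grid, candidate views and parameters are all made up for illustration.

```python
import math

# Toy 2-D stand-in for the paper's dual OcTree:
# 0 = free, 1 = occupied, -1 = unknown.
GRID = [
    [ 0,  0,  0,  0,  0],
    [ 0,  1, -1, -1,  0],
    [ 0, -1, -1, -1,  0],
    [ 0,  0,  0,  0,  0],
]

def cast_ray(grid, ox, oy, angle, max_range=6.0, step=0.2):
    """Count unknown cells a ray visits before being blocked or leaving the grid."""
    seen, novel = set(), 0
    d = step
    while d <= max_range:
        x = int(ox + d * math.cos(angle))
        y = int(oy + d * math.sin(angle))
        if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
            break                     # ray left the scene
        if grid[y][x] == 1:
            break                     # occupied cell blocks the ray
        if (x, y) not in seen:
            seen.add((x, y))
            if grid[y][x] == -1:      # unknown: this view would add information
                novel += 1
        d += step
    return novel

def information_gain(grid, ox, oy, heading, fov=math.pi / 2, n_rays=9):
    """Estimated novel cells for one view hypothesis (position + heading)."""
    angles = [heading - fov / 2 + i * fov / (n_rays - 1) for i in range(n_rays)]
    return sum(cast_ray(grid, ox, oy, a) for a in angles)

def next_best_view(grid, candidates):
    """Pick the sampled view estimated to reveal the most unknown cells."""
    return max(candidates, key=lambda v: information_gain(grid, *v))
```

A view facing the unknown block scores higher than one facing free space, so `next_best_view` selects it; the paper's voxel-sorting mechanism speeds up exactly this per-ray evaluation step, which dominates the cost.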

2020

Supporting the Design, Commissioning and Supervision of Smart Factory Components through their Digital Twin

Authors
Martins, A; Costelha, H; Neves, C;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)

Abstract
In a context of greater complexity of Smart Factories, the commissioning time for automated systems needs to be shortened. The use of virtual commissioning tools is a good contribution towards achieving this goal. Ideally, those tools should be part of a virtual engineering environment sharing the same virtual model, the digital twin, through the complete lifecycle of the automated system, namely the project, simulation, implementation, execution/monitoring/supervision and, eventually, decommissioning phases. Such a vision includes a digital twin with a broader use, one which is consistent with the real system and can be used after the early design and commissioning phases. Finding a complete set of tools able to comply with the above requirements can be extremely challenging. In this paper, we explore the use of the ABB RobotStudio software combined with the OPC UA standard with this vision in mind. Methodologies were defined to integrate both new-generation and legacy equipment, as well as robot controllers, together with guidelines for equipment development. A key result of this work is the development of a set of virtual engineering tools and methodologies based on OPC UA and implemented using RobotStudio in order to accomplish the complete lifecycle support of an automated system, from the project and simulation phases to the monitoring and supervision phases, suitable for integration in Industry 4.0 factories. Results are described for a test scenario with different devices.
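The supervision role of a digital twin that stays consistent with the real system can be sketched as a tag mirror: the twin keeps the value predicted by the virtual model and the value last reported by the device for each tag, and flags disagreements. The tag names and tolerance below are illustrative; a real implementation along the paper's lines would receive the reported values through OPC UA data-change subscriptions rather than explicit calls.

```python
class DigitalTwin:
    """Minimal sketch of a digital-twin tag mirror for supervision."""

    def __init__(self, tolerance=1e-3):
        self.simulated = {}   # values predicted by the virtual model
        self.reported = {}    # values last reported by the physical device
        self.tolerance = tolerance

    def simulate(self, tag, value):
        self.simulated[tag] = value

    def report(self, tag, value):
        self.reported[tag] = value

    def divergent_tags(self):
        """Tags where the virtual model and the real device disagree."""
        out = []
        for tag, sim in self.simulated.items():
            real = self.reported.get(tag)
            if real is None or abs(sim - real) > self.tolerance:
                out.append(tag)
        return out
```

Keeping one shared model on both sides of such a mirror is what lets the same twin serve commissioning first and monitoring/supervision later.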

2020

Monocular Camera Calibration for Autonomous Driving - a comparative study

Authors
Martins, PF; Costelha, H; Bento, LC; Neves, C;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)

Abstract
Autonomous driving is currently a widely researched topic worldwide. With a large research effort being undertaken by industrial research units in the automotive sector, it is no longer exclusive to academic research labs. Essential to this ongoing effort towards level-5 vehicle autonomy are the sensors used for tracking and detection, mainly lasers, radars and cameras. Most cameras in automotive application systems use wide-angle or fish-eye lenses, which present high distortion levels. Cameras need to be calibrated for correct perception, particularly for capturing geometry features or for distance-based calculations. This paper describes a case study concerning monocular camera calibration for the vision system of a small-scale autonomous driving vehicle. It describes the fundamentals of camera calibration and their implementation, with results given for different lenses and distortion models. The aim of the paper is not only to provide a detailed and comprehensive review of the application of these calibration methods, but also to serve as a reference document for other researchers and developers starting to use monocular vision in their robotic applications.
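The distortion that such calibration estimates is commonly modelled by polynomial radial terms applied to normalized image coordinates (the radial part of the standard Brown-Conrady model). A minimal sketch, with made-up coefficient values — calibration recovers the real ones from views of a known pattern:

```python
def distort(x, y, k1, k2):
    """Apply two-term radial distortion to normalized image coordinates.

    k1, k2 are radial distortion coefficients; a negative k1 gives the
    barrel distortion typical of the wide-angle and fish-eye lenses
    discussed in the paper.
    """
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

With barrel distortion (e.g. the illustrative values `k1 = -0.2, k2 = 0.0`), points are pulled towards the image centre, and the effect grows with distance from the centre — which is why wide-angle lenses need higher-order models and careful calibration before any distance-based calculation.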

2020

User-Experience with Haptic Feedback Technologies and Text Input in Interactive Multimedia Devices

Authors
Silva, B; Costelha, H; Bento, LC; Barata, M; Assuncao, P;

Publication
SENSORS

Abstract
Remote control devices are commonly used for interaction with multimedia equipment and applications (e.g., smart TVs, gaming, etc.). To improve on conventional keypad-based technologies, haptic feedback and user input capabilities are being developed to enhance the UX and provide advanced functionalities in remote control devices. Although the sensation provided by haptic feedback is similar to that of mechanical push buttons, the former offers much greater flexibility, due to the possibility of dynamically choosing different mechanical effects and associating different functions with each of them. However, selecting the best haptic feedback effects among the wide variety currently enabled by recent technologies remains a challenge for design engineers aiming to optimise the UX. Rich interaction further requires text input capability, which greatly influences the UX. This work is a contribution towards the UX evaluation of remote control devices with haptic feedback and text input. A user evaluation study of a wide variety of haptic feedback effects and text input methods is presented, considering different technologies and different numbers of actuators on a device. The user preferences, given by subjective evaluation scores, demonstrate that haptic feedback undoubtedly has a positive impact on the UX. Moreover, it is also shown that different levels of UX are obtained according to the technological characteristics of the haptic actuators and how many of them are used on the device.
