2021
Authors
Baltazar, AR; Petry, MR; Silva, MF; Moreira, AP;
Publication
SN APPLIED SCIENCES
Abstract
The transport of patients from the inpatient service to the operating room is a recurrent task in hospital routine. This task is repetitive, non-ergonomic, time-consuming, and requires the labor of patient transporters. This paper presents a system, named Connected Driverless Wheelchair, that can receive transportation requests directly from the hospital information management system, pick up patients at their beds, navigate autonomously through different floors, avoid obstacles, communicate with elevators, and drop patients off at the designated operating room. As a result, a prototype capable of transporting patients autonomously in hospital environments was obtained. Although it was impossible to test the final system at the hospital as planned, due to the COVID-19 pandemic, the extensive tests conducted at the robotics laboratory facilities, together with our previous experience in integrating mobile robots in hospitals, allow us to conclude that the system is fully prepared for this integration to be carried out. The achieved results are relevant since this system may be applied to support such tasks in the future, making the transport of patients more efficient (from both a cost and a time perspective), free of unpredictable delays and, in some cases, safer.
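As a rough, hypothetical sketch of the kind of transport order such a system would consume from the hospital information management system (field and stage names are ours, purely for illustration, not the paper's interface):

    from dataclasses import dataclass
    from enum import Enum, auto

    class Stage(Enum):
        RECEIVED = auto()
        PICKING_UP = auto()
        IN_TRANSIT = auto()
        DROPPED_OFF = auto()

    @dataclass
    class TransportRequest:
        """Hypothetical transport order issued by the hospital information system."""
        patient_id: str
        pickup_bed: str          # ward and bed identifier
        destination_room: str    # operating room identifier
        pickup_floor: int
        destination_floor: int
        stage: Stage = Stage.RECEIVED

    # The wheelchair would advance the stage as it picks up the patient,
    # rides the elevator between floors, and drops off at the operating room.
    request = TransportRequest("P-0421", "Ward3/Bed12", "OR-2",
                               pickup_floor=3, destination_floor=1)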
2021
Authors
Sousa, RB; Petry, MR; Moreira, AP;
Publication
Lecture Notes in Electrical Engineering
Abstract
Data acquisition is a critical task for the localisation and perception of mobile robots. It is necessary to compute the relative pose between onboard sensors to process their data in a common frame. Thus, extrinsic calibration computes the sensors' relative poses, improving data consistency between them. This paper presents a literature review of extrinsic sensor calibration methods, prioritising the most recent ones. The sensor types considered were laser scanners, cameras and IMUs. Methods were found for robot–laser, laser–laser, laser–camera, robot–camera, camera–camera, camera–IMU, IMU–IMU and laser–IMU calibration. The analysed methods allow the full calibration of a sensory system composed of lasers, cameras and IMUs.
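A minimal illustration of the core quantity these methods estimate (our notation, not the paper's), assuming 4x4 homogeneous transforms: if the poses of sensors A and B are known in a common robot frame, the extrinsic transform that maps points from B's frame into A's frame is the inverse of A's pose composed with B's pose.

    import numpy as np

    def relative_pose(T_robot_A: np.ndarray, T_robot_B: np.ndarray) -> np.ndarray:
        """Extrinsic of sensor B expressed in sensor A's frame (both 4x4 homogeneous)."""
        return np.linalg.inv(T_robot_A) @ T_robot_B

    # Example: sensor B mounted 0.5 m ahead of sensor A and rotated 90 deg about z.
    T_A = np.eye(4)
    T_B = np.eye(4)
    T_B[:3, :3] = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    T_B[:3, 3] = [0.5, 0.0, 0.0]
    T_AB = relative_pose(T_A, T_B)
    p_in_B = np.array([1.0, 0.0, 0.0, 1.0])   # a point observed by sensor B
    p_in_A = T_AB @ p_in_B                    # the same point in sensor A's frame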
2021
Authors
Soares, I; Sousa, RB; Petry, M; Moreira, AP;
Publication
MULTIMODAL TECHNOLOGIES AND INTERACTION
Abstract
Augmented and virtual reality have been experiencing rapid growth in recent years, but there is still no deep knowledge regarding their capabilities and the fields in which they could be explored. In that sense, this paper presents a study of the accuracy and repeatability of Microsoft's HoloLens 2 (an augmented reality device) and the HTC Vive (a virtual reality device), using an OptiTrack system as ground truth. For the HoloLens 2, the method used was hand tracking, whereas for the HTC Vive the tracked object was the system's hand controller. A series of tests in different scenarios and situations was performed to explore what could influence the measurements. The HTC Vive obtained results in the millimeter range, while the HoloLens 2 revealed considerably less accurate measurements (errors of around 2 cm). Although the difference may seem considerable, the fact that the HoloLens 2 was tracking the user's hand and not the system's controller had a major impact. These results are a significant step for the ongoing project of developing a human-robot interface for programming an industrial robot by demonstration using extended reality, which, based on our data, shows great potential to succeed.
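As a rough sketch of how accuracy and repeatability figures of this kind are commonly derived from motion-capture ground truth (our own formulation, not necessarily the paper's exact protocol): accuracy is taken as the mean distance to the OptiTrack positions, and repeatability as the spread of repeated measurements around their own mean.

    import numpy as np

    def accuracy_and_repeatability(measured: np.ndarray, ground_truth: np.ndarray):
        """measured, ground_truth: (N, 3) positions in metres for repeated trials of one pose."""
        errors = np.linalg.norm(measured - ground_truth, axis=1)
        accuracy = errors.mean()                    # mean distance to ground truth
        centroid = measured.mean(axis=0)
        repeatability = np.linalg.norm(measured - centroid, axis=1).std()  # spread about own mean
        return accuracy, repeatability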
2021
Authors
Soares, I; Petry, M; Moreira, AP;
Publication
SENSORS
Abstract
The world is living through the fourth industrial revolution, marked by the increasing intelligence and automation of manufacturing systems. Nevertheless, some types of tasks are too complex or too expensive to be fully automated; it would be more efficient if machines were able to work with the human, not only by sharing the same workspace but also as useful collaborators. A possible solution to that problem lies in human-robot interaction systems, together with an understanding of the applications where they are helpful to implement and the challenges they face. This work proposes the development of an industrial prototype of a human-machine interaction system based on Augmented Reality, whose objective is to enable an industrial operator without any programming experience to program a robot. The system itself is divided into two parts: the tracking system, which records the operator's hand movement, and the translator system, which writes the program to be sent to the robot that will execute the task. To demonstrate the concept, the user drew geometric figures, and the robot was able to replicate the operator's recorded path.
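A minimal sketch of the translator idea described above, under our own simplifying assumptions (the recorded hand path is downsampled into Cartesian waypoints and emitted as linear-motion commands; the command syntax is illustrative, not the actual robot language used in the paper):

    from typing import List, Tuple

    Point = Tuple[float, float, float]

    def hand_path_to_program(path: List[Point], step: int = 10) -> List[str]:
        """Turn a recorded hand trajectory into a list of linear move commands."""
        waypoints = path[::step] + [path[-1]]      # downsample, keep the final point
        return [f"MOVE_LINEAR x={x:.3f} y={y:.3f} z={z:.3f}" for x, y, z in waypoints]

    # Example: a straight 10 cm segment drawn by the operator.
    recorded = [(0.40 + 0.001 * i, 0.10, 0.30) for i in range(101)]
    for cmd in hand_path_to_program(recorded):
        print(cmd)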
2022
Authors
Masson, JEN; Petry, MR; Coutinho, DF; Honorio, LD;
Publication
IMAGE AND VISION COMPUTING
Abstract
Multi-View Stereo (MVS) is a key process in the photogrammetry workflow. It is responsible for taking the camera views and finding the maximum number of matches between the images, yielding a dense point cloud of the observed scene. Since this process is based on matching between images, it greatly depends on the ability to match features across different images. To improve the matching performance, several researchers have proposed the use of Convolutional Neural Networks (CNNs) to solve the MVS problem. Despite the progress made on the MVS problem with the use of CNNs, the Video RAM (VRAM) consumption of these approaches is usually far greater than that of classical methods, which rely more on RAM, a resource cheaper to expand than VRAM. This work follows the progress made in CasMVSNet in reducing GPU memory usage and further studies changes to the feature extraction process. The Average Group-wise Correlation is used in the cost volume generation to reduce the number of channels in the cost volume, yielding a reduction in GPU memory usage without noticeable penalties in the result. Deformable convolutions are applied in the feature extraction network to augment the spatial sampling locations with learned offsets, without additional supervision, further improving the network's ability to model transformations. The impact of these changes is measured with quantitative and qualitative tests on the DTU and the Tanks and Temples datasets. The modifications reduced the GPU memory usage by 32% and improved the completeness by 9%, with a penalty of 6.6% in accuracy on the DTU dataset.
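To illustrate why group-wise correlation shrinks the cost volume, here is a simplified NumPy sketch (our own shapes and names) for a single source view: the C feature channels are split into G groups and each group is collapsed to one correlation value, so each depth hypothesis stores G channels instead of C. The paper's Average Group-wise Correlation additionally averages this quantity over the warped source views.

    import numpy as np

    def groupwise_correlation(ref: np.ndarray, src: np.ndarray, groups: int) -> np.ndarray:
        """ref, src: (C, H, W) feature maps; returns (G, H, W) per-group correlations."""
        C, H, W = ref.shape
        ref_g = ref.reshape(groups, C // groups, H, W)
        src_g = src.reshape(groups, C // groups, H, W)
        return (ref_g * src_g).mean(axis=1)        # average over channels inside each group

    # Example: 32 feature channels reduced to 8 cost-volume channels per depth hypothesis.
    ref = np.random.rand(32, 4, 4).astype(np.float32)
    src = np.random.rand(32, 4, 4).astype(np.float32)
    cost_slice = groupwise_correlation(ref, src, groups=8)   # shape (8, 4, 4)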
2021
Authors
Pinto, R; Gonçalves, G; Aschenbrenner, D; Rusak, Z; Petry, M; Silva, M;
Publication
SSRN Electronic Journal
Abstract