2018
Authors
de Sousa, P; Esteves, T; Campos, D; Duarte, F; Santos, J; Leao, J; Xavier, J; de Matos, L; Camarneiro, M; Penas, M; Miranda, M; Silva, R; Neves, AJR; Teixeira, L;
Publication
VIPIMAGE 2017
Abstract
Gesture recognition is very important for Human-Robot Interfaces. In this paper, we present a novel depth-based method for gesture recognition to improve the interaction with a service robot, an autonomous shopping cart mostly used by people with reduced mobility. In the proposed solution, the identification of the user is already implemented by the software present on the robot, which extracts a bounding box focused on the user. Based on the analysis of the depth histogram, the distance from the user to the robot is calculated and the user is segmented from the background. Then, a region growing algorithm is applied to delete all other objects in the image. A threshold technique is applied again to the original image to obtain all the objects in front of the user. Intersecting the threshold-based segmentation result with the region growing result, we obtain candidate objects to be the arms of the user. After applying a labelling algorithm to obtain each object individually, a Principal Component Analysis is computed for each one to obtain its center and orientation. Using that information, we intersect the silhouette of the arm with a line; the upper point of the intersection indicates the hand position. A Kalman filter is then applied to track the hand and, based on state machines that describe the gestures (Start, Stop, Pause), we perform gesture recognition. We tested the proposed approach in a real-case scenario with different users and obtained an accuracy of around 89.7%.
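The PCA step described in the abstract (computing the center and orientation of each labelled candidate-arm blob) can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation; `blob_orientation` and the test mask are hypothetical.

```python
import numpy as np

def blob_orientation(mask):
    """Center and principal-axis orientation of a binary blob
    (e.g. a segmented arm region), via PCA on its pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)          # 2x2 covariance of (x, y)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    axis = eigvecs[:, np.argmax(eigvals)]   # direction of largest variance
    angle = np.arctan2(axis[1], axis[0])    # orientation in radians
    return center, angle

# A diagonal stripe of pixels: its principal axis is at 45 degrees.
mask = np.eye(20, dtype=bool)
center, angle = blob_orientation(mask)
print(np.degrees(angle) % 180)  # ≈ 45
```

With the orientation known, a line through the center along the principal axis can be intersected with the silhouette to locate the hand at the upper endpoint, as the abstract describes.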
2019
Authors
Silva, R; Leite, P; Campos, D; Pinto, AM;
Publication
2019 19TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2019)
Abstract
Maritime shipping needs to become even more efficient, profitable and secure, as more than 80% of the world's trade is carried by sea. Autonomous ships will make it possible to eliminate the likelihood of human error, reduce unnecessary crew costs and increase the efficiency of the cargo spaces. Although significant work is being done and new algorithms are arising, autonomous ships are still a distant prospect and still have some problems regarding safety, autonomy and reliability. This paper proposes an online obstacle avoidance algorithm for Autonomous Surface Vehicles (ASVs) that combines the reachability and protective zone concepts. This method estimates a collision-free velocity based on inner and outer constraints such as the current velocity, direction, maximum speed and turning radius of the vehicle, the position and dimensions of the surrounding obstacles, and a prediction of their movement in the near future. A non-restrictive estimate for the speed and direction of the ASV is calculated by mapping a conflict zone, determined by the course of the vehicle and the distance to obstacles, which is used to avoid imminently dangerous situations. A set of simulations demonstrates the ability of this method to safely circumvent obstacles in several scenarios with different weather conditions.
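The conflict-zone idea above can be illustrated with a closest-point-of-approach test: given the current velocities, does the ASV enter an obstacle's protective zone within the prediction horizon? This is a hedged sketch of the general concept, not the paper's algorithm; `in_conflict` and all parameter values are hypothetical.

```python
import numpy as np

def in_conflict(p_asv, v_asv, p_obs, v_obs, protect_radius, horizon):
    """True if, keeping the current velocities, the ASV's closest point of
    approach to the obstacle falls inside the protective zone within the
    prediction horizon (seconds)."""
    dp = np.asarray(p_obs, float) - np.asarray(p_asv, float)
    dv = np.asarray(v_asv, float) - np.asarray(v_obs, float)
    speed2 = dv @ dv
    # Time of closest approach, clamped to the prediction window.
    t_cpa = 0.0 if speed2 == 0 else np.clip((dp @ dv) / speed2, 0.0, horizon)
    dist_cpa = np.linalg.norm(dp - dv * t_cpa)
    return dist_cpa < protect_radius

# Head-on course towards a static obstacle: conflict.
print(in_conflict([0, 0], [5, 0], [100, 0], [0, 0], 20.0, 60.0))   # True
# Perpendicular course: the obstacle's zone is never entered.
print(in_conflict([0, 0], [0, 5], [100, 0], [0, 0], 20.0, 60.0))   # False
```

A planner in this spirit would reject candidate velocities flagged by such a test and pick a collision-free one that also respects the vehicle's speed and turning-radius limits.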
2020
Authors
Leite, PN; Silva, RJ; Campos, DF; Pinto, AM;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
A dense and accurate disparity map is relevant for a large number of applications, ranging from autonomous driving to robotic grasping. Recent developments in machine learning techniques enable us to bypass sensor limitations, such as low resolution, by using deep regression models to complete otherwise sparse representations of the 3D space. This article proposes two main approaches that use a single RGB image and sparse depth information gathered from a variety of sensors/techniques (stereo, LiDAR and Light Stripe Ranging (LSR)): a Convolutional Neural Network (CNN) and a cascade architecture that aims to improve the results of the first. Ablation studies were conducted to infer the impact of these depth cues on the performance of each model. The models trained with LiDAR sparse information are the most reliable, achieving an average Root Mean Squared Error (RMSE) of 11.8 cm on our own Inhouse dataset, while the LSR proved to be too sparse an input to compute accurate predictions on its own. © Springer Nature Switzerland AG 2020.
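The RMSE figure quoted above is a standard depth-completion metric, typically evaluated only where ground truth exists. A minimal sketch of such a metric (not the paper's evaluation code; the function name and sample arrays are hypothetical):

```python
import numpy as np

def depth_rmse_cm(pred_m, gt_m):
    """RMSE in centimetres between a predicted dense depth map and ground
    truth, evaluated only where ground truth is valid (non-zero)."""
    valid = gt_m > 0
    err = pred_m[valid] - gt_m[valid]
    return 100.0 * np.sqrt(np.mean(err ** 2))

gt = np.array([[1.0, 0.0], [2.0, 3.0]])    # 0 marks a missing ground-truth cell
pred = np.array([[1.1, 9.9], [2.0, 2.9]])  # the invalid cell is ignored
print(depth_rmse_cm(pred, gt))             # ≈ 8.16 cm
```

Masking invalid pixels matters because ground-truth depth (e.g. from LiDAR projection) is itself sparse, and unmasked zeros would dominate the error.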
2018
Authors
Santos, J; Campos, D; Duarte, F; Pereira, F; Domingues, I; Santos, J; Leão, J; Xavier, J; Matos, Ld; Camarneiro, M; Penas, M; Miranda, M; Morais, R; Silva, R; Esteves, T;
Publication
Service Robots
Abstract
2017
Authors
Neves, A; Campos, D; Duarte, F; Domingues, I; Santos, J; Leao, J; Xavier, J; de Matos, L; Camarneiro, M; Penas, M; Miranda, M; Silva, R; Esteves, T;
Publication
VEHITS: PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON VEHICLE TECHNOLOGY AND INTELLIGENT TRANSPORT SYSTEMS
Abstract
This paper concerns a robot to assist people in retail shopping scenarios, called the wGO. The robot's behaviour is based on a vision-guided user-following approach. The wGO brings numerous advantages and a higher level of comfort, since the user does not need to worry about controlling the shopping cart. In addition, this paper introduces the wGO's functionalities and the requirements that enable the robot to successfully provide personal assistance, in a safe way, while the user is shopping. A user satisfaction survey is also presented. Based on the highly encouraging results, some conclusions and guidelines towards the future full deployment of the wGO in commercial environments are drawn.
2020
Authors
Campos, DF; Matos, A; Pinto, AM;
Publication
2020 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2020)
Abstract
The offshore wind power industry is an emerging and exponentially growing sector, which calls for cyclical monitoring and inspection to ensure the safety and efficiency of the wind farm facilities. Thus, the multiple domains of the environment must be reconstructed, namely the emersed (aerial) and immersed (underwater) domains, to depict the offshore structures, from the wind turbines to the cable arrays, as completely as possible. This work proposes the use of an Autonomous Surface Vehicle (ASV) to map both environments simultaneously, producing a multi-domain map through the fusion of navigational sensors (GPS and IMU) that localize the vehicle and aid the registration process for the perception sensors (3D Lidar and multibeam echosounder sonar). The performed experiments demonstrate the ability of the multi-domain mapping architecture to provide an accurate reconstruction of both scenarios in a single representation, using the odometry system as the initial seed and further improving the map with data filtering and registration processes.
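The core registration step described above (placing each Lidar or sonar return into a common world frame using the vehicle pose from GPS/IMU) can be sketched as a rigid transform. This is a simplified planar-yaw illustration under assumed frame conventions, not the paper's pipeline; `register_scan` and the sample pose are hypothetical.

```python
import numpy as np

def yaw_rotation(yaw):
    """Rotation about the vertical axis (heading from the IMU)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def register_scan(points_sensor, vehicle_pos, vehicle_yaw):
    """Transform sensor-frame points (N x 3) into the world frame using the
    vehicle pose (position from GPS, heading from IMU); a planar-yaw
    simplification of full 6-DoF registration."""
    R = yaw_rotation(vehicle_yaw)
    return points_sensor @ R.T + np.asarray(vehicle_pos, float)

pts = np.array([[1.0, 0.0, -2.0]])  # e.g. a sonar return below the waterline
world = register_scan(pts, [10.0, 5.0, 0.0], np.pi / 2)
print(world)  # a point ahead of the vehicle maps to +y in the world frame
```

In a full system, these pose-seeded scans would then be refined by the filtering and registration processes mentioned in the abstract before being merged into the single multi-domain map.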