2016
Authors
Arrais, R; Oliveira, M; Toscano, C; Veiga, G;
Publication
IMAGE ANALYSIS AND RECOGNITION (ICIAR 2016)
Abstract
While bottom-up approaches to object recognition are simple to design and implement, they do not yield the same performance as top-down approaches. On the other hand, it is not trivial to obtain a moderate number of plausible hypotheses that can be efficiently verified by top-down approaches. To address these shortcomings, we propose a hybrid top-down/bottom-up approach to object recognition, where a bottom-up procedure that generates a set of hypotheses from the data is combined with a top-down process that evaluates those hypotheses. We use the recognition of rectangular cuboid-shaped objects from 3D point cloud data as a benchmark problem for our research. Results obtained using this approach demonstrate promising recognition performance.
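A minimal Python sketch of the hybrid idea described in the abstract, under the assumption that hypotheses are oriented bounding boxes fitted to point clusters: a bottom-up step proposes boxes, and a top-down step scores each one by how well nearby points support its faces. All function names, the clustering shortcut, and the thresholds are illustrative, not the paper's implementation.

```python
# Illustrative sketch only; names and thresholds are assumptions,
# not the method published in the paper.
import numpy as np

def generate_hypotheses(points, n_clusters=5, seed=0):
    """Bottom-up step: crudely cluster the cloud around random seed points
    and fit an oriented bounding box (via PCA) to each cluster."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_clusters, replace=False)]
    labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
    hypotheses = []
    for k in range(n_clusters):
        cluster = points[labels == k]
        if len(cluster) < 10:
            continue
        mean = cluster.mean(axis=0)
        _, _, axes = np.linalg.svd(cluster - mean, full_matrices=False)
        local = (cluster - mean) @ axes.T          # points in box coordinates
        hypotheses.append((mean, axes, local.min(0), local.max(0)))
    return hypotheses

def verify(points, hypothesis, tol=0.02):
    """Top-down step: score a cuboid hypothesis by the fraction of points
    inside its (slightly inflated) volume that lie close to one of its faces."""
    mean, axes, lo, hi = hypothesis
    local = (points - mean) @ axes.T
    inside = np.all((local > lo - tol) & (local < hi + tol), axis=1)
    near_face = np.any((np.abs(local - lo) < tol) | (np.abs(local - hi) < tol), axis=1)
    return (inside & near_face).sum() / max(inside.sum(), 1)
```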
2016
Authors
Oliveira, M; Santos, V; Sappa, AD; Dias, P; Moreira, AP;
Publication
ROBOTICS AND AUTONOMOUS SYSTEMS
Abstract
Autonomous vehicles carry a large number of on-board sensors, not only to provide coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unified representation that fuses the data provided by all these sensors. We propose an algorithm capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh, which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids poor-quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing high-quality textures.
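The texture-assignment step lends itself to a short sketch: mesh vertices are projected through a calibrated pinhole camera to obtain per-vertex texture coordinates, keeping only vertices that fall inside the image. SciPy's unconstrained Delaunay stands in for the paper's constrained triangulation and its update operations; all names and the calibration inputs are assumptions.

```python
# Sketch under stated assumptions; scipy's plain Delaunay replaces the
# constrained triangulation and update rules described in the paper.
import numpy as np
from scipy.spatial import Delaunay

def build_mesh(support_points_xy):
    """Triangulate 2D support points (e.g., range measurements projected
    onto the ground plane) into a mesh."""
    return Delaunay(support_points_xy)

def texture_coordinates(vertices_3d, K, R, t, image_size):
    """Project 3D mesh vertices into a camera (intrinsics K, extrinsics R, t,
    assumed known from calibration) to obtain per-vertex texture coordinates."""
    cam = R @ vertices_3d.T + t[:, None]           # world -> camera frame
    uv = (K @ cam).T
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective divide
    w, h = image_size
    visible = (cam[2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
              & (uv[:, 1] >= 0) & (uv[:, 1] < h)   # in front of camera, in frame
    return uv, visible
```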
2016
Authors
Oliveira, M; Santos, V; Sappa, AD; Dias, P; Moreira, AP;
Publication
ROBOTICS AND AUTONOMOUS SYSTEMS
Abstract
As an autonomous vehicle travels through a scenario, it receives a continuous stream of sensor data. This data arrives asynchronously and often contains overlapping or redundant information, so it is not trivial to create, and update over time, a representation of the environment observed by the vehicle. This paper presents a novel methodology for computing an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro-scale polygonal primitives to model the scenario, meaning that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time, whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient compared to other reconstruction techniques.
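As a rough illustration of the primitive-update idea, the Python sketch below extracts a dominant plane from a fresh scan with RANSAC and either merges it into a matching existing primitive or registers it as a new one. RANSAC and the simple merge rule are stand-ins chosen for the sketch; the paper's actual polygonal update mechanisms are not reproduced, and all thresholds are illustrative.

```python
# Illustrative stand-in for plane-primitive extraction and updating;
# thresholds and the merge rule are assumptions, not the paper's method.
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Fit a dominant plane (unit normal n, offset d, with n.x + d = 0)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:               # degenerate sample
            continue
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

def update_primitives(primitives, new_plane, angle_tol=0.1, dist_tol=0.1):
    """Merge the new plane into an existing primitive when normals and
    offsets agree (orientations assumed consistent); otherwise add it."""
    n_new, d_new = new_plane
    for i, (n, d) in enumerate(primitives):
        if n @ n_new > np.cos(angle_tol) and abs(d - d_new) < dist_tol:
            primitives[i] = (n, (d + d_new) / 2)   # naive offset averaging
            return primitives
    primitives.append(new_plane)
    return primitives
```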
2016
Authors
Costa, V; Cunha, T; Oliveira, M; Sobreira, H; Sousa, A;
Publication
ROBOT 2015: SECOND IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1
Abstract
In this article, we present a course that explores the potential of learning ROS through a collaborative game world. The competitive mindset and its origins are explored, and an analysis of a collaborative game is presented in detail, showing how some key design features lead participants to overcome the proposed challenges through cooperation and collaboration. The data analysis is supported by observations of two different game simulations: the first, where all competitors played solo, and the second, where the players were divided into groups of three. Lastly, the authors reflect on the potential of this course as a tool for learning ROS.
2015
Authors
Oliveira, M; Santos, V; Sappa, AD;
Publication
INFORMATION FUSION
Abstract
Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system in which perspective effects are removed. The removal of perspective-associated effects facilitates road and obstacle detection and also assists in free-space estimation. There is, however, a significant limitation to inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. This paper proposes a robust solution based on multimodal sensor fusion. Data from a laser range finder are fused with images from the cameras, so that the mapping is not computed in regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time compared with classical inverse perspective mapping. Furthermore, the proposed approach is able to cope with several cameras with different lenses or image resolutions, as well as with dynamic viewpoints.
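The fusion step can be illustrated with a short sketch: pixels flagged as obstacles (here, by a mask assumed to come from the laser-camera fusion) are excluded before the image is remapped to a bird's-eye view with a ground-plane homography. The homography and the mask construction are assumed given by calibration and the fusion step, respectively; this is not the authors' implementation.

```python
# Sketch under stated assumptions: H comes from calibration and
# obstacle_mask from a laser-camera fusion step not shown here.
import cv2
import numpy as np

def inverse_perspective_map(image, H, obstacle_mask, out_size):
    """image: camera frame; H: 3x3 image-to-ground homography;
    obstacle_mask: boolean mask the size of the image, True where the
    laser reports an obstacle; out_size: (width, height) of the output."""
    masked = image.copy()
    masked[obstacle_mask] = 0                      # drop obstacle pixels
    return cv2.warpPerspective(masked, H, out_size)
```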