
Publications by CRAS

2013

Real-Time Visual Ground-Truth System for Indoor Robotic Applications

Authors
Dias, A; Almeida, J; Martins, A; Silva, E;

Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS, IBPRIA 2013

Abstract
The robotics community is concerned with the ability to evaluate and compare results from researchers in areas such as visual perception and multi-robot cooperative behavior. To accomplish that task, this paper proposes a real-time indoor visual ground-truth system capable of providing an accuracy at least one order of magnitude better than the precision of the algorithm to be evaluated. A multi-camera architecture built on the ROS (Robot Operating System) framework is proposed to estimate the 3D position of objects, and the implementation and results are contextualized in the RoboCup Middle Size League scenario.
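
The abstract describes estimating the 3D position of objects from multiple calibrated cameras. As a rough illustration of the underlying two-view geometry (not the paper's ROS multi-camera implementation), here is a linear triangulation sketch with OpenCV, in which the intrinsics, baseline and pixel detections are all invented placeholders:

```python
import numpy as np
import cv2

# Hypothetical intrinsics shared by two cameras of the ground-truth rig;
# real values would come from a standard chessboard calibration.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

# Camera 1 at the world origin; camera 2 offset by a 0.5 m baseline.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Assumed pixel detections of the tracked object in each view.
pt1 = np.array([[360.0], [240.0]])
pt2 = np.array([[320.0], [240.0]])

# Linear triangulation of the object's 3D position from the two views.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4-vector
X = (X_h[:3] / X_h[3]).ravel()
print("estimated 3D position (m):", X)         # approx. [0.5, 0.0, 10.0]
```

A real multi-camera rig would fuse more than two views, but once every camera is calibrated the per-object computation reduces to this kind of triangulation.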

2013

Thermographic and Visible Spectrum Camera Calibration for Marine Robotic Target Detection

Authors
Dias, A; Bras, C; Martins, A; Almeida, J; Silva, E;

Publication
2013 OCEANS - SAN DIEGO

Abstract
In the context of the detection, location and tracking of human targets with a combination of thermographic and visible cameras, this paper addresses the problem of geometric calibration of thermographic and visible-spectrum cameras, which is necessary for the stereo perception of targets in the robot frame. A method for the precise geometric calibration of the thermographic and visible cameras on the autonomous surface vehicle (ASV) ROAZ II is presented. The method combines the use of special patterns for the intrinsic calibration of the thermographic cameras with a high-resolution 3D laser scanner for the extrinsic calibration, relating the camera frames to the robot frame. Results of the calibration process are presented and analyzed.
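
As a rough illustration of the extrinsic step described above, here is a sketch of how 3D target points measured in the robot frame (e.g. by the laser scanner) and their projections in the thermographic image could be used to recover the camera-to-robot pose with OpenCV's solvePnP; the correspondences and intrinsics below are invented placeholders, not the paper's method or data:

```python
import numpy as np
import cv2

# Assumed correspondences: target positions measured by the laser scanner
# in the robot frame, and their pixel locations in the (already
# intrinsically calibrated) thermographic image.
object_pts = np.array([[2.0,  0.4, 0.0],
                       [2.0, -0.4, 0.0],
                       [2.0,  0.4, 0.6],
                       [2.0, -0.4, 0.6],
                       [2.5,  0.0, 0.3],
                       [1.5,  0.0, 0.3]], dtype=np.float64)
image_pts = np.array([[220., 330.], [420., 330.],
                      [225., 160.], [415., 160.],
                      [320., 250.], [318., 238.]], dtype=np.float64)

K = np.array([[500., 0., 320.],   # hypothetical thermal-camera intrinsics
              [0., 500., 240.],
              [0., 0., 1.]])
dist = np.zeros(5)                # assume distortion already corrected

# Extrinsic calibration: pose of the robot-frame points in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)        # rotation taking robot-frame points into the camera frame
print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```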

2013

Object recognition using laser range finder and machine learning techniques

Authors
Pinto, AM; Rocha, LF; Paulo Moreira, AP;

Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING

Abstract
In recent years, computer vision has been widely used in industrial environments, allowing robots to perform important tasks such as quality control, inspection and recognition. Vision systems are typically used to determine the position and orientation of objects in the workstation, enabling them to be transported and assembled by a robotic cell (e.g. an industrial manipulator). These systems commonly resort to CCD (Charge-Coupled Device) cameras either fixed in a particular work area or attached directly to the robotic arm (eye-in-hand vision systems). Although this is a valid approach, the performance of these vision systems is directly influenced by the lighting of the industrial environment. Taking all this into consideration, a new approach is proposed for eye-in-hand systems in which the camera is replaced by a 2D Laser Range Finder (LRF). The LRF is attached to a robotic manipulator, which executes a pre-defined path to produce grayscale images of the workstation. With this technique, interference from the environment lighting is minimized, resulting in a more reliable and robust computer vision system. Once the grayscale image is created, this work focuses on the recognition and classification of different objects using inherent features (based on the invariant moments of Hu) with well-known machine learning models: k-Nearest Neighbors (kNN), Neural Networks (NNs) and Support Vector Machines (SVMs). In order to achieve good performance for each classification model, a wrapper method is used to select a good subset of features, and a model assessment technique, k-fold cross-validation, is used to adjust the parameters of the classifiers. The performance of the models is also compared, achieving generalized accuracies of 83.5% for the kNN, 95.5% for the NN and 98.9% for the SVM. These high performances are related to the feature selection algorithm, based on the simulated annealing heuristic, and to the model assessment by k-fold cross-validation, which together make it possible to identify the most important features in the recognition process and to adjust the best parameters for the machine learning models, increasing the classification rate for the work objects present in the robot's environment.
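
As a minimal sketch of the classification stage described above, here are Hu invariant moments computed with OpenCV used as features for an SVM tuned by k-fold cross-validation in scikit-learn; the images and labels are synthetic placeholders, and the wrapper feature selection driven by simulated annealing is omitted:

```python
import numpy as np
import cv2
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def hu_features(gray_img):
    """Seven Hu invariant moments of a grayscale LRF image, log-scaled
    so the widely ranged invariants stay numerically comparable."""
    hu = cv2.HuMoments(cv2.moments(gray_img)).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Placeholder dataset: grayscale images rendered from LRF scans, one per
# object sample, with four object classes (10 samples each).
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
labels = np.repeat(np.arange(4), 10)

X = np.array([hu_features(img) for img in images])

# K-fold cross-validation to adjust the SVM hyper-parameters, as the
# abstract describes for the model assessment step.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": ["scale", 0.1, 0.01]},
                    cv=StratifiedKFold(n_splits=5))
grid.fit(X, labels)
print("best parameters:", grid.best_params_)
```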

2013

Revisiting Lucas-Kanade and Horn-Schunck

Authors
Pinto, AMG; Moreira, AP; Costa, PG; Correia, MV;

Publication
JCEI - Journal of Computer Engineering and Informatics

Abstract
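No abstract is available for this entry. For background only, the title refers to the two classic differential optical-flow formulations; here is a minimal sparse Lucas-Kanade example using OpenCV's pyramidal implementation on a synthetic frame pair, unrelated to the paper's specific contribution:

```python
import numpy as np
import cv2

# Synthetic frame pair: a bright square shifted by (3, 2) pixels,
# standing in for two consecutive camera images.
prev = np.zeros((120, 160), np.uint8)
curr = np.zeros((120, 160), np.uint8)
prev[40:70, 50:80] = 255
curr[42:72, 53:83] = 255

# Corners worth tracking in the first frame.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01,
                              minDistance=5)

# Pyramidal Lucas-Kanade: a local least-squares flow estimate per feature
# window, refined coarse to fine.
nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                            winSize=(21, 21), maxLevel=2)
flow = (nxt - pts).reshape(-1, 2)[status.ravel() == 1]
print("mean flow (dx, dy):", flow.mean(axis=0))  # expected near (3, 2)
```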

2013

Robot@Factory: Localization Method Based on Map-Matching and Particle Swarm Optimization

Authors
Pinto, AMG; Paulo Moreira, AP; Costa, PG;

Publication
PROCEEDINGS OF THE 2013 13TH INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS (ROBOTICA)

Abstract
This paper presents a novel localization method for small mobile robots. The proposed technique is designed especially for Robot@Factory, a new robotic competition first presented in Lisbon in 2011. The real-time localization technique resorts to low-cost infrared sensors, a map-matching method and an Extended Kalman Filter (EKF) to create a well-behaved pose-tracking system. The sensor information is continuously updated in time and space through the expected motion of the robot, and is then incorporated into the map-matching optimization in order to increase the amount of sensor information available at each moment. In addition, a particle filter based on Particle Swarm Optimization (PSO) relocates the robot when the map-matching error is high, meaning that the map-matching is unreliable and the robot is lost. The experiments conducted in this paper demonstrate the ability and accuracy of the presented technique to localize small mobile robots in this competition, and extensive results show that the proposed method has an interesting localization capability for robots equipped with a limited number of sensors.
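
As a rough sketch of how the PSO-based relocation step could look, here is a generic global-best PSO minimizing a map-matching cost over the pose (x, y, theta); the cost function, occupancy-grid layout and hyper-parameters are illustrative assumptions, not the paper's tuned formulation:

```python
import numpy as np

def matching_error(pose, scan, grid_map):
    """Hypothetical map-matching cost: fraction of range endpoints that
    miss occupied map cells for a candidate pose (x, y, theta)."""
    x, y, th = pose
    ex = x + scan["ranges"] * np.cos(scan["angles"] + th)
    ey = y + scan["ranges"] * np.sin(scan["angles"] + th)
    occ = grid_map["occ"]
    ix = np.clip((ex / grid_map["res"]).astype(int), 0, occ.shape[1] - 1)
    iy = np.clip((ey / grid_map["res"]).astype(int), 0, occ.shape[0] - 1)
    return 1.0 - occ[iy, ix].mean()

def pso_relocate(scan, grid_map, bounds, n=60, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over the pose space."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([matching_error(p, scan, grid_map) for p in pos])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n, 3), np.random.rand(n, 3)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([matching_error(p, scan, grid_map) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g

# Toy run: a square room with occupied border walls and 1 m range beams.
occ = np.zeros((100, 100)); occ[0, :] = occ[-1, :] = occ[:, 0] = occ[:, -1] = 1.0
grid_map = {"res": 0.05, "occ": occ}
scan = {"angles": np.linspace(-np.pi, np.pi, 16, endpoint=False),
        "ranges": np.full(16, 1.0)}
bounds = (np.array([0.0, 0.0, -np.pi]), np.array([5.0, 5.0, np.pi]))
print("relocated pose:", pso_relocate(scan, grid_map, bounds))
```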

2013

EKF-based visual self-calibration tool for robots with rotating directional cameras

Authors
Ribeiro, J; Serra, R; Nunes, N; Silva, H; Almeida, J;

Publication
PROCEEDINGS OF THE 2013 13TH INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS (ROBOTICA)

Abstract
The perception systems of autonomous mobile robots are complex multi-sensor systems. Information from different sensors, placed in different parts of the platform, needs to be related and fused into some representation of the world or of the robot state. For that, knowledge of the relative pose (position and rotation) between the sensor frames and the platform frame plays a critical role; the process of determining these is called extrinsic calibration. This paper addresses the development of an automatic robot calibration tool for Middle Size League robots with rotating directional cameras, such as the ISePorto team robots. The proposed solution consists of a robot navigating along a path while acquiring visual information from a known target positioned in a global reference frame. This information is then combined with wheel odometry, the robot's rotation-axis encoders and gyro information within an Extended Kalman Filter framework that estimates all the parameters required to determine the sensors' angles and positions relative to the robot body frame. We evaluated our solution by performing several trials, obtaining results similar to those of the previously used manual calibration procedure, but in far less time and without being susceptible to human error.
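
As a rough sketch of the filtering machinery involved, here is a minimal EKF whose toy state is a single constant camera-mounting angle observed through noisy measurements; the paper's filter jointly estimates a full set of sensor angles and positions from odometry, encoder and gyro data, which is not reproduced here:

```python
import numpy as np

class EKF:
    """Minimal Extended Kalman Filter skeleton (f/h and their Jacobians
    are supplied per problem)."""
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        self.x = f(self.x)                    # propagate state
        self.P = F @ self.P @ F.T + self.Q    # propagate covariance

    def update(self, z, h, H):
        y = z - h(self.x)                     # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Toy run: a calibration parameter is constant, so f is the identity and Q
# is zero; each sighting of the known target observes it directly (H = I).
true_offset = 0.12                            # rad, hypothetical ground truth
ekf = EKF(x0=np.zeros(1), P0=np.eye(1), Q=np.zeros((1, 1)), R=np.eye(1) * 0.01)
rng = np.random.default_rng(0)
for _ in range(100):
    ekf.predict(f=lambda x: x, F=np.eye(1))
    z = np.array([true_offset + rng.normal(0.0, 0.1)])
    ekf.update(z, h=lambda x: x, H=np.eye(1))
print("estimated mounting angle:", float(ekf.x[0]))
```

Because the parameter enters the state with zero process noise, repeated target observations along the path progressively tighten its covariance, which is what lets an automatic tool replace the manual procedure.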
