
Publications by Paulo José Costa

2014

New Marker for Real-Time Industrial Robot Programming by Motion Imitation

Authors
Ferreira, M; Costa, P; Rocha, L; Paulo Moreira, AP; Pires, N;

Publication
2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)

Abstract
This paper presents a new marker for robot programming by demonstration through motion imitation. The device is based on high-intensity LEDs (light-emitting diodes) which are captured by a pair of industrial cameras. Using stereoscopy, the marker provides 6-DoF (degrees-of-freedom) human wrist tracking with both position and orientation data. We propose a robust technique for camera and stereo calibration which maps camera coordinates directly into the desired robot frame using a single LED. The calibration and tracking procedures are thoroughly described. The tests show that the marker provides a robust, accurate and intuitive method for industrial robot programming. The system performs in real time and requires only a single pair of industrial cameras, though more can be used for improved effectiveness and accuracy.
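The stereoscopic tracking the abstract describes can be illustrated with the textbook triangulation step for a single LED seen by a rectified camera pair. This is only a minimal sketch under simplifying assumptions (parallel optical axes, known focal length and baseline); the focal length, baseline and pixel coordinates below are invented, not the paper's calibration.

```python
# Minimal sketch of triangulating one LED from a rectified stereo pair.
# All numeric parameters are illustrative assumptions.

def triangulate(xl, xr, y, f=800.0, baseline=0.12):
    """Return the 3D point (X, Y, Z) in the left-camera frame.

    xl, xr   -- horizontal pixel coordinates of the LED in the left and
                right images, relative to the principal point
    y        -- vertical pixel coordinate (equal in both rectified views)
    f        -- focal length in pixels (assumed value)
    baseline -- distance between camera centres in metres (assumed value)
    """
    disparity = xl - xr            # shift of the LED between the two views
    Z = f * baseline / disparity   # depth grows as disparity shrinks
    X = xl * Z / f                 # back-project the pixel at that depth
    Y = y * Z / f
    return X, Y, Z

# An LED at x=40 px (left) and x=-40 px (right) gives disparity 80,
# hence depth 800 * 0.12 / 80 = 1.2 m.
print(triangulate(40.0, -40.0, 10.0))  # (0.06, 0.015, 1.2)
```

Tracking a rigid arrangement of several such LEDs is what upgrades the position estimate to the full 6-DoF pose the marker provides.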

2013

Part Alignment Identification and Adaptive Pick-and-Place Operation for Flat Surfaces

Authors
da Costa, PM; Costa, P; Costa, P; Lima, J; Veiga, G;

Publication
ROBOTICS IN SMART MANUFACTURING

Abstract
Industrial laser cutting machines use a type of support base that sometimes causes the cut metal parts to tilt or fall, which hinders the robot from picking the parts after cutting. The objective of this work is to calculate the 3D orientation of these metal parts in relation to the main metal sheet so that the subsequent robotic pick-and-place operation can be performed successfully. For perception, the system relies on the low-cost Microsoft Kinect 3D sensor, which is responsible for mapping the environment. The previously known part positions are mapped into the new environment, and a plane-fitting algorithm is then applied to obtain each part's 3D orientation. The implemented algorithm detects whether a piece has fallen; if it has not, the algorithm calculates the orientation of each piece separately. This information is later used by the robot manipulator to perform the pick-and-place operation with the correct tool orientation, making it possible to automate a manufacturing process that is still entirely human-dependent today.
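The plane-fitting step can be sketched with a least-squares fit of z = a*x + b*y + c to the part's 3D points, from which the tilt relative to the sheet follows from the fitted slopes. This is an illustrative reconstruction, not the paper's implementation; the sampled points and the 10-degree tilt below are made up for the example.

```python
# Sketch of fitting a plane z = a*x + b*y + c to sampled 3D points and
# reporting the tilt angle of its normal. Data below is synthetic.
import math

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c via the normal equations."""
    sxx = sxy = syy = sx = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]

    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det(m)
    sol = []
    for i in range(3):  # Cramer's rule on the 3x3 normal equations
        mi = [row[:] for row in m]
        for j in range(3):
            mi[j][i] = rhs[j]
        sol.append(det(mi) / d)
    return sol  # a, b, c

def tilt_degrees(points):
    """Angle between the fitted plane's normal and the vertical."""
    a, b, _ = fit_plane(points)
    return math.degrees(math.atan(math.hypot(a, b)))

# A part tilted 10 degrees about the y-axis: z = tan(10 deg) * x
slope = math.tan(math.radians(10.0))
pts = [(x * 0.01, y * 0.01, slope * x * 0.01)
       for x in range(10) for y in range(10)]
print(round(tilt_degrees(pts), 1))  # 10.0
```

In practice the fit would run on the Kinect point cloud cropped to each known part position, and a tilt beyond the gripper's tolerance would flag the piece as fallen.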

2013

Revisiting Lucas-Kanade and Horn-Schunck

Authors
Pinto, AMG; Moreira, AP; Costa, PG; Correia, MV;

Publication
JCEI - Journal of Computer Engineering and Informatics

Abstract

2013

Towards Extraction of Topological Maps from 2D and 3D Occupancy Grids

Authors
Santos, FN; Moreira, AP; Costa, PC;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2013

Abstract
Cooperation with humans is a requirement for the next generation of robots, so it is necessary to model how robots can sense, know, share and acquire knowledge from human interaction. Unlike traditional SLAM (Simultaneous Localization and Mapping) methods, which do not interpret sensor information beyond the geometric level, these capabilities require an environment map representation similar to the human one. Topological maps are one option to translate these geometric maps into a more abstract representation of the world and to bring the robot's knowledge closer to human perception. This paper presents a novel approach to translate a 3D grid map into a topological map. The approach was optimized to obtain results similar to those obtained when the task is performed by a human. A further novel feature of this approach is the augmentation of the topological map with features such as walls and doors.
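The grid-to-topology idea can be illustrated on a toy 2D occupancy grid: detect doorway-like cells, flood-fill the remaining free space into rooms, and connect rooms that meet through a doorway. This is a simplified stand-in for the paper's method (which works on 2D and 3D grids and is tuned to match human segmentations); the grid and the crude door detector are invented for the example.

```python
# Sketch: 2D occupancy grid -> place graph. 0 = free, 1 = wall.
# Two rooms joined by a one-cell doorway at (2, 3).
GRID = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1],
]

def neighbours(r, c):
    return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

def doorways(grid):
    """Free cells squeezed between two walls (a crude door detector)."""
    doors = set()
    for r in range(1, len(grid) - 1):
        for c in range(1, len(grid[0]) - 1):
            if grid[r][c] == 0 and (
                    (grid[r - 1][c] == 1 and grid[r + 1][c] == 1) or
                    (grid[r][c - 1] == 1 and grid[r][c + 1] == 1)):
                doors.add((r, c))
    return doors

def topological_map(grid):
    doors = doorways(grid)
    label, rooms = {}, 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 0 and (r, c) not in doors and (r, c) not in label:
                rooms += 1  # flood-fill a new room, stopping at doorways
                stack = [(r, c)]
                while stack:
                    cell = stack.pop()
                    if cell in label:
                        continue
                    label[cell] = rooms
                    for nr, nc in neighbours(*cell):
                        if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                                and grid[nr][nc] == 0
                                and (nr, nc) not in doors):
                            stack.append((nr, nc))
    edges = set()
    for d in doors:  # rooms that touch through a doorway become edges
        touching = {label[n] for n in neighbours(*d) if n in label}
        if len(touching) == 2:
            edges.add(tuple(sorted(touching)))
    return rooms, edges

print(topological_map(GRID))  # (2, {(1, 2)})
```

The resulting graph (nodes = rooms, edges = doorways) is the abstract, human-like representation the abstract argues for, with walls and doors surviving as labelled features.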

2014

Unsupervised flow-based motion analysis for an autonomous moving system

Authors
Pinto, AM; Correia, MV; Paulo Moreira, AP; Costa, PG;

Publication
IMAGE AND VISION COMPUTING

Abstract
This article discusses motion analysis based on dense optical flow fields for a new generation of robotic moving systems with real-time constraints. It focuses on a surveillance scenario where a specially designed autonomous mobile robot uses a monocular camera to perceive motion in the environment. Computational resources and processing time are two of the most critical aspects in robotics; therefore, two non-parametric techniques are proposed, namely the Hybrid Hierarchical Optical Flow Segmentation and the Hybrid Density-Based Optical Flow Segmentation. Both methods extract the moving objects by performing two consecutive operations: refining and collecting. During the refining phase, the flow field is decomposed into a set of clusters based on descriptive motion properties. These properties are used in the collecting stage by a hierarchical or density-based scheme to merge the clusters that represent different motion models. In addition, a model selection method is introduced. This novel method analyzes the flow field and estimates the number of distinct moving objects using a Bayesian formulation. The research evaluates the performance achieved by the methods in a realistic surveillance situation. The experiments conducted show that the proposed methods extract reliable motion information in real time without specialized computers. Moreover, the resulting segmentation is less computationally demanding than other recent methods, making both techniques suitable for most robotic or surveillance applications.
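The refine-then-collect pipeline can be sketched in miniature: quantise a dense flow field into clusters of similar motion (refining), then merge clusters whose mean motions are close (collecting). The greedy merge below is a loose stand-in for the paper's hierarchical and density-based schemes, and the flow field and thresholds are invented for illustration.

```python
# Sketch of refining (quantise flow vectors) and collecting (merge
# clusters with similar mean motion). Synthetic data, assumed thresholds.
import math

def refine(flow, cell=1.0):
    """Group pixels into clusters of similar flow (refining phase)."""
    clusters = {}
    for px, (u, v) in flow.items():
        key = (round(u / cell), round(v / cell))
        clusters.setdefault(key, []).append(px)
    return clusters

def collect(clusters, flow, eps=1.5):
    """Greedily merge clusters whose mean motions are within eps."""
    def mean(pixels):
        us = [flow[p][0] for p in pixels]
        vs = [flow[p][1] for p in pixels]
        return sum(us) / len(us), sum(vs) / len(vs)

    merged = []
    for pixels in clusters.values():
        mu, mv = mean(pixels)
        for group in merged:
            gu, gv = mean(group)
            if math.hypot(mu - gu, mv - gv) < eps:
                group.extend(pixels)  # same motion model: merge
                break
        else:
            merged.append(list(pixels))  # a new motion model
    return merged

# Two moving objects plus a static background.
flow = {}
flow.update({(x, 0): (5.0, 0.0) for x in range(5)})   # moving right
flow.update({(x, 1): (0.0, -4.0) for x in range(5)})  # moving up
flow.update({(x, 2): (0.0, 0.0) for x in range(5)})   # static background
objects = collect(refine(flow), flow)
print(len(objects))  # 3 distinct motion models
```

The paper's Bayesian model-selection step would play the role of choosing how many such motion models the merge should produce.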

2014

A visual place recognition procedure with a Markov chain based filter

Authors
dos Santos, FN; Costa, P; Moreira, AP;

Publication
2014 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
Recognizing a place at a visual glance is the first capacity humans use to understand where they are. Making this capacity available to robots will increase the redundancy of the localization systems available on robots and improve semantic localization systems. However, achieving this capacity requires a robust visual place recognition procedure that can be used by an indoor robot. This paper presents an approach that estimates the robot's location in the semantic space from a single image. The approach extracts a global descriptor from each camera image, which is the input of a Support Vector Machine classifier. To improve the classifier's accuracy, a Markov chain formalism was adopted to constrain the probability flow according to the place connections. The approach was tested, with and without the Markov chain filter, on videos acquired from three robots in three different indoor scenarios. The Markov chain filter significantly improved the approach's accuracy.
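The Markov chain filter can be illustrated with a tiny belief update: predict the belief through a transition matrix that encodes the floor plan, then reweight by the per-frame classifier scores. The place names, transition probabilities and scores below are invented for the example; the paper's per-frame scores come from an SVM over a global image descriptor.

```python
# Sketch of constraining per-frame place scores with a Markov chain over
# place connectivity. All names and numbers are illustrative.

PLACES = ["corridor", "office", "lab"]
# Transition matrix encoding the floor plan: office and lab connect
# only through the corridor, so a direct office -> lab jump is forbidden.
T = {
    "corridor": {"corridor": 0.6, "office": 0.2, "lab": 0.2},
    "office":   {"corridor": 0.3, "office": 0.7, "lab": 0.0},
    "lab":      {"corridor": 0.3, "office": 0.0, "lab": 0.7},
}

def step(belief, scores):
    """One filter update: predict through T, weight by classifier scores."""
    predicted = {p: sum(belief[q] * T[q][p] for q in PLACES) for p in PLACES}
    posterior = {p: predicted[p] * scores[p] for p in PLACES}
    z = sum(posterior.values())
    return {p: posterior[p] / z for p in PLACES}

belief = {p: 1.0 / len(PLACES) for p in PLACES}
belief = step(belief, {"corridor": 0.1, "office": 0.8, "lab": 0.1})
# A spurious "lab" frame right after "office" is damped, because the
# chain allows no direct office -> lab transition.
belief = step(belief, {"corridor": 0.2, "office": 0.3, "lab": 0.5})
print(max(belief, key=belief.get))  # office
```

This is why the filtered variant in the abstract outperforms the raw per-frame classifier: isolated misclassifications that contradict the building's connectivity are suppressed.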
