Publications

Publications by CRIIS

2014

Enhancing dynamic videos for surveillance and robotic applications: The robust bilateral and temporal filter

Authors
Pinto, AM; Costa, PG; Correia, MV; Moreira, AP;

Publication
SIGNAL PROCESSING-IMAGE COMMUNICATION

Abstract
Over the last few decades, surveillance applications have been an extremely useful tool to prevent dangerous situations and to identify abnormal activities. However, the majority of surveillance videos are often corrupted by different kinds of noise that degrade structured patterns and fine edges. This makes image processing tasks such as object detection, motion segmentation, tracking, and the identification and recognition of humans even more difficult. This paper proposes a novel filtering technique named robust bilateral and temporal (RBLT), which resorts to the spatial and temporal evolution of sequences to conduct the filtering process while preserving relevant image information. A pixel value is estimated using a robust combination of the spatial characteristics of the pixel's neighborhood and its own temporal evolution. Thus, robust statistics concepts and the temporal correlation between consecutive images are incorporated together, resulting in a reliable and configurable filter formulation that makes it possible to reconstruct highly dynamic and degraded image sequences. The filtering is evaluated using qualitative judgments and several assessment metrics, for different Gaussian and salt-and-pepper noise conditions. Extensive experiments considering videos obtained by stationary and non-stationary cameras prove that the proposed technique achieves a good perceptual quality when filtering sequences corrupted with a strong noise component.
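To make the idea of combining an edge-preserving spatial filter with temporal evolution concrete, here is a minimal Python/OpenCV sketch that blends a bilateral-filtered frame with a running temporal average. It is only an illustration of the general spatio-temporal principle; the function name, blending weights and update rule are assumptions, not the authors' RBLT formulation (which combines the two cues with robust statistics).

```python
import cv2
import numpy as np

def bilateral_temporal_filter(frames, d=5, sigma_color=30, sigma_space=5, alpha=0.6):
    """Simplified spatio-temporal filter (illustrative, not the RBLT itself):
    each frame is bilaterally filtered (edge-preserving spatial smoothing) and
    then blended with a running temporal average of previously filtered frames."""
    filtered = []
    temporal_avg = None
    for frame in frames:
        spatial = cv2.bilateralFilter(frame, d, sigma_color, sigma_space)
        if temporal_avg is None:
            temporal_avg = spatial.astype(np.float32)
        out = alpha * spatial.astype(np.float32) + (1.0 - alpha) * temporal_avg
        # update the running average so static regions keep accumulating evidence
        temporal_avg = 0.5 * temporal_avg + 0.5 * out
        filtered.append(np.clip(out, 0, 255).astype(np.uint8))
    return filtered
```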

2014

New Marker for Real-Time Industrial Robot Programming by Motion Imitation

Authors
Ferreira, M; Costa, P; Rocha, L; Paulo Moreira, AP; Pires, N;

Publication
2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA)

Abstract
This paper presents a new marker for robot programming by demonstration through motion imitation. The device is based on high-intensity LEDs (light-emitting diodes) which are captured by a pair of industrial cameras. Using stereoscopy, the marker supplies 6-DoF (degrees of freedom) human wrist tracking with both position and orientation data. We propose a robust technique for camera and stereo calibration which maps camera coordinates directly into the desired robot frame, using a single LED. The calibration and tracking procedures are thoroughly described. The tests show that the marker provides a new, robust, accurate and intuitive method for industrial robot programming. The system is able to perform in real time and requires only a single pair of industrial cameras, though more can be used for improved effectiveness and accuracy.
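As an illustration of how a single bright LED can be located in 3-D from a calibrated stereo pair, the following Python/OpenCV sketch detects the LED centroid in each image and triangulates it. The helper names, the threshold and the projection matrices are hypothetical; the sketch recovers only the LED position, not the paper's full 6-DoF wrist tracking or its calibration into the robot frame.

```python
import cv2
import numpy as np

def detect_led(gray_image, threshold=240):
    """Detect the brightest blob (the high-intensity LED) and return its pixel centroid."""
    _, mask = cv2.threshold(gray_image, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def triangulate_led(P_left, P_right, uv_left, uv_right):
    """Triangulate the LED's 3-D position from two 3x4 projection matrices
    (obtained from a prior stereo calibration) and its pixel coordinates."""
    pts4d = cv2.triangulatePoints(P_left, P_right,
                                  uv_left.reshape(2, 1).astype(np.float64),
                                  uv_right.reshape(2, 1).astype(np.float64))
    return (pts4d[:3] / pts4d[3]).ravel()  # homogeneous -> Euclidean coordinates
```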

2014

Unsupervised flow-based motion analysis for an autonomous moving system

Authors
Pinto, AM; Correia, MV; Paulo Moreira, AP; Costa, PG;

Publication
IMAGE AND VISION COMPUTING

Abstract
This article discusses motion analysis based on dense optical flow fields for a new generation of robotic moving systems with real-time constraints. It focuses on a surveillance scenario where a specially designed autonomous mobile robot uses a monocular camera for perceiving motion in the environment. Computational resources and processing time are two of the most critical aspects in robotics, and therefore two non-parametric techniques are proposed, namely the Hybrid Hierarchical Optical Flow Segmentation and the Hybrid Density-Based Optical Flow Segmentation. Both methods are able to extract the moving objects by performing two consecutive operations: refining and collecting. During the refining phase, the flow field is decomposed into a set of clusters based on descriptive motion properties. These properties are used in the collecting stage by a hierarchical or density-based scheme to merge the clusters that represent different motion models. In addition, a model selection method is introduced. This novel method analyzes the flow field and estimates the number of distinct moving objects using a Bayesian formulation. The research evaluates the performance achieved by the methods in a realistic surveillance situation. The experiments conducted proved that the proposed methods extract reliable motion information in real time and without using specialized computers. Moreover, the resulting segmentation is less computationally demanding than other recent methods and is therefore suitable for most robotic or surveillance applications.
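A rough analogue of the refine-and-collect idea can be sketched with dense optical flow followed by density-based clustering of moving pixels on (position, flow) features. The parameters and feature scaling below are illustrative assumptions, not the paper's Hybrid Hierarchical or Hybrid Density-Based segmentation.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def segment_moving_regions(prev_gray, curr_gray, mag_thresh=1.0):
    """Dense optical flow (Farneback) followed by density-based clustering of
    moving pixels on (x, y, u, v) features -- a generic sketch of flow-based
    motion segmentation, not the paper's hybrid methods."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    ys, xs = np.nonzero(mag > mag_thresh)          # keep only moving pixels
    if len(xs) == 0:
        return np.empty((0,)), np.empty((0, 4))
    # weight the flow components so clusters reflect motion as well as position
    feats = np.column_stack([xs, ys, 10 * flow[ys, xs, 0], 10 * flow[ys, xs, 1]])
    labels = DBSCAN(eps=8, min_samples=20).fit_predict(feats)
    return labels, feats
```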

2014

A visual place recognition procedure with a Markov chain based filter

Authors
dos Santos, FN; Costa, P; Moreira, AP;

Publication
2014 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
Recognizing a place at a visual glance is the first capacity humans use to understand where they are. Making this capacity available to robots makes it possible to increase the redundancy of the localization systems available on robots and to improve semantic localization systems. However, achieving this capacity requires a robust visual place recognition procedure that can be used by an indoor robot. This paper presents an approach that estimates the robot location in the semantic space from a single image. This approach extracts a global descriptor from each camera image, which is the input of a Support Vector Machine classifier. In order to improve the classifier accuracy, a Markov chain formalism was considered to constrain the probability flow according to the place connections. This approach was tested using videos acquired from three robots in three different indoor scenarios, with and without the Markov chain filter. The use of the Markov chain filter showed a significant improvement in the accuracy of the approach.
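The way a Markov chain can constrain the probability flow between connected places can be sketched as a simple Bayes filter over the per-frame SVM class probabilities. The transition matrix encoding place connectivity and the update rule below are generic assumptions, not necessarily the paper's exact formalism; the per-frame probabilities could come, for example, from an SVM trained on the global descriptors with probabilistic outputs enabled.

```python
import numpy as np

def markov_filter(svm_probabilities, transition_matrix, prior=None):
    """Temporally smooth per-frame class probabilities with a Markov chain
    whose transition matrix encodes which places are physically connected.
    transition_matrix[i, j] is assumed to be P(place j at t | place i at t-1)."""
    n_places = transition_matrix.shape[0]
    belief = np.full(n_places, 1.0 / n_places) if prior is None else prior
    filtered = []
    for obs in svm_probabilities:                  # obs: P(place | image) for one frame
        predicted = transition_matrix.T @ belief   # prediction through place connectivity
        belief = predicted * obs                   # measurement update with the SVM output
        belief /= belief.sum()
        filtered.append(belief.copy())
    return np.array(filtered)
```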

2014

An Architecture for Visual Motion Perception of a Surveillance-based Autonomous Robot

Authors
Pinto, AM; Costa, PG; Moreira, AP;

Publication
2014 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
This research presents an innovative mobile robotic system designed for active surveillance operations. This mobile robot moves along a rail and is equipped with a monocular camera. Thus, it enhances the surveillance capability when compared to conventional systems (mainly composed of multiple static cameras). In addition, the paper proposes a technique for multi-object tracking called MTMP (Multi-Tracking of Motion Profiles). The MTMP resorts to a formulation based on the Kalman filter and tracks several moving objects using motion profiles. A motion profile is characterized by the dominant flow vector and is computed using the optical flow signature with removal of outliers. A similarity measure based on the Mahalanobis distance is used by the MTMP to associate the moving objects over frames. The experiments conducted in realistic environments have proved that the static perception mode of the proposed robot is able to detect and track multiple moving objects in a short period of time and without using specialized computers. In addition, the MTMP exhibits good computational performance, since it takes less than 5 milliseconds to compute. Therefore, the results show that the estimation of motion profiles is suitable for analyzing motion in image sequences.
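The Mahalanobis-based association step can be illustrated with a short Python sketch that gates measured motion profiles against the predicted measurements of existing Kalman tracks. The greedy assignment strategy and the chi-square gate value are assumptions for illustration, not the exact MTMP data-association rule.

```python
import numpy as np

def mahalanobis_distance(z, z_pred, S):
    """Mahalanobis distance between a measured motion profile z and a track's
    predicted measurement z_pred, with innovation covariance S."""
    innovation = z - z_pred
    return float(np.sqrt(innovation @ np.linalg.inv(S) @ innovation))

def associate(measurements, tracks, gate=9.21):
    """Greedy nearest-neighbour association of measurements to Kalman tracks.
    Each track is a (z_pred, S) pair; gate ~ chi-square 99% quantile for 2 dof,
    applied to the squared Mahalanobis distance."""
    assignments = {}
    for i, z in enumerate(measurements):
        best, best_d2 = None, gate
        for j, (z_pred, S) in enumerate(tracks):
            d2 = mahalanobis_distance(z, z_pred, S) ** 2
            if d2 < best_d2 and j not in assignments.values():
                best, best_d2 = j, d2
        assignments[i] = best    # None means "start a new track"
    return assignments
```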

2014

Fully-Automated Strength, Agility and Endurance Tests Assessment: An Integrated Low Cost Approach Based on an Instrumented Chair

Authors
Goncalves, J; Batista, J; Costa, P;

Publication
2014 IEEE EMERGING TECHNOLOGY AND FACTORY AUTOMATION (ETFA)

Abstract
This paper describes the prototyping of an instrumented chair that allows the "Timed Up and Go", the "30-Second Chair Stand" and the "Hand-Force" test assessments to be fully automated. The presented functional chair prototype is a low-cost approach that uses inexpensive sensors and the Arduino platform as the data acquisition board, with its software developed in LabVIEW. The "Timed Up and Go" test consists in measuring the time spent standing up from a chair, walking three meters at maximum speed without running, turning around a cone and returning to the initial position. The "30-Second Chair Stand" test consists in counting the number of completed chair stands in 30 seconds. These are agility, strength and endurance tests that are easy to set up and execute, although they lack repeatability whenever the measurements are taken manually, due to the gross errors that are introduced. The "Hand-Force" test consists in measuring hand strength; the relevant data are the peak and average values over several trials. These data are important in order to evaluate hand rehabilitation treatment results.
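As a sketch of the kind of processing performed on the acquired signals, the following Python snippet computes the peak and average hand force from a grip trial and counts completed sit-to-stand cycles from a hypothetical boolean seat-switch signal. The sensor formats and function names are assumptions; the actual prototype acquires data with an Arduino and processes it in LabVIEW.

```python
import numpy as np

def hand_force_stats(samples):
    """Peak and average force from one hand-grip trial (sensor readings, e.g. in N)."""
    samples = np.asarray(samples, dtype=float)
    return samples.max(), samples.mean()

def count_chair_stands(seat_occupied, sample_rate_hz, window_s=30):
    """Count completed sit-to-stand cycles from a boolean seat-switch signal
    sampled at sample_rate_hz over a 30-second window."""
    n = int(window_s * sample_rate_hz)
    sig = np.asarray(seat_occupied[:n], dtype=int)
    # a completed stand is a transition from occupied (1) to empty (0) and back
    leaves_seat = np.flatnonzero(np.diff(sig) == -1)
    returns_to_seat = np.flatnonzero(np.diff(sig) == 1)
    return min(len(leaves_seat), len(returns_to_seat))
```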
