About

I completed my PhD in Electrical and Computer Engineering at the University of Porto in 1994.

I am currently an Associate Professor at the Department of Electrical and Computer Engineering of the Faculty of Engineering of the University of Porto (FEUP), where I teach in the areas of telecommunications and signal processing.

I have been a researcher at INESC TEC since 1985, working in the areas of image and video processing and computer vision.

Details

  • Name

    Luís Corte-Real
  • Position

    Senior Researcher
  • Since

    1 June 1985
Publications

2023

From a Visual Scene to a Virtual Representation: A Cross-Domain Review

Authors
Pereira, A; Carvalho, P; Pereira, N; Viana, P; Corte-Real, L;

Publication
IEEE ACCESS

Abstract
The widespread use of smartphones and other low-cost equipment as recording devices, the massive growth in bandwidth, and the ever-growing demand for new applications with enhanced capabilities have made visual data a must in several scenarios, including surveillance, sports, retail, entertainment, and intelligent vehicles. Despite significant advances in analyzing and extracting data from images and video, there is a lack of solutions able to analyze and semantically describe the information in the visual scene so that it can be efficiently used and repurposed. Scientific contributions have focused on individual aspects or on specific problems and application areas, and no cross-domain solution is available to implement a complete system that enables information passing between cross-cutting algorithms. This paper analyses the problem from an end-to-end perspective, i.e., from the visual scene analysis to the representation of information in a virtual environment, including how the extracted data can be described and stored. A simple processing pipeline is introduced to set up a structure for discussing challenges and opportunities in the different steps of the entire process, making it possible to identify current gaps in the literature. The work reviews various technologies specifically from the perspective of their applicability to an end-to-end pipeline for scene analysis and synthesis, along with an extensive analysis of datasets for relevant tasks.

2023

Synthesizing Human Activity for Data Generation

Authors
Romero, A; Carvalho, P; Corte-Real, L; Pereira, A;

Publication
JOURNAL OF IMAGING

Abstract
Gathering sufficiently representative data, such as data about human actions, shapes, and facial expressions, is costly and time-consuming, yet such data are needed to train robust models. This has led to the creation of techniques such as transfer learning or data augmentation. However, these are often insufficient. To address this, we propose a semi-automated mechanism that allows the generation and editing of visual scenes with synthetic humans performing various actions, with features such as background modification and manual adjustments of the 3D avatars to allow users to create data with greater variability. We also propose an evaluation methodology for assessing the results obtained using our method, which is two-fold: (i) the usage of an action classifier on the output data resulting from the mechanism and (ii) the generation of masks of the avatars and the actors to compare them through segmentation. The avatars were robust to occlusion, and their actions were recognizable and accurate to their respective input actors. The results also showed that even though the action classifier concentrates on the pose and movement of the synthetic humans, it strongly depends on contextual information to precisely recognize the actions. Generating the avatars for complex activities also proved problematic for action recognition and the clean and precise formation of the masks.
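The mask-based comparison described in (ii) can be illustrated with a generic intersection-over-union (IoU) score between an avatar mask and the corresponding actor mask. This is a minimal sketch of that kind of evaluation, not the paper's actual code; masks are represented here as sets of foreground pixel coordinates purely for simplicity.

```python
def mask_iou(mask_a: set, mask_b: set) -> float:
    """IoU between two masks given as sets of (row, col) foreground pixels.

    Returns 1.0 for a perfect match, 0.0 for disjoint masks.
    """
    if not mask_a and not mask_b:
        return 1.0  # both masks empty: treat as a perfect match
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union

# Toy example: two overlapping 4x4 square masks on an image grid.
actor = {(r, c) for r in range(2, 6) for c in range(2, 6)}   # 16 pixels
avatar = {(r, c) for r in range(3, 7) for c in range(3, 7)}  # shifted copy
print(round(mask_iou(actor, avatar), 3))  # 9 shared pixels / 23 in union
```

A higher IoU indicates that the synthesized avatar occupies the same image region as the real actor it replaces.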

2022

Boosting color similarity decisions using the CIEDE2000_PF Metric

Authors
Pereira, A; Carvalho, P; Corte-Real, L;

Publication
SIGNAL IMAGE AND VIDEO PROCESSING

Abstract
Color comparison is a key aspect in many areas of application, including industrial applications, and different metrics have been proposed. In many applications, this comparison is required to be closely related to human perception of color differences, thus adding complexity to the process. To tackle this, different approaches were proposed through the years, culminating in the CIEDE2000 formulation. In our previous work, we showed that simple color properties could be used to reduce the computational time of a color similarity decision process that employed this metric, which is recognized as having high computational complexity. In this paper, we show mathematically and experimentally that these findings can be adapted and extended to the recently proposed CIEDE2000_PF metric, which has been recommended by the CIE for industrial applications. Moreover, we propose new efficient models that not only achieve lower error rates, but also outperform the results obtained for the CIEDE2000 metric.

2020

Efficient CIEDE2000-Based Color Similarity Decision for Computer Vision

Authors
Pereira, A; Carvalho, P; Coelho, G; Corte-Real, L;

Publication
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract
Color and color differences are critical aspects in many image processing and computer vision applications. A paradigmatic example is object segmentation, where color distances can greatly influence the performance of the algorithms. Metrics for color difference have been proposed in the literature, including the definition of standards such as CIEDE2000, which quantifies the change in visual perception of two given colors. This standard has been recommended for industrial computer vision applications, but the benefits of its application have been impaired by the complexity of the formula. This paper proposes a new strategy that improves the usability of the CIEDE2000 metric when a maximum acceptable distance can be imposed. We argue that, for applications where a maximum value, above which colors are considered to be different, can be established, it is possible to reduce the amount of calculations of the metric by preemptively analyzing the color features. This methodology encompasses the benefits of the metric while overcoming its computational limitations, thus broadening the range of applications of CIEDE2000 in computer vision, in terms of both algorithms and computational resource requirements.
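The general decision pattern behind this strategy can be sketched as follows: when only a thresholded yes/no answer is needed, a cheap lower bound on the distance can reject dissimilar colors before the expensive formula is evaluated. This sketch uses the plain Euclidean CIELAB distance as a stand-in for the full CIEDE2000 formula (for which the lightness difference alone is a genuine lower bound); the paper's actual conditions on CIEDE2000 are more elaborate.

```python
import math

def delta_e_lab(c1, c2):
    """Euclidean CIELAB distance (Delta E*ab), standing in here for the
    far more expensive CIEDE2000 formula discussed in the paper."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def similar(c1, c2, threshold):
    """Decide whether two Lab colors are within `threshold` of each other.

    Cheap pre-check: |Delta L| alone is a lower bound on the Euclidean
    distance, so if it already exceeds the threshold the colors cannot be
    similar and the full metric is never evaluated.
    """
    if abs(c1[0] - c2[0]) > threshold:  # lightness alone settles the decision
        return False
    return delta_e_lab(c1, c2) <= threshold

print(similar((50.0, 10.0, 10.0), (80.0, 10.0, 10.0), 20.0))  # False: pre-check fires
print(similar((50.0, 10.0, 10.0), (52.0, 12.0, 11.0), 20.0))  # True
```

In a segmentation loop over millions of pixel pairs, most comparisons are rejected by the pre-check, so the expensive formula runs only on the few candidates that survive it.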

2020

Texture collinearity foreground segmentation for night videos

Authors
Martins, I; Carvalho, P; Corte-Real, L; Alba-Castro, JL;

Publication
COMPUTER VISION AND IMAGE UNDERSTANDING

Abstract
One of the most difficult scenarios for unsupervised segmentation of moving objects is found in nighttime videos, where the main challenges are the poor illumination conditions resulting in low visibility of objects, very strong lights, surface-reflected light, a great variance of light intensity, sudden illumination changes, hard shadows, camouflaged objects, and noise. This paper proposes a novel method, coined COLBMOG (COLlinearity Boosted MOG), devised specifically for foreground segmentation in nighttime videos, that shows the ability to overcome some of the limitations of state-of-the-art methods and still perform well in daytime scenarios. It is a texture-based classification method, using local texture modeling, complemented by a color-based classification method. The local texture at the pixel neighborhood is modeled as an N-dimensional vector. For a given pixel, the classification is based on the collinearity between this feature in the input frame and the reference background frame. For this purpose, a multimodal temporal model of the collinearity between texture vectors of background pixels is maintained. COLBMOG was objectively evaluated using the ChangeDetection.net (CDnet) 2014 benchmark, Night Videos category. COLBMOG ranks first among all the unsupervised methods. A detailed analysis of the results revealed the superior performance of the proposed method compared to the best performing state-of-the-art methods in this category, particularly evident in the presence of the most complex situations, where all the algorithms tend to fail.
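The collinearity test at the heart of this classification can be illustrated with plain cosine similarity between two texture vectors: a texture that merely brightens or dims keeps the same direction (collinearity near 1) and is classified as background, while a genuinely different texture does not. This is only an illustrative sketch of the collinearity measure; COLBMOG's multimodal temporal background model is not reproduced here.

```python
import math

def collinearity(u, v):
    """Cosine of the angle between two texture vectors.

    1.0 means perfectly collinear (same texture up to a brightness scale);
    values well below 1.0 suggest a different texture, i.e. foreground.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0  # a zero vector carries no texture direction
    return dot / (norm_u * norm_v)

background = [0.2, 0.5, 0.9, 0.4]   # reference texture at a pixel neighborhood
same_scaled = [0.4, 1.0, 1.8, 0.8]  # same texture under stronger illumination
foreground = [0.9, 0.1, 0.2, 0.7]   # a different texture

print(round(collinearity(background, same_scaled), 3))  # 1.0: kept as background
print(round(collinearity(background, foreground), 3))   # lower: flagged as foreground
```

Because the measure is invariant to uniform scaling of the vector, it tolerates exactly the kind of global illumination changes that break intensity-based background subtraction at night.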

Supervised theses

2023

Synthesizing Human Activity for Data Generation

Author
Ana Ysabella Rodrigues Romero

Institution
UP-FEUP

2023

Video-Based Tracking for 3D Scene Analysis

Author
Américo José Rodrigues Pereira

Institution
UP-FEUP

2022

Video-Based Tracking for 3D Scene Analysis

Author
Américo José Rodrigues Pereira

Institution
UP-FEUP

2022

Segmentation and Extraction of Human Characteristics for 3D Video Synthesis

Author
André Filipe Cardoso Madureira

Institution
UP-FEUP