About

Maria Teresa Andrade is an Assistant Professor at FEUP, in the DEEC. She received her licenciatura (1986), MSc (1992) and PhD (2008) degrees in Electrical and Computer Engineering from FEUP. She takes part in research activities at INESC TEC, within the Multimedia Systems area of the Centre for Telecommunications and Multimedia. Areas of interest: context-aware multimedia applications in mobile and heterogeneous environments; semantic technologies and content recommendation; 3D and multi-view video streaming; quality of service and quality of experience in multimedia services; digital television and cinema, and new media.

Topics of interest
Details

  • Name

    Maria Teresa Andrade
  • Position

    Senior Researcher
  • Since

    22 November 1996
Publications

2023

A Dataset for User Visual Behaviour with Multi-View Video Content

Authors
da Costa, TS; Andrade, MT; Viana, P; Silva, NC;

Publication
PROCEEDINGS OF THE 14TH ACM MULTIMEDIA SYSTEMS CONFERENCE, MMSYS 2023

Abstract
Immersive video applications impose impractical bandwidth requirements on best-effort networks. With Multi-View (MV) streaming, these can be minimized by resorting to view prediction techniques. SmoothMV is a multi-view system that uses a non-intrusive head-tracking mechanism to detect the viewer's interest and select appropriate views. By coupling Neural Networks (NNs) to anticipate the viewer's interest, view-switching latency is likely to be reduced. The objective of this paper is twofold: 1) present a solution for the acquisition of gaze data from users when viewing MV content; 2) describe a dataset, collected with a large-scale testbed, capable of being used to train NNs to predict the user's viewing interest. Tracking data from head movements was obtained from 45 participants using an Intel Realsense F200 camera, with 7 video playlists, each being viewed a minimum of 17 times. This dataset is publicly available to the research community and constitutes an important contribution to reducing the current scarcity of such data. Tools to obtain saliency/heat maps and generate complementary plots are also provided as an open-source software package.
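
As an illustration of the kind of tooling the abstract refers to for saliency/heat maps, the following is a minimal sketch (not the released open-source package) that accumulates normalised gaze samples into a smoothed heat map; the frame resolution, smoothing sigma and random sample data are assumptions for demonstration only.

```python
# Minimal sketch, not the authors' released tools: build a gaze heat map from
# per-frame gaze coordinates normalised to [0, 1]. Resolution and sigma are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

def gaze_heatmap(gaze_xy, width=1920, height=1080, sigma=40):
    """Accumulate (x, y) gaze samples into a Gaussian-smoothed heat map."""
    heat = np.zeros((height, width), dtype=np.float32)
    for x, y in gaze_xy:
        px = min(int(x * width), width - 1)
        py = min(int(y * height), height - 1)
        heat[py, px] += 1.0
    heat = gaussian_filter(heat, sigma=sigma)  # spread each fixation point
    return heat / heat.max() if heat.max() > 0 else heat

# Stand-in data: random samples playing the role of one participant's session.
samples = np.random.rand(500, 2)
plt.imshow(gaze_heatmap(samples), cmap="hot")
plt.axis("off")
plt.savefig("heatmap.png", bbox_inches="tight")
```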

2023

Deep Learning Approach for Seamless Navigation in Multi-View Streaming Applications

Authors
Costa, TS; Viana, P; Andrade, MT;

Publication
IEEE ACCESS

Abstract
Quality of Experience (QoE) in multi-view streaming systems is known to be severely affected by the latency associated with view-switching procedures. Anticipating the navigation intentions of the viewer on the multi-view scene could provide the means to greatly reduce such latency. The research work presented in this article builds on this premise by proposing a new predictive view-selection mechanism. A VGG16-inspired Convolutional Neural Network (CNN) is used to identify the viewer's focus of attention and determine which views are most likely to be needed in the near term, i.e., the viewer's near-term viewing intentions. This way, those views can be locally buffered before they are actually needed. To this end, two datasets were used to evaluate the prediction performance and the impact on latency, in particular when compared to the solution implemented in the previous version of our multi-view streaming system. Results obtained with this work translate into a generalized improvement in perceived QoE. A significant reduction in latency during view-switching procedures was effectively achieved. Moreover, results also demonstrated that the prediction of the user's visual interest was achieved with a high level of accuracy. An experimental platform was also established on which future predictive models can be integrated and compared with previously implemented models.
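
The sketch below is not the paper's exact architecture, only a hedged illustration of the general idea: a VGG16 backbone with its final layer replaced so that an input frame is mapped onto one of N candidate views, allowing the predicted views to be buffered before a switch is requested. NUM_VIEWS and the input resolution are assumptions.

```python
# Hedged sketch (not the paper's model): VGG16-based view classifier in PyTorch.
import torch
import torch.nn as nn
from torchvision import models

NUM_VIEWS = 7  # hypothetical number of camera views in the multi-view scene

model = models.vgg16()                             # randomly initialised backbone
model.classifier[6] = nn.Linear(4096, NUM_VIEWS)   # replace the 1000-class head

def predict_next_view(frames: torch.Tensor) -> torch.Tensor:
    """Return the index of the most likely next view for each input frame."""
    model.eval()
    with torch.no_grad():
        logits = model(frames)                     # shape (batch, NUM_VIEWS)
    return logits.argmax(dim=1)

# Usage with a dummy batch of two 3-channel 224x224 frames.
print(predict_next_view(torch.randn(2, 3, 224, 224)))
```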

2022

Photo2Video: Semantic-Aware Deep Learning-Based Video Generation from Still Content

Authors
Viana, P; Andrade, MT; Carvalho, P; Vilaca, L; Teixeira, IN; Costa, T; Jonker, P;

Publication
JOURNAL OF IMAGING

Abstract
Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. It is true that ML is already popular for content creation, but the progress achieved so far addresses essentially textual content or the identification and selection of specific types of content. A wealth of possibilities are yet to be explored by bringing the use of ML into the multimedia creative process, allowing the knowledge inferred by the former to influence automatically how new multimedia content is created. The work presented in this article provides contributions in three distinct ways towards this goal: firstly, it proposes a methodology to re-train popular neural network models in identifying new thematic concepts in static visual content and attaching meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects and corresponding tools that can be automatically called upon to apply such effects in a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow by offering to the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip, taking into account the information acquired. The final result strongly contrasts with current standard approaches of creating random movements, by implementing an intelligent content- and context-aware video.
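
Purely to make the "content-aware video from a still photo" idea concrete, the sketch below implements one simplified effect: a zoom from the full photo towards a previously detected region of interest, written out as a short clip with OpenCV. The ROI coordinates, file names and OpenCV-based approach are assumptions, not the article's pipeline.

```python
# Illustrative sketch only: zoom from a full still photo towards a region of
# interest (ROI) and write the result as a short clip. The ROI would come from
# the region-based annotation step described in the article; here it is a
# hypothetical input.
import cv2

def roi_zoom_clip(image_path, roi, out_path="clip.mp4", seconds=4, fps=25):
    """Write a clip that zooms from the full photo into roi = (x, y, w, h)."""
    img = cv2.imread(image_path)
    H, W = img.shape[:2]
    x, y, w, h = roi
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (W, H))
    n = seconds * fps
    for i in range(n):
        t = i / (n - 1)                            # 0 = full frame, 1 = ROI
        cx, cy = x * t, y * t                      # interpolate crop origin
        cw, ch = W + (w - W) * t, H + (h - H) * t  # interpolate crop size
        crop = img[int(cy):int(cy + ch), int(cx):int(cx + cw)]
        writer.write(cv2.resize(crop, (W, H)))
    writer.release()

# Example call with a placeholder photo and ROI box:
# roi_zoom_clip("photo.jpg", roi=(400, 200, 640, 360))
```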

2022

Improving word embeddings in Portuguese: increasing accuracy while reducing the size of the corpus

Authors
Pinto, JP; Viana, P; Teixeira, I; Andrade, M;

Publication
PEERJ COMPUTER SCIENCE

Abstract
The subjectiveness of multimedia content description has a strong negative impact on tag-based information retrieval. In our work, we propose enhancing available descriptions by adding semantically related tags. To cope with this objective, we use a word embedding technique based on the Word2Vec neural network parameterized and trained using a new dataset built from online newspapers. A large number of news stories was scraped and pre-processed to build a new dataset. Our target language is Portuguese, one of the most spoken languages worldwide. The results achieved significantly outperform similar existing solutions developed in the scope of different languages, including Portuguese. Contributions include also an online application and API available for external use. Although the presented work has been designed to enhance multimedia content annotation, it can be used in several other application areas.
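
A minimal sketch of the tag-expansion idea, assuming the gensim implementation of Word2Vec rather than the authors' own code and API; the corpus file name and hyperparameters are placeholders.

```python
# Minimal sketch, assuming gensim's Word2Vec rather than the authors' code:
# train on a tokenised Portuguese news corpus and expand a tag with its
# nearest neighbours in the embedding space. File name is a placeholder.
from gensim.models import Word2Vec

with open("news_corpus_pt.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f]   # one pre-processed sentence per line

model = Word2Vec(sentences, vector_size=300, window=5, min_count=5, workers=4)

def expand_tag(tag: str, topn: int = 5):
    """Return the tag plus its closest neighbours, if it is in the vocabulary."""
    if tag not in model.wv:
        return [tag]
    return [tag] + [word for word, _ in model.wv.most_similar(tag, topn=topn)]

print(expand_tag("praia"))   # related tags such as "mar" or "areia" might appear
```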

2021

A Systematic Survey of ML Datasets for Prime CV Research Areas-Media and Metadata

Authors
Castro, HF; Cardoso, JS; Andrade, MT;

Publication
DATA

Abstract
The ever-growing capabilities of computers have enabled pursuing Computer Vision through Machine Learning (i.e., MLCV). ML tools require large amounts of information to learn from (ML datasets). These are costly to produce but have received reduced attention regarding standardization. This prevents the cooperative production and exploitation of these resources, impedes countless synergies, and hinders ML research. No global view exists of the MLCV dataset tissue. Acquiring it is fundamental to enable standardization. We provide an extensive survey of the evolution and current state of MLCV datasets (1994 to 2019) for a set of specific CV areas as well as a quantitative and qualitative analysis of the results. Data were gathered from online scientific databases (e.g., Google Scholar, CiteSeerX). We reveal the heterogeneous plethora that comprises the MLCV dataset tissue; their continuous growth in volume and complexity; the specificities of the evolution of their media and metadata components regarding a range of aspects; and that MLCV progress requires the construction of a global standardized (structuring, manipulating, and sharing) MLCV "library". Accordingly, we formulate a novel interpretation of this dataset collective as a global tissue of synthetic cognitive visual memories and define the immediately necessary steps to advance its standardization and integration.

Supervised theses

2023

Weather Video

Author
João Santos Gama Caldas

Institution
UP-FEUP

2023

A Visual Computing approach for assisting Film Analysis by using Automatic Stylistic Annotations and Data Visualisation

Author
Inês Filipa Nunes Teixeira

Institution
UP-FEUP

2023

Enhanced multiview experiences through remote content selection and dynamic quality adaptation

Author
Tiago André Queiroz Soares da Costa

Institution
UP-FEUP

2023

Multiplatform Editorial Design for Science Communication - The case of the INESC TEC Science & Society magazine

Author
Ana Filipa Marques Mesquita

Institution
UP-FEUP

2023

Production of Multimedia Content and Support for the Branding of an Innovative Training Programme in the Safety Area, Covering E-Learning Content, Sensorization, Virtual Reality and Gaming

Author
Francisco Monteiro de Magalhães

Institution
UP-FEUP