About
Maria Teresa Andrade is an Assistant Professor at FEUP, in the Department of Electrical and Computer Engineering (DEEC). She obtained her degree in Electrical and Computer Engineering in 1986, her MSc in 1992 and her PhD in 2008, all at FEUP. She participates in research activities at INESC TEC, as a member of the Multimedia Systems team of the Centre for Telecommunications and Multimedia. Her main interests include context awareness; mobile and adaptable multimedia applications in heterogeneous environments; 3D and multiview video streaming; quality of service and quality of experience in multimedia services; semantic technologies and content recommendation; and digital television, digital cinema and new media.

Details

  • Name

    Maria Teresa Andrade
  • Role

    Senior Researcher
  • Since

    22nd November 1996
Publications

2023

A Dataset for User Visual Behaviour with Multi-View Video Content

Authors
da Costa, TS; Andrade, MT; Viana, P; Silva, NC;

Publication
PROCEEDINGS OF THE 14TH ACM MULTIMEDIA SYSTEMS CONFERENCE, MMSYS 2023

Abstract
Immersive video applications impose impractical bandwidth requirements for best-effort networks. With Multi-View (MV) streaming, these can be minimized by resorting to view prediction techniques. SmoothMV is a multi-view system that uses a non-intrusive head tracking mechanism to detect the viewer's interest and select appropriate views. By coupling Neural Networks (NNs) to anticipate the viewer's interest, a reduction of view-switching latency is likely to be obtained. The objective of this paper is twofold: 1) Present a solution for acquisition of gaze data from users when viewing MV content; 2) Describe a dataset, collected with a large-scale testbed, capable of being used to train NNs to predict the user's viewing interest. Tracking data from head movements was obtained from 45 participants using an Intel Realsense F200 camera, with 7 video playlists, each being viewed a minimum of 17 times. This dataset is publicly available to the research community and constitutes an important contribution to reducing the current scarcity of such data. Tools to obtain saliency/heat maps and generate complementary plots are also provided as an open-source software package.
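To illustrate the kind of saliency/heat-map aggregation the released tooling performs, a minimal sketch (hypothetical function names; not the authors' open-source package) of binning normalised tracking samples into a coarse grid:

```python
from collections import Counter

def heat_map(samples, grid=(4, 4)):
    """Bin normalised (x, y) head-pose/gaze samples in [0, 1) into a
    rows x cols grid and return per-cell hit counts, the raw material
    for a saliency/heat map."""
    rows, cols = grid
    counts = Counter()
    for x, y in samples:
        cell = (min(int(y * rows), rows - 1), min(int(x * cols), cols - 1))
        counts[cell] += 1
    return counts

samples = [(0.10, 0.10), (0.12, 0.08), (0.90, 0.90)]
print(heat_map(samples))  # two samples in cell (0, 0), one in (3, 3)
```

In a real pipeline these counts would be normalised and rendered as a colour map over the video frame.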

2023

Deep Learning Approach for Seamless Navigation in Multi-View Streaming Applications

Authors
Costa, TS; Viana, P; Andrade, MT;

Publication
IEEE ACCESS

Abstract
Quality of Experience (QoE) in multi-view streaming systems is known to be severely affected by the latency associated with view-switching procedures. Anticipating the navigation intentions of the viewer on the multi-view scene could provide the means to greatly reduce such latency. The research work presented in this article builds on this premise by proposing a new predictive view-selection mechanism. A VGG16-inspired Convolutional Neural Network (CNN) is used to identify the viewer's focus of attention and determine which views would be most suited to be presented in the brief term, i.e., the near-term viewing intentions. This way, those views can be locally buffered before they are actually needed. To this aim, two datasets were used to evaluate the prediction performance and impact on latency, in particular when compared to the solution implemented in the previous version of our multi-view streaming system. Results obtained with this work translate into a generalized improvement in perceived QoE. A significant reduction in latency during view-switching procedures was effectively achieved. Moreover, results also demonstrated that the prediction of the user's visual interest was achieved with a high level of accuracy. An experimental platform was also established on which future predictive models can be integrated and compared with previously implemented models.
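The buffering step that the predictive view-selection mechanism enables can be sketched as follows (a hypothetical illustration, not the article's implementation): given the per-view probabilities produced by the CNN, the client prefetches the most likely next views.

```python
def views_to_prefetch(probs, k=2):
    """Given per-view attention probabilities (e.g. the softmax output
    of a view-prediction CNN), return the indices of the k most likely
    next views so the client can buffer them before a switch occurs."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return ranked[:k]

# hypothetical softmax output over 5 camera views
probs = [0.05, 0.60, 0.25, 0.07, 0.03]
print(views_to_prefetch(probs))  # [1, 2]
```

Buffering the top-k candidates locally is what converts an accurate prediction into the reduced view-switching latency reported above.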

2022

Photo2Video: Semantic-Aware Deep Learning-Based Video Generation from Still Content

Authors
Viana, P; Andrade, MT; Carvalho, P; Vilaca, L; Teixeira, IN; Costa, T; Jonker, P;

Publication
JOURNAL OF IMAGING

Abstract
Applying machine learning (ML), and especially deep learning, to understand visual content is becoming common practice in many application areas. However, little attention has been given to its use within the multimedia creative domain. It is true that ML is already popular for content creation, but the progress achieved so far addresses essentially textual content or the identification and selection of specific types of content. A wealth of possibilities are yet to be explored by bringing the use of ML into the multimedia creative process, allowing the knowledge inferred by the former to influence automatically how new multimedia content is created. The work presented in this article provides contributions in three distinct ways towards this goal: firstly, it proposes a methodology to re-train popular neural network models in identifying new thematic concepts in static visual content and attaching meaningful annotations to the detected regions of interest; secondly, it presents varied visual digital effects and corresponding tools that can be automatically called upon to apply such effects in a previously analyzed photo; thirdly, it defines a complete automated creative workflow, from the acquisition of a photograph and corresponding contextual data, through the ML region-based annotation, to the automatic application of digital effects and generation of a semantically aware multimedia story driven by the previously derived situational and visual contextual data. Additionally, it presents a variant of this automated workflow by offering to the user the possibility of manipulating the automatic annotations in an assisted manner. The final aim is to transform a static digital photo into a short video clip, taking into account the information acquired. The final result strongly contrasts with current standard approaches of creating random movements, by implementing an intelligent content- and context-aware video.
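The step that lets ML annotations drive content creation can be sketched as a mapping from detected concepts to digital effects (the concept labels and effect names below are hypothetical; the article's actual taxonomy and effect set are not reproduced here):

```python
# Hypothetical concept-to-effect table standing in for the real system's.
EFFECTS = {"sky": "slow_pan", "face": "ken_burns_zoom", "water": "ripple"}

def plan_effects(annotations, default="crossfade"):
    """Turn region annotations (concept label, bounding box) into an
    ordered effect plan, one step per annotated region of the photo."""
    return [
        {"region": box, "effect": EFFECTS.get(label, default)}
        for label, box in annotations
    ]

plan = plan_effects([("face", (10, 10, 80, 80)), ("tree", (0, 0, 40, 40))])
print(plan)
```

Sequencing such per-region steps over time is what turns the annotated still photo into a content-aware clip rather than a random pan.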

2022

Improving word embeddings in Portuguese: increasing accuracy while reducing the size of the corpus

Authors
Pinto, JP; Viana, P; Teixeira, I; Andrade, M;

Publication
PEERJ COMPUTER SCIENCE

Abstract
The subjectiveness of multimedia content description has a strong negative impact on tag-based information retrieval. In our work, we propose enhancing available descriptions by adding semantically related tags. To cope with this objective, we use a word embedding technique based on the Word2Vec neural network parameterized and trained using a new dataset built from online newspapers. A large number of news stories was scraped and pre-processed to build a new dataset. Our target language is Portuguese, one of the most spoken languages worldwide. The results achieved significantly outperform similar existing solutions developed in the scope of different languages, including Portuguese. Contributions include also an online application and API available for external use. Although the presented work has been designed to enhance multimedia content annotation, it can be used in several other application areas.
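The core enrichment idea, adding tags whose embeddings sit close to an existing tag's, can be sketched with toy 2-D vectors standing in for the trained Word2Vec embeddings (vectors and threshold below are illustrative, not from the paper):

```python
import math

# Toy embeddings; a real deployment would load Word2Vec vectors
# trained on the Portuguese news corpus.
VECS = {"praia": (0.9, 0.1), "mar": (0.85, 0.2), "carro": (0.1, 0.9)}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def enrich(tag, threshold=0.9):
    """Return tags whose embedding is close enough to `tag`'s --
    the semantically related tags appended to a sparse description."""
    ref = VECS[tag]
    return [w for w, v in VECS.items()
            if w != tag and cosine(ref, v) >= threshold]

print(enrich("praia"))  # ['mar']
```

The same nearest-neighbour query is what the online application and API expose for external use.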

2021

A Systematic Survey of ML Datasets for Prime CV Research Areas-Media and Metadata

Authors
Castro, HF; Cardoso, JS; Andrade, MT;

Publication
DATA

Abstract
The ever-growing capabilities of computers have enabled pursuing Computer Vision through Machine Learning (i.e., MLCV). ML tools require large amounts of information to learn from (ML datasets). These are costly to produce but have received reduced attention regarding standardization. This prevents the cooperative production and exploitation of these resources, impedes countless synergies, and hinders ML research. No global view exists of the MLCV dataset tissue. Acquiring it is fundamental to enable standardization. We provide an extensive survey of the evolution and current state of MLCV datasets (1994 to 2019) for a set of specific CV areas as well as a quantitative and qualitative analysis of the results. Data were gathered from online scientific databases (e.g., Google Scholar, CiteSeerX). We reveal the heterogeneous plethora that comprises the MLCV dataset tissue; their continuous growth in volume and complexity; the specificities of the evolution of their media and metadata components regarding a range of aspects; and that MLCV progress requires the construction of a global standardized (structuring, manipulating, and sharing) MLCV "library". Accordingly, we formulate a novel interpretation of this dataset collective as a global tissue of synthetic cognitive visual memories and define the immediately necessary steps to advance its standardization and integration.

Supervised Theses

2023

Enhanced multiview experiences through remote content selection and dynamic quality adaptation

Author
Tiago André Queiroz Soares da Costa

Institution
UP-FEUP

2023

Multiplatform Editorial Design for Science Communication - The Case of the INESC TEC Science & Society Magazine

Author
Ana Filipa Marques Mesquita

Institution
UP-FEUP

2023

Production of Multimedia Content and Support for the Brand Development of an Innovative Training Programme in the Safety Area, Covering E-Learning, Sensorization, Virtual Reality and Gaming Content

Author
Francisco Monteiro de Magalhães

Institution
UP-FEUP

2023

Bandwidth Prediction for Adaptive Video Streaming

Author
Gustavo Manuel Esteves Pelayo

Institution
UP-FEUP

2023

Weather Video

Author
João Santos Gama Caldas

Institution
UP-FEUP