2020
Authors
Viana, P; Carvalho, P; Andrade, MT; Jonker, PP; Papanikolaou, V; Teixeira, IN; Vilaça, L; Pinto, JP; Costa, T;
Publication
MM '20: The 28th ACM International Conference on Multimedia, Virtual Event / Seattle, WA, USA, October 12-16, 2020
Abstract
Multimedia content production is nowadays widespread due to technological advances, namely supported by smartphones and social media. Although the massive amount of media content brings new opportunities to the industry, it also obscures the relevance of marketing content, meant to retain existing audiences and attract new ones. This leads to a pressing need to produce such content as quickly and engagingly as possible. Creating it automatically would decrease both production costs and time, particularly by using static media for the creation of short storytelling animated clips. We propose an innovative approach that uses context and content information to transform a still photo into an appealing context-aware video clip. Our solution thus contributes to the state of the art in computer vision and multimedia technologies and assists content creators with a value-added service to automatically build rich contextualized multimedia stories from single photographs. © 2020 Owner/Author.
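As a flavour of what such automation can involve, the sketch below renders a short pan-and-zoom ("Ken Burns" style) clip from a single photograph. This is a minimal illustration of one plausible building block, not the authors' pipeline; the region of interest, clip duration, and the use of OpenCV are all assumptions made for the example.

```python
# Hypothetical building block for photo-to-video generation: animate a
# still image by interpolating a crop window from the full frame towards
# a region of interest. Not the paper's method, just an illustration.
import cv2
import numpy as np

def photo_to_clip(image_path, out_path="clip.mp4", roi=(0.3, 0.3, 0.7, 0.7),
                  seconds=4, fps=25, size=(640, 360)):
    """Render a pan/zoom clip from the full frame into `roi`
    (fractions of width/height: x0, y0, x1, y1)."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    h, w = img.shape[:2]
    x0, y0, x1, y1 = roi
    start = np.array([0, 0, w, h], dtype=float)        # full frame
    end = np.array([x0 * w, y0 * h, x1 * w, y1 * h])   # target crop
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, size)
    n = seconds * fps
    for i in range(n):
        t = i / (n - 1)                                # linear ease
        cx0, cy0, cx1, cy1 = (1 - t) * start + t * end
        crop = img[int(cy0):int(cy1), int(cx0):int(cx1)]
        writer.write(cv2.resize(crop, size))
    writer.release()
```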
2019
Authors
Marcal, J; Borges, MM; Viana, P; Carvalho, PS;
Publication
13TH INTERNATIONAL TECHNOLOGY, EDUCATION AND DEVELOPMENT CONFERENCE (INTED2019)
Abstract
In recent years, audiovisual didactic content in the Physics domain has been disseminated mainly through the YouTube platform. Many aspects of video production activities can increase students' self-esteem and their satisfaction with the learning experience, promote a positive attitude towards the subject, provide weaker students with broad individual tutoring, and encourage students to discuss with each other, exchange opinions, and compare the results of lab activities. On the other hand, video can support research activities, offering researchers access to rich aggregated data for investigating learning processes. The main objective of this study is to understand the use of online tools in the context of teaching; to this end, we correlate studies using audiovisual resources in Physics education with our own testbed based on an online video annotation tool. Results show that students reported gains from oral lectures and from access to new sources of learning.
2020
Authors
Andrade, MT; Santos, P; Costa, TS; Freitas, L; Golestani, S; Viana, P; Rodrigues, J; Ulisses, A;
Publication
Proceedings - 2020 TRON Symposium, TRONSHOW 2020
Abstract
The media sector is constantly evolving and, in the last few years, such evolution has been driven by a number of convergence paradigms, notably that between broadband and broadcast technologies with the introduction of IT and IP technology. The present trend is to switch entirely from a closed niche that uses highly specialized equipment to off-the-shelf IT-centric solutions offering easy configuration and remote operation. The aim is to enable common computers to be turned into highly capable media devices that act as connected objects, adopting an IoT-like paradigm. This vision, though, is not easily implemented, given that most media industry professionals do not yet feel comfortable operating in the IT technology space, and also due to the stringent requirements that exist in this industry. The Joint Task Force on Networked Media is defining specifications that aim at overcoming such barriers. In this article we present a novel solution that follows the guidelines delivered by this group to set up a remotely operated media production facility, totally based on IP and IT technology, constituting a step towards the realization of the IoT concept in professional media environments. The focus is on two complementary components, namely the GUI Agent and the MW Agent, which are not covered by the defined specifications but are crucial to speed up the deployment of concrete solutions that can be easily operated by non-IT and non-IP experts in a transparent and ubiquitous way. © 2020 TRON Forum.
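To illustrate the IoT-like paradigm described above, the sketch below shows how a commodity computer could announce itself to a registry and keep its registration alive with heartbeats. The JT-NM roadmap builds on AMWA NMOS, and the endpoint paths follow the shape of the NMOS IS-04 Registration API, but the registry URL, payload fields, and intervals here are illustrative assumptions, not a verified integration with the system in the paper.

```python
# Hedged sketch: a media node registering with an NMOS IS-04-style
# registry and sending keep-alive heartbeats. Host, payload details,
# and timing are hypothetical.
import time
import uuid
import requests

REGISTRY = "http://registry.example:8080/x-nmos/registration/v1.2"  # hypothetical host

node_id = str(uuid.uuid4())
node = {
    "type": "node",
    "data": {
        "id": node_id,
        "version": f"{int(time.time())}:0",
        "label": "commodity-pc-as-media-node",
        "href": "http://192.0.2.10:1080/",
        "caps": {},
        "services": [],
    },
}

# Register the node, then keep it alive with periodic heartbeats so the
# registry can garbage-collect devices that disappear from the network.
requests.post(f"{REGISTRY}/resource", json=node, timeout=5)
for _ in range(3):  # a real device would loop for as long as it runs
    requests.post(f"{REGISTRY}/health/nodes/{node_id}", timeout=5)
    time.sleep(5)
```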
2021
Authors
da Costa, TS; Andrade, MT; Viana, P;
Publication
PROCEEDINGS OF THE 2021 INTERNATIONAL WORKSHOP ON IMMERSIVE MIXED AND VIRTUAL ENVIRONMENT SYSTEMS (MMVE '21)
Abstract
Multi-view has the potential to offer immersive viewing experiences to users, as an alternative to 360-degree and Virtual Reality (VR) applications. In multi-view, a limited number of camera views are sent to the client and missing views are synthesised locally. Given the substantial complexity associated with view synthesis, considerable attention has been given to optimising the trade-off between bandwidth gains and computing resources, targeting smooth navigation and viewing quality. A still relatively unexplored field is the optimisation of the way navigation interactivity is achieved, i.e. how the user indicates to the system the selection of new viewpoints. In this article, we introduce SmoothMV, a multi-view system that uses a non-intrusive head tracking approach to enhance navigation and the Quality of Experience (QoE) of the viewer. It relies on a novel Hot&Cold matrix concept to translate head positioning data into viewing angle selections. Streaming of selected views is done using MPEG-DASH, where a proposed extension to the standard descriptors enables consistent and flexible view identification.
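The exact semantics of the Hot&Cold matrix are defined in the paper; the sketch below only illustrates the general problem it addresses: quantising head positioning data into one of N discrete camera views while avoiding flicker near view boundaries. The number of views, the angular range, and the hysteresis margin are hypothetical parameters chosen for the example.

```python
# Hedged sketch of mapping head-tracking data to discrete camera views:
# quantise head yaw into one of N views, with a hysteresis margin so
# small head movements near a boundary do not make the view flicker.
N_VIEWS = 8            # assumed number of camera views
FOV_DEG = 120.0        # assumed horizontal range covered by the cameras
HYSTERESIS = 0.15      # fraction of a view's angular width

def select_view(yaw_deg: float, current: int) -> int:
    """Return the view index for head yaw in [-FOV/2, +FOV/2] degrees."""
    width = FOV_DEG / N_VIEWS
    # Head position expressed in "view units" from the leftmost view.
    pos = (yaw_deg + FOV_DEG / 2) / width
    candidate = min(N_VIEWS - 1, max(0, int(pos)))
    if candidate == current:
        return current
    # Only switch once the head has moved clearly past the boundary
    # between the current and the candidate view.
    boundary = max(candidate, current)
    if abs(pos - boundary) > HYSTERESIS:
        return candidate
    return current

# Example: a small yaw near the centre keeps the current view stable.
print(select_view(yaw_deg=8.0, current=4))  # -> 4
```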
2020
Authors
Costa, TS; Andrade, MT; Viana, P;
Publication
Intelligent Systems Design and Applications - 20th International Conference on Intelligent Systems Design and Applications (ISDA 2020) held December 12-15, 2020
Abstract
2021
Authors
Almeida, J; Vilaca, L; Teixeira, IN; Viana, P;
Publication
APPLIED SCIENCES-BASEL
Abstract
Understanding how acting bridges the emotional bond between spectators and films is essential to depict how humans interact with this rapidly growing digital medium. In recent decades, the research community has made promising progress in developing facial expression recognition (FER) methods. However, no emphasis has been put on cinematographic content, which is complex by nature due to the visual techniques used to convey the desired emotions. Our work represents a step towards emotion identification in cinema through the analysis of facial expressions. We present a comprehensive overview of the most relevant datasets used for FER, highlighting problems caused by their heterogeneity and by the lack of a universal model of emotions. Building upon this understanding, we evaluated these datasets with standard image classification models to analyse the feasibility of using facial expressions to determine the emotional charge of a film. To cope with the lack of datasets for the scope under analysis, we demonstrated the feasibility of using a generic dataset for the training process and proposed a new way to look at emotions by creating clusters of emotions based on the evidence obtained in the experiments.
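As a hedged illustration of the clustering step described above, the sketch below groups the discrete emotion labels common to FER datasets into data-driven clusters, assuming a trained classifier has already produced a probability profile per label. The label set, the number of clusters, and the synthetic profiles are assumptions made for the example, not the paper's actual data or results.

```python
# Illustrative sketch: cluster emotion labels whose predicted-probability
# profiles look alike. All data below is synthetic.
import numpy as np
from sklearn.cluster import KMeans

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

rng = np.random.default_rng(0)
# Stand-in for per-label mean probability vectors (one row per emotion),
# e.g. averaged softmax outputs of a classifier over images of that emotion.
label_profiles = rng.dirichlet(alpha=np.ones(len(LABELS)), size=len(LABELS))

# Group emotions into a smaller set of data-driven clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
assignments = kmeans.fit_predict(label_profiles)

for cluster in range(3):
    members = [l for l, a in zip(LABELS, assignments) if a == cluster]
    print(f"cluster {cluster}: {members}")
```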