About

Gilberto Bernardes holds a PhD in Digital Media (2014) from the University of Porto, under the auspices of the University of Texas at Austin, and a Master's degree in Music, cum laude (2008), from the Amsterdamse Hogeschool voor de Kunsten. Bernardes is currently an Assistant Professor at the University of Porto and a Senior Researcher at INESC TEC, where he leads the Sound and Music Computing Lab. He has over 90 publications, including 14 articles in high-impact-factor journals (mostly Q1 and Q2 on Scimago) and 14 book chapters. Bernardes has co-authored scientific articles with 152 international collaborators. He has contributed continuously to the training of young scientists: he currently supervises six doctoral theses and has supervised more than 40 master's dissertations to completion.


He has received nine awards, including the Fraunhofer Portugal Prize for the best doctoral thesis and several best-paper awards at conferences (e.g., DCE and CMMR). He has participated in 12 R&D projects as a senior and junior researcher. In the eight years since defending his PhD, Bernardes has attracted competitive funding for an FCT-funded postdoctoral project and an exploratory grant for a market-oriented R&D prototype. He currently leads the Portuguese team (Work Package leader) at INESC TEC in the Horizon Europe project EU-DIGIFOLK and in the Erasmus+ project Open Minds. In his artistic activities, Bernardes has performed in renowned music venues such as the Bimhuis, the Concertgebouw, Casa da Música, Berklee College of Music, New York University, and the Seoul Computer Music Festival.

Topics of interest
Details


  • Name

    Gilberto Bernardes Almeida
  • Position

    Senior Researcher
  • Since

    14 July 2014
  • Nationality

    Portugal
  • Contacts

    +351222094299
    gilberto.b.almeida@inesctec.pt
Publications

2024

Acting Emotions: a comprehensive dataset of elicited emotions

Authors
Aly, L; Godinho, L; Bota, P; Bernardes, G; da Silva, HP;

Publication
SCIENTIFIC DATA

Abstract
Emotions encompass physiological systems that can be assessed through biosignals like electromyography and electrocardiography. Prior investigations in emotion recognition have primarily focused on general population samples, overlooking the specific context of theatre actors who possess exceptional abilities in conveying emotions to an audience, namely acting emotions. We conducted a study involving 11 professional actors to collect physiological data for acting emotions to investigate the correlation between biosignals and emotion expression. Our contribution is the DECEiVeR (DatasEt aCting Emotions Valence aRousal) dataset, a comprehensive collection of various physiological recordings meticulously curated to facilitate the recognition of a set of five emotions. Moreover, we conduct a preliminary analysis on modeling the recognition of acting emotions from raw, low- and mid-level temporal and spectral data and the reliability of physiological data across time. Our dataset aims to leverage a deeper understanding of the intricate interplay between biosignals and emotional expression. It provides valuable insights into acting emotion recognition and affective computing by exposing the degree to which biosignals capture emotions elicited from inner stimuli.

2024

Exploring Mode Identification in Irish Folk Music with Unsupervised Machine Learning and Template-Based Techniques

Authors
Navarro Cáceres, JJ; Carvalho, N; Bernardes, G; Jiménez Bravo, M; Navarro Cáceres, M;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Extensive computational research has been dedicated to detecting keys and modes in tonal Western music within the major and minor modes. Little research has been dedicated to other modes and musical expressions, such as folk or non-Western music. This paper tackles this limitation by comparing traditional template-based with unsupervised machine-learning methods for diatonic mode detection within folk music. Template-based methods are grounded in music theory and cognition and use predefined profiles from which we compare a musical piece. Unsupervised machine learning autonomously discovers patterns embedded in the data. As a case study, the authors apply the methods to a dataset of Irish folk music called The Session on four diatonic modes: Ionian, Dorian, Mixolydian, and Aeolian. Our evaluation assesses the performance of template-based and unsupervised methods, reaching an average accuracy of about 80%. We discuss the applicability of the methods, namely the potential of unsupervised learning to process unknown musical sources beyond modes with predefined templates. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.

2024

Fourier Qualia Wavescapes: Hierarchical Analyses of Set Class Quality and Ambiguity

Authors
Pereira, S; Affatato, G; Bernardes, G; Moss, FC;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract

2024

Fourier (Common-Tone) Phase Spaces are in Tune with Variational Autoencoders’ Latent Space

Authors
Carvalho, N; Bernardes, G;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Expanding upon the potential of generative machine learning to create atemporal latent space representations of musical-theoretical and cognitive interest, we delve into their explainability by formulating and testing hypotheses on their alignment with DFT phase spaces from {0,1}^12 pitch classes and {0,1}^128 pitch distributions – capturing common-tone tonal functional harmony and parsimonious voice-leading principles, respectively. We use 371 J.S. Bach chorales as a benchmark to train a Variational Autoencoder on a representative piano roll encoding. The Spearman rank correlation between the latent space and the two before-mentioned DFT phase spaces exhibits a robust rank association of approximately .65±.05 for pitch classes and .61±.05 for pitch distributions, denoting an effective preservation of harmonic functional clusters per region and parsimonious voice-leading. Furthermore, our analysis prompts essential inquiries about the stylistic characteristics inferred from the rank deviations to the DFT phase space and the balance between the two DFT phase spaces. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.

2024

Modal Pitch Space: A Computational Model of Melodic Pitch Attraction in Folk Music

Authors
Bernardes, G; Carvalho, N;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
We introduce a computational model that quantifies melodic pitch attraction in diatonic modal folk music, extending Lerdahl’s Tonal Pitch Space. The model incorporates four melodic pitch indicators: vertical embedding distance, horizontal step distance, semitone interval distance, and relative stability. Its scalability is exclusively achieved through prior mode and tonic information, eliminating the need in existing models for additional chordal context. Noteworthy contributions encompass the incorporation of empirically-driven folk music knowledge and the calculation of indicator weights. Empirical evaluation, spanning Dutch, Irish, and Spanish folk traditions across Ionian, Dorian, Mixolydian, and Aeolian modes, uncovers a robust linear relationship between melodic pitch transitions and the pitch attraction model infused with empirically-derived knowledge. Indicator weights demonstrate cross-tradition generalizability, highlighting the significance of vertical embedding distance and relative stability. In contrast, semitone and horizontal step distances assume residual and null functions, respectively. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.

Supervised theses

2023

Content-Based (Re)Creation of Loops for Music Performance

Author
Diogo Miguel Filipe Cocharro

Institution
UP-FEUP

2023

The sonification of genetic variability as a communication tool

Author
Clara Rodrigues Tapadas

Institution
UP-FEUP

2023

AVE - Assessing Ambiguity in Speech-based Affective Virtual Environments

Author
Jorge Federico Forero Rodríguez

Institution
UP-FEUP

2023

Sound Designing Brands and Establishing Sonic Identities: the Sons Em Trânsito Music Agency

Author
João Pedro Melo Albino de Sá Cardielos

Institution
UP-FEUP

2023

Promoting Popular Music Engagement Through Spatial Audio

Author
José Ricardo Barboza

Institution
UP-FEUP