About

Gilberto Bernardes holds a PhD in Digital Media (2014) from the University of Porto, under the auspices of the University of Texas at Austin, and a Master's degree in Music, cum laude (2008), from the Amsterdamse Hogeschool voor de Kunsten. Bernardes is currently an Assistant Professor at the University of Porto and a Senior Researcher at INESC TEC, where he leads the Sound and Music Computing Lab. He has over 90 publications, of which 14 are articles in journals with high impact factors (mostly Q1 and Q2 on Scimago) and 14 are book chapters. Bernardes has worked with 152 international collaborators in co-authoring scientific papers. He has contributed continuously to the training of young scientists: he currently supervises six doctoral theses and has seen more than 40 master's dissertations through to completion.


He has received nine awards, including the Fraunhofer Portugal Prize for the best doctoral thesis and several best paper awards at conferences (e.g., DCE and CMMR). He has participated in 12 R&D projects as a senior and junior researcher. In the eight years since defending his PhD, Bernardes has attracted competitive funding for a postdoctoral project financed by the FCT and an exploratory grant for a market-driven R&D prototype. He currently leads the Portuguese team (Work Package leader) at INESC TEC in the Horizon Europe project EU-DIGIFOLK and in the Erasmus+ project Open Minds. In his artistic activities, Bernardes has performed in renowned concert venues such as the Bimhuis, the Concertgebouw, Casa da Música, Berklee College of Music, New York University, and the Seoul Computer Music Festival.

Topics of interest

Details


  • Name

    Gilberto Bernardes Almeida
  • Role

    Senior Researcher
  • Since

    14 July 2014
Publications

2025

Evaluation of Lyrics Extraction from Folk Music Sheets Using Vision Language Models (VLMs)

Authors
Sales Mendes, A; Lozano Murciego, Á; Silva, LA; Jiménez Bravo, M; Navarro Cáceres, M; Bernardes, G;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Monodic folk music has traditionally been preserved in physical documents. It constitutes a vast archive that needs to be digitized to facilitate comprehensive analysis using AI techniques. A critical component of music score digitization is the transcription of lyrics, an extensively researched process in Optical Character Recognition (OCR) and document layout analysis. These fields typically require the development of specific models that operate in several stages: first, to detect the bounding boxes of specific texts, then to identify the language, and finally, to recognize the characters. Recent advances in vision language models (VLMs) have introduced multimodal capabilities, such as processing images and text, which are competitive with traditional OCR methods. This paper proposes an end-to-end system for extracting lyrics from images of handwritten musical scores. We aim to evaluate the performance of two state-of-the-art VLMs to determine whether they can eliminate the need to develop specialized text recognition and OCR models for this task. The results of the study, obtained from a dataset in a real-world application environment, are presented along with promising new research directions in the field. This progress contributes to preserving cultural heritage and opens up new possibilities for global analysis and research in folk music. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
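Evaluating extracted lyrics against a ground-truth transcription is typically done with the Character Error Rate (CER); the abstract does not name its metric, so the following is an illustrative sketch of how such output is commonly scored, not the paper's evaluation code:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein edit distance divided by reference length."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # edit distances against the empty reference prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,      # deletion
                         cur[j - 1] + 1,   # insertion
                         prev[j - 1] + (reference[i - 1] != hypothesis[j - 1]))
        prev = cur
    return prev[n] / max(m, 1)

# One substituted character in a six-character reference: CER = 1/6.
print(cer("lyrics", "lyr1cs"))
```

A CER of 0 means a perfect transcription; values above 1 are possible when the hypothesis is much longer than the reference.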

2024

Acting Emotions: a comprehensive dataset of elicited emotions

Authors
Aly, L; Godinho, L; Bota, P; Bernardes, G; da Silva, HP;

Publication
SCIENTIFIC DATA

Abstract
Emotions encompass physiological systems that can be assessed through biosignals like electromyography and electrocardiography. Prior investigations in emotion recognition have primarily focused on general population samples, overlooking the specific context of theatre actors who possess exceptional abilities in conveying emotions to an audience, namely acting emotions. We conducted a study involving 11 professional actors to collect physiological data for acting emotions to investigate the correlation between biosignals and emotion expression. Our contribution is the DECEiVeR (DatasEt aCting Emotions Valence aRousal) dataset, a comprehensive collection of various physiological recordings meticulously curated to facilitate the recognition of a set of five emotions. Moreover, we conduct a preliminary analysis on modeling the recognition of acting emotions from raw, low- and mid-level temporal and spectral data and the reliability of physiological data across time. Our dataset aims to leverage a deeper understanding of the intricate interplay between biosignals and emotional expression. It provides valuable insights into acting emotion recognition and affective computing by exposing the degree to which biosignals capture emotions elicited from inner stimuli.

2024

Exploring Mode Identification in Irish Folk Music with Unsupervised Machine Learning and Template-Based Techniques

Authors
Navarro-Cáceres, JJ; Carvalho, N; Bernardes, G; Jiménez-Bravo, DM; Navarro-Cáceres, M;

Publication
MATHEMATICS AND COMPUTATION IN MUSIC, MCM 2024

Abstract
Extensive computational research has been dedicated to detecting keys and modes in tonal Western music within the major and minor modes. Little research has been dedicated to other modes and musical expressions, such as folk or non-Western music. This paper tackles this limitation by comparing traditional template-based with unsupervised machine-learning methods for diatonic mode detection within folk music. Template-based methods are grounded in music theory and cognition and use predefined profiles against which a musical piece is compared. Unsupervised machine learning autonomously discovers patterns embedded in the data. As a case study, the authors apply the methods to a dataset of Irish folk music called The Session on four diatonic modes: Ionian, Dorian, Mixolydian, and Aeolian. Our evaluation assesses the performance of template-based and unsupervised methods, reaching an average accuracy of about 80%. We discuss the applicability of the methods, namely the potential of unsupervised learning to process unknown musical sources beyond modes with predefined templates.
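The template-based approach described above can be sketched as follows: each diatonic mode is a rotation of the same scale set, and a piece's pitch-class histogram is correlated against the template at every transposition. The profiles and tonic/fifth weights here are illustrative assumptions, not the profiles used by the authors:

```python
import numpy as np

# Binary diatonic membership for the Ionian mode rooted at pitch class 0.
IONIAN = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1], dtype=float)

# Each mode is a rotation of the diatonic set; the offset is the semitone
# position of the mode's tonic within the Ionian scale.
MODE_OFFSETS = {"Ionian": 0, "Dorian": 2, "Mixolydian": 7, "Aeolian": 9}

def make_template(offset):
    """Weighted profile rooted at pitch class 0, with tonic and fifth emphasised."""
    profile = np.roll(IONIAN, -offset)
    profile[0], profile[7] = 3.0, 2.0  # all four modes contain a perfect fifth
    return profile

def detect_mode(pc_histogram):
    """Best-correlating (tonic, mode) pair for a 12-bin pitch-class histogram."""
    best, best_score = None, -np.inf
    for mode, offset in MODE_OFFSETS.items():
        template = make_template(offset)
        for tonic in range(12):
            score = np.corrcoef(pc_histogram, np.roll(template, tonic))[0, 1]
            if score > best_score:
                best, best_score = (tonic, mode), score
    return best

# A melody on the white keys that dwells on D and A suggests D Dorian.
hist = np.array([3, 0, 6, 0, 2, 3, 0, 3, 0, 4, 0, 2], dtype=float)
print(detect_mode(hist))  # → (2, 'Dorian')
```

The weighting matters: all four modes of one diatonic set share the same pitch classes, so a purely binary template cannot separate, say, C Ionian from D Dorian.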

2024

Fourier Qualia Wavescapes: Hierarchical Analyses of Set Class Quality and Ambiguity

Authors
Pereira, S; Affatato, G; Bernardes, G; Moss, FC;

Publication
MATHEMATICS AND COMPUTATION IN MUSIC, MCM 2024

Abstract
We introduce a novel perspective on set-class analysis combining the DFT magnitudes with the music visualisation technique of wavescapes. With such a combination, we create a visual representation of a piece's multidimensional qualia, where different colours indicate saliency in chromaticity, diadicity, triadicity, octatonicity, diatonicity, and whole-tone quality. At the centre of our methods are: 1) the formal definition of the Fourier Qualia Space (FQS), 2) its particular ordering of DFT coefficients that delineate regions linked to different musical aesthetics, and 3) the mapping of such regions into a coloured wavescape. Furthermore, we demonstrate the intrinsic capability of the FQS to express qualia ambiguity and map it into a synopsis wavescape. Finally, we showcase the application of our methods by presenting a few analytical remarks on Bach's Three-part Invention BWV 795, Debussy's Reflets dans l'eau, and Webern's Four Pieces for Violin and Piano, Op. 7, No. 1, unveiling increasingly ambiguous wavescapes.
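The six qualia named above correspond, in order, to the magnitudes of DFT coefficients 1 through 6 of a 12-dimensional pitch-class vector. A minimal sketch of that computation (illustrative only, not the paper's wavescape code):

```python
import numpy as np

# Qualia labels for DFT coefficients k = 1..6, in the order used in the abstract.
QUALIA = ["chromaticity", "diadicity", "triadicity",
          "octatonicity", "diatonicity", "whole-tone"]

def dft_qualia(pc_set):
    """Magnitudes of DFT coefficients 1-6 of a binary 12-d pitch-class vector."""
    chroma = np.zeros(12)
    chroma[list(pc_set)] = 1.0
    spectrum = np.fft.fft(chroma)
    return {q: abs(spectrum[k]) for k, q in enumerate(QUALIA, start=1)}

# A C major triad {0, 4, 7} is most salient in the third coefficient.
q = dft_qualia({0, 4, 7})
print(max(q, key=q.get))  # → triadicity
```

Coefficients 7 through 11 mirror coefficients 1 through 5 (the input is real-valued), which is why only the first six magnitudes carry distinct information.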

2024

Fourier (Common-Tone) Phase Spaces are in Tune with Variational Autoencoders' Latent Space

Authors
Carvalho, N; Bernardes, G;

Publication
MATHEMATICS AND COMPUTATION IN MUSIC, MCM 2024

Abstract
Expanding upon the potential of generative machine learning to create atemporal latent space representations of musical-theoretical and cognitive interest, we delve into their explainability by formulating and testing hypotheses on their alignment with DFT phase spaces from {0, 1}^12 pitch classes and {0, 1}^128 pitch distributions - capturing common-tone tonal functional harmony and parsimonious voice-leading principles, respectively. We use 371 J.S. Bach chorales as a benchmark to train a Variational Autoencoder on a representative piano roll encoding. The Spearman rank correlation between the latent space and the two before-mentioned DFT phase spaces exhibits a robust rank association of approximately .65 +/- .05 for pitch classes and .61 +/- .05 for pitch distributions, denoting an effective preservation of harmonic functional clusters per region and parsimonious voice-leading. Furthermore, our analysis prompts essential inquiries about the stylistic characteristics inferred from the rank deviations to the DFT phase space and the balance between the two DFT phase spaces.
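One way to probe the kind of rank association reported above is to correlate pairwise distances measured in the two spaces and take the Spearman rank correlation of the two distance vectors. The data and the linear "phase space" map below are synthetic stand-ins for illustration, not the chorale encodings or the exact procedure used in the paper:

```python
import numpy as np
from scipy.stats import spearmanr

def pairwise_dist(points):
    """Condensed vector of pairwise Euclidean distances between row vectors."""
    n = len(points)
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i in range(n) for j in range(i + 1, n)])

rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 2))          # stand-in for VAE latent codes
phase = latent @ np.array([[2.0, 0.3],     # stand-in for DFT phase coordinates:
                           [0.1, 1.5]])    # a linear distortion of the latent space
rho, _ = spearmanr(pairwise_dist(latent), pairwise_dist(phase))
print(round(rho, 2))
```

Because Spearman correlation operates on ranks, it rewards a monotone relationship between the two geometries rather than an exact metric match, which suits the "effective preservation of clusters" claim.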

Supervised theses

2023

An interactive and digital puppeteering interface for new musical expression (IDPI)

Author
Hibiki Mukai

Institution
UP-FEUP

2023

Synthesizing Soundscapes from Textual Input: Development and Comparison of Generative AI Models

Author
Márcio Cláudio Silva Duarte

Institution
UP-FEUP

2023

AVE - Assessing Ambiguity in Speech-based Affective Virtual Environments

Author
Jorge Federico Forero Rodríguez

Institution
UP-FEUP

2023

Assessing Musical Preferences of Children on the Autism Spectrum: Implications for Therapy

Author
Natália Isabel dos Santos

Institution
UP-FEUP

2023

Promoting Popular Music Engagement Through Spatial Audio

Author
José Ricardo Barboza

Institution
UP-FEUP