Publications

Publications by Gilberto Bernardes Almeida

2019

Seed: Resynthesizing environmental sounds from examples

Authors
Bernardes, G; Aly, L; Davies, MEP;

Publication
SMC 2016 - 13th Sound and Music Computing Conference, Proceedings

Abstract
In this paper we present SEED, a generative system capable of arbitrarily extending recorded environmental sounds while preserving their inherent structure. The system architecture is grounded in concepts from concatenative sound synthesis and includes three top-level modules for segmentation, analysis, and generation. An input audio signal is first temporally segmented into a collection of audio segments, which are then reduced to a dictionary of audio classes by means of an agglomerative clustering algorithm. This representation, together with a concatenation cost between audio segment boundaries, is finally used to generate sequences of audio segments of arbitrarily long duration. The system output can be varied in the generation process through simple yet effective parametric control over the creation of natural, temporally coherent, and varied audio renderings of environmental sounds.
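
A minimal Python sketch of the pipeline the abstract outlines: fixed-hop segmentation, agglomerative clustering of segment descriptors into audio classes, and cost-guided resequencing. All function names, the toy spectral descriptor, and the fixed-hop segmentation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def segment_audio(signal, hop=2048):
    """Naive fixed-hop segmentation standing in for the paper's temporal segmentation."""
    return [signal[i:i + hop] for i in range(0, len(signal) - hop, hop)]

def describe(segment):
    """Toy per-segment descriptor: spectral centroid and RMS energy."""
    spectrum = np.abs(np.fft.rfft(segment))
    bins = np.arange(len(spectrum))
    centroid = (spectrum * bins).sum() / (spectrum.sum() + 1e-9)
    return np.array([centroid, np.sqrt(np.mean(segment ** 2))])

def generate_sequence(segments, n_classes=8, length=100, seed=0):
    """Reduce segments to audio classes, then chain segments with low
    concatenation cost, yielding an arbitrarily long rendering."""
    rng = np.random.default_rng(seed)
    feats = np.stack([describe(s) for s in segments])
    labels = AgglomerativeClustering(n_clusters=n_classes).fit_predict(feats)
    out, current = [], rng.integers(len(segments))
    for _ in range(length):
        out.append(segments[current])
        # Concatenation cost approximated here as feature distance between
        # segments; pick among the cheapest few candidates for variety.
        costs = np.linalg.norm(feats - feats[current], axis=1)
        candidates = np.argsort(costs)[1:6]
        # Prefer candidates from the same audio class to preserve structure.
        same_class = [c for c in candidates if labels[c] == labels[current]]
        current = rng.choice(same_class if same_class else candidates)
    return np.concatenate(out)
```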

2019

ChordAIS: An assistive system for the generation of chord progressions with an artificial immune system

Authors
Navarro Caceres, M; Caetano, M; Bernardes, G; de Castro, LN;

Publication
Swarm and Evolutionary Computation

Abstract
Chord progressions play an important role in Western tonal music. For a novice composer, the creation of chord progressions can be challenging because it involves many subjective factors, such as the musical context, personal preference, and aesthetic choices. This work proposes ChordAIS, an interactive system that assists the user in generating chord progressions by iteratively adding new chords. At each iteration, a search for the next candidate chord is performed in the Tonal Interval Space (TIS), where distances capture perceptual features of pitch configurations on different levels, such as musical notes, chords, and scales. We use an artificial immune system (AIS) called opt-aiNet to search for candidate chords by optimizing an objective function that encodes desirable musical properties of chord progressions as distances in the TIS. Opt-aiNet is capable of finding multiple optima of multi-modal functions simultaneously, resulting in multiple good-quality candidate chords that can be added to the progression by the user. To validate ChordAIS, we performed different experiments and a listening test to evaluate the perceptual quality of the candidate chords proposed by ChordAIS. Most listeners rated the chords proposed by ChordAIS as better candidates for progressions than the chords discarded by ChordAIS. We then compared ChordAIS with two similar systems: ConChord and ChordGA, which uses a standard genetic algorithm instead of opt-aiNet. A user test showed that ChordAIS was preferred over ChordGA and ConChord. According to the results, ChordAIS was deemed capable of assisting users in the generation of tonal chord progressions by proposing good-quality candidates in all the keys tested.
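
A hedged Python sketch of the candidate-scoring idea: chords are embedded as DFT-based tonal interval vectors and ranked by distance to the previous chord and to the key profile. The weights, the two-term objective, and the exhaustive ranking (in place of the opt-aiNet search) are simplifications for illustration, not the ChordAIS code.

```python
import numpy as np

def tonal_interval_vector(chroma, weights=(2, 11, 17, 16, 19, 7)):
    """Map a 12-bin chroma to a complex 6-D vector via DFT coefficients 1..6.
    The weights are illustrative, not the published TIS values."""
    chroma = np.asarray(chroma, dtype=float)
    chroma = chroma / (chroma.sum() + 1e-9)       # normalize to unit mass
    spectrum = np.fft.fft(chroma)
    return np.array(weights) * spectrum[1:7]

def chord_chroma(pitch_classes):
    """Binary chroma for a set of pitch classes, e.g. C major = {0, 4, 7}."""
    c = np.zeros(12)
    c[list(pitch_classes)] = 1.0
    return c

def objective(candidate, previous, key_profile):
    """Lower is better: stay close to the previous chord and to the key."""
    t_c = tonal_interval_vector(chord_chroma(candidate))
    t_p = tonal_interval_vector(chord_chroma(previous))
    t_k = tonal_interval_vector(key_profile)
    return np.abs(t_c - t_p).sum() + np.abs(t_c - t_k).sum()

# Rank a few triads as candidates to follow C major in a C major context.
key_c_major = chord_chroma({0, 2, 4, 5, 7, 9, 11})
candidates = {"F": {5, 9, 0}, "G": {7, 11, 2}, "Am": {9, 0, 4}, "F#dim": {6, 9, 0}}
scores = {name: objective(pcs, {0, 4, 7}, key_c_major) for name, pcs in candidates.items()}
print(sorted(scores.items(), key=lambda kv: kv[1]))  # best candidates first
```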

2019

MixMash

Authors
Maçãs, C; Rodrigues, A; Bernardes, G; Machado, P;

Publication
International Journal of Art, Culture and Design Technologies

Abstract
This article presents MixMash, an interactive tool that streamlines the process of music mashup creation by assisting users in finding compatible music from a large collection of audio tracks. It extends the harmonic mixing method by Bernardes, Davies, and Guedes with novel degrees of harmonic, rhythmic, spectral, and timbral similarity metrics. Furthermore, it revises and improves some interface design limitations identified in the former model's software implementation. A new user interface design based on cross-modal associations between musical content analysis and information visualisation is presented. In this graphic model, all tracks are represented as nodes whose distances and edge connections display their harmonic compatibility, as a result of a force-directed graph. In addition, a visual language is defined to enhance the tool's usability and foster creative endeavour in the search for meaningful music mashups.
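
A short sketch of the visual model described above: tracks become graph nodes and a force-directed layout pulls compatible tracks together. The pairwise compatibility numbers below are made up; MixMash derives them from harmonic, rhythmic, spectral, and timbral similarity metrics.

```python
import networkx as nx

# Hypothetical pairwise compatibility scores in [0, 1] (higher = more mashable).
compatibility = {
    ("track_a", "track_b"): 0.9,
    ("track_a", "track_c"): 0.3,
    ("track_b", "track_c"): 0.6,
    ("track_c", "track_d"): 0.8,
}

G = nx.Graph()
for (u, v), score in compatibility.items():
    # spring_layout treats 'weight' as attraction, so compatible tracks cluster.
    G.add_edge(u, v, weight=score)

positions = nx.spring_layout(G, weight="weight", seed=42)
for node, (x, y) in positions.items():
    print(f"{node}: ({x:+.2f}, {y:+.2f})")
```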

2020

Objective Evaluation of Tonal Fitness for Chord Progressions Using the Tonal Interval Space

Authors
Cáceres, MN; Caetano, MF; Bernardes, G;

Publication
Artificial Intelligence in Music, Sound, Art and Design - 9th International Conference, EvoMUSART 2020, Held as Part of EvoStar 2020, Seville, Spain, April 15-17, 2020, Proceedings

Abstract
Chord progressions are core elements of Western tonal harmony regulated by multiple theoretical and perceptual principles. Ideally, objective measures to evaluate chord progressions should reflect their tonal fitness. In this work, we propose an objective measure of the fitness of a chord progression within the Western tonal context, computed in the Tonal Interval Space, where distances capture tonal music principles. The measure considers four parameters, namely tonal pitch distance, consonance, hierarchical tension, and voice leading between the chords in the progression. We performed a listening test to perceptually assess the proposed tonal fitness measure across different chord progressions and compared the results with existing related models. The perceptual rating results show that our objective measure improves the estimation of a chord progression's tonal fitness in comparison with existing models.
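
A minimal sketch of how the four parameters named in the abstract could be aggregated into a single fitness score for a progression. The component functions and weights are placeholders passed in by the caller; the paper computes the actual terms in the Tonal Interval Space.

```python
from typing import Callable, Sequence

Chord = frozenset  # a chord as a set of pitch classes, e.g. frozenset({0, 4, 7})

def progression_fitness(
    progression: Sequence[Chord],
    pitch_distance: Callable[[Chord, Chord], float],
    consonance: Callable[[Chord], float],
    tension: Callable[[Chord], float],
    voice_leading: Callable[[Chord, Chord], float],
    weights=(1.0, 1.0, 1.0, 1.0),
) -> float:
    """Lower score = better tonal fitness under this toy aggregation."""
    w_d, w_c, w_t, w_v = weights
    score = 0.0
    for prev, curr in zip(progression, progression[1:]):
        score += w_d * pitch_distance(prev, curr)   # tonal pitch distance between adjacent chords
        score += w_v * voice_leading(prev, curr)    # smoothness of voice motion
    for chord in progression:
        score += w_c * (1.0 - consonance(chord))    # reward consonant sonorities
        score += w_t * tension(chord)               # hierarchical tension to the key
    return score
```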

2019

Dynamic Music Generation, Audio Analysis-Synthesis Methods

Authors
Bernardes, G; Cocharro, D;

Publication
Encyclopedia of Computer Graphics and Games

Abstract

2020

Physics-based Concatenative Sound Synthesis of Photogrammetric models for Aural and Haptic Feedback in Virtual Environments

Authors
Magalhaes, E; Jacob, J; Nilsson, N; Nordahl, R; Bernardes, G;

Publication
2020 IEEE Conference on Virtual Reality and 3D User Interfaces Workshops (VRW 2020)

Abstract
We present a novel physics-based concatenative sound synthesis (CSS) methodology for congruent interactions across physical, graphical, aural, and haptic modalities in virtual environments. Navigation in aural and haptic corpora of annotated audio units is driven by user interactions with highly realistic photogrammetry-based models in a game engine, where automated and interactive positional, physics, and graphics data are supported. From a technical perspective, the current contribution expands existing CSS frameworks by avoiding the mapping or mining of annotation data to real-time performance attributes, while guaranteeing degrees of novelty and variation for the same gesture.
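
A hedged sketch of the unit-selection idea: a collision reported by the physics engine (here reduced to impact velocity and surface roughness) is matched against a corpus of annotated audio units by nearest-neighbour lookup. The feature set, field names, and file names are assumptions for illustration only.

```python
import numpy as np

# Corpus of annotated units: (impact_velocity, roughness) -> sound file.
corpus_features = np.array([
    [0.5, 0.1],   # soft tap on a smooth surface
    [2.0, 0.1],   # hard hit on a smooth surface
    [0.5, 0.8],   # soft tap on a rough surface
    [2.0, 0.8],   # hard hit on a rough surface
])
corpus_files = ["tap_smooth.wav", "hit_smooth.wav", "tap_rough.wav", "hit_rough.wav"]

def select_unit(impact_velocity: float, roughness: float) -> str:
    """Pick the annotated unit closest to the live physics/graphics data."""
    query = np.array([impact_velocity, roughness])
    distances = np.linalg.norm(corpus_features - query, axis=1)
    return corpus_files[int(np.argmin(distances))]

# A collision event streamed from the game engine at runtime (hypothetical values).
print(select_unit(impact_velocity=1.8, roughness=0.75))  # -> "hit_rough.wav"
```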
