2016
Authors
Bernardes, G; Cocharro, D; Caetano, M; Guedes, C; Davies, MEP;
Publication
JOURNAL OF NEW MUSIC RESEARCH
Abstract
In this paper we present a 12-dimensional tonal space in the context of the Tonnetz, Chew's Spiral Array, and Harte's 6-dimensional Tonal Centroid Space. The proposed Tonal Interval Space is calculated as the weighted Discrete Fourier Transform of normalized 12-element chroma vectors, which we represent as six circles covering the set of all possible pitch intervals in the chroma space. By weighting the contribution of each circle (and hence pitch interval) independently, we can create a space in which angular and Euclidean distances among pitches, chords, and regions concur with music theory principles. Furthermore, the Euclidean distance of pitch configurations from the centre of the space acts as an indicator of consonance.
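The weighted-DFT construction described above can be sketched as follows. The weights here are illustrative placeholders, not the calibrated values from the paper, and the function name is hypothetical; the sketch only shows the mechanics of keeping six weighted DFT coefficients of a normalized chroma vector and reading the vector norm as a consonance indicator:

```python
import numpy as np

def tonal_interval_vector(chroma, weights=(2, 11, 17, 16, 19, 7)):
    """Weighted DFT of a normalized 12-element chroma vector.
    The weights are illustrative placeholders, not the values
    calibrated in the paper; each weight scales one interval circle."""
    c = np.asarray(chroma, dtype=float)
    c_bar = c / c.sum()                      # normalize the chroma vector
    n = np.arange(12)
    # Keep DFT coefficients k = 1..6: six circles, i.e. 12 real dimensions.
    return np.array([w * np.sum(c_bar * np.exp(-2j * np.pi * k * n / 12))
                     for k, w in enumerate(weights, start=1)])

# The Euclidean distance from the centre of the space indicates consonance:
cmaj_triad = [1,0,0,0,1,0,0,1,0,0,0,0]      # C, E, G
chromatic  = [1]*12                          # all twelve pitch classes
print(np.linalg.norm(tonal_interval_vector(cmaj_triad)))   # well above zero
print(np.linalg.norm(tonal_interval_vector(chromatic)))    # ~0: uniform chroma cancels
```

The uniform chromatic vector maps to the centre of the space (all six coefficients vanish), which is consistent with treating distance from the centre as a consonance indicator.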
2016
Authors
Bernardes, G; Cocharro, D; Guedes, C; Davies, MEP;
Publication
Music, Mind, and Embodiment
Abstract
We present Conchord, a system for real-time automatic generation of musical harmony through navigation in a novel 12-dimensional Tonal Interval Space. In this tonal space, angular and Euclidean distances among vectors representing multi-level pitch configurations equate with music theory principles, and vector norms act as indicators of consonance. Building upon these attributes, users can intuitively and dynamically define a collection of chords based on their relation to a tonal center (or key) and their consonance level. Furthermore, two algorithmic strategies grounded in principles from function and root-motion harmonic theories allow the generation of chord progressions characteristic of Western tonal music.
2016
Authors
Bernardes, G; Cocharro, D; Guedes, C; Davies, MEP;
Publication
COMPUTERS IN ENTERTAINMENT
Abstract
We present D'accord, a generative music system for creating harmonically compatible accompaniments of symbolic and musical audio inputs with any number of voices, instrumentation, and complexity. The main novelty of our approach centers on offering multiple ranked solutions between a database of pitch configurations and a given musical input based on tonal pitch relatedness and consonance indicators computed in a perceptually motivated Tonal Interval Space. Furthermore, we detail a method to estimate the key of symbolic and musical audio inputs based on attributes of the space, which underpins the generation of key-related pitch configurations. The system is controlled via an adaptive interface implemented for Ableton Live, MAX, and Pure Data, which facilitates music creation for users regardless of music expertise and simultaneously serves as a performance, entertainment, and learning tool. We perform a threefold evaluation of D'accord, which assesses the level of accuracy of our key-finding algorithm, the user enjoyment of generated harmonic accompaniments, and the usability and learnability of the system.
2017
Authors
Bernardes, G; Davies, MEP; Guedes, C;
Publication
2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Abstract
In this paper we present the INESC Key Detection (IKD) system which incorporates a novel method for dynamically biasing key mode estimation using the spatial displacement of beat-synchronous Tonal Interval Vectors (TIVs). We evaluate the performance of the IKD system at finding the global key on three annotated audio datasets and using three key-defining profiles. Results demonstrate the effectiveness of the mode bias in favoring either the major or minor mode, thus allowing users to fine tune this variable to improve correct key estimates on style-specific music datasets or to balance predictions across key modes on unknown input sources.
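The mode-bias idea can be illustrated with the classic profile-matching baseline that key-defining profiles extend. The sketch below correlates a chroma vector with rotated Krumhansl–Kessler profiles and adds a scalar bias to major-key scores; this is a simplified stand-in, since the actual IKD system derives the bias from the spatial displacement of beat-synchronous Tonal Interval Vectors, which is not reproduced here:

```python
import numpy as np

# Krumhansl-Kessler key profiles (standard published values).
MAJOR = np.array([6.35,2.23,3.48,2.33,4.38,4.09,2.52,5.19,2.39,3.66,2.29,2.88])
MINOR = np.array([6.33,2.68,3.52,5.38,2.60,3.53,2.54,4.75,3.98,2.69,3.34,3.17])

def estimate_key(chroma, mode_bias=0.0):
    """Profile-matching key estimate with a scalar bias added to major-key
    scores (an illustrative stand-in for the IKD mode bias)."""
    chroma = np.asarray(chroma, dtype=float)
    best_score, best_key = -np.inf, None
    for tonic in range(12):
        for profile, mode, bias in ((MAJOR, 'major', mode_bias),
                                    (MINOR, 'minor', 0.0)):
            # Rotate the profile so index `tonic` becomes the tonic pitch class.
            score = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1] + bias
            if score > best_score:
                best_score, best_key = score, (tonic, mode)
    return best_key

c_major_scale = [1,0,1,0,1,1,0,1,0,1,0,1]
print(estimate_key(c_major_scale))              # unbiased estimate
print(estimate_key(c_major_scale, mode_bias=-10.0))  # bias strongly toward minor
```

Pushing the bias negative forces the relative-minor reading of the same pitch-class set, which mirrors how the IKD mode bias lets users tilt predictions toward either mode on style-specific datasets.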
2013
Authors
Bernardes, G; Guedes, C; Pennycook, B;
Publication
FROM SOUNDS TO MUSIC AND EMOTIONS
Abstract
This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (corpus) by rules other than their original temporal order into musically coherent outputs. Of note are the system's machine-learning capabilities as well as its visualization strategies, which constitute a valuable aid for decision-making during performance by revealing musical patterns and temporal organizations of the corpus.
2014
Authors
Sioros, G; Guedes, C;
Publication
SOUND, MUSIC, AND MOTION
Abstract
Syncopation is a rhythmic phenomenon present in various musical styles and cultures. We present here a set of simple rhythmic transformations that can serve as a formalized model for syncopation. The transformations are based on fundamental features of the musical meter and syncopation, as seen from a cognitive and a musical perspective. Based on this model, rhythmic patterns can be organized in tree structures where patterns are interconnected through simple transformations. A Max4Live device is presented as a creative application of the model: it manipulates the syncopation of MIDI clips by automatically de-syncopating and syncopating the MIDI notes.
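A minimal sketch of one such transformation follows: de-syncopation on a 16-step grid, assuming a single set of strong-beat positions rather than the full metrical hierarchy of the model. Onsets that anticipate a silent strong beat are delayed onto that beat:

```python
def desyncopate(pattern, strong=(0, 4, 8, 12)):
    """Move each onset that anticipates a silent strong beat onto that beat.
    A simplified sketch: the published model works with full metrical weight
    hierarchies, not a single strong-beat set."""
    out = list(pattern)
    n = len(out)
    for s in strong:
        if out[s % n] == 0:
            # Walk backwards to the nearest preceding onset on a weak position.
            j = (s - 1) % n
            while j not in strong and out[j] == 0:
                j = (j - 1) % n
            if j not in strong and out[j] == 1:
                out[j], out[s % n] = 0, 1   # delay the onset onto the strong beat
    return out

syncopated = [0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0]  # onsets anticipate each beat
print(desyncopate(syncopated))  # onsets land on the strong beats
```

Applying the inverse move (shifting on-beat onsets to earlier weak positions) syncopates a pattern, which is how such a device can travel both directions along the transformation tree.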