2009
Authors
Stark, AM; Davies, MEP; Plumbley, MD;
Publication
Proceedings of the 12th International Conference on Digital Audio Effects, DAFx 2009
Abstract
In this paper we present a model for beat-synchronous analysis of musical audio signals. Introducing a real-time beat tracking model with performance comparable to offline techniques, we discuss its application to the analysis of musical performances segmented by beat. We discuss the various design choices for beat-synchronous analysis and their implications for real-time implementations before presenting some beat-synchronous harmonic analysis examples. We make available our beat tracker and beat-synchronous analysis techniques as externals for Max/MSP.
2008
Authors
Hockman, JA; Bello, JP; Davies, MEP; Plumbley, MD;
Publication
Proceedings - 11th International Conference on Digital Audio Effects, DAFx 2008
Abstract
Time-scale transformations of audio signals have traditionally relied exclusively upon manipulations of tempo. We present a novel technique for automatic mixing and synchronization between two musical signals. In this transformation, the original signal assumes the tempo, meter, and rhythmic structure of the model signal, while the extracted downbeats and salient intra-measure infrastructure of the original are maintained.
2008
Authors
Jacobson, K; Davies, M; Sandler, M;
Publication
Audio Engineering Society - 125th Audio Engineering Society Convention 2008
Abstract
In large audio collections, it is common to store audio content with perceptual encoding. However, encoding parameters may vary from collection to collection, or even within a collection, using different bit rates, sample rates, codecs, etc. We evaluate the effect of various audio encodings on the onset detection task. We show that audio-based onset detection methods are surprisingly robust in the presence of MP3-encoded audio. Statistically significant changes in onset detection accuracy only occur at bit rates lower than 32 kbps.
2020
Authors
Ramires, A; Bernardes, G; Davies, MEP; Serra, X;
Publication
CoRR
Abstract
In this paper, we present TIV.lib, an open-source library for the content-based tonal description of musical audio signals. Its main novelty lies in the perceptually-inspired Tonal Interval Vector space, based on the Discrete Fourier Transform, from which multiple instantaneous and global representations, descriptors and metrics are computed, e.g. harmonic change, dissonance, diatonicity, and musical key. The library is cross-platform, implemented in Python and the graphical programming language Pure Data, and can be used in both online and offline scenarios. Of note is its potential for enhanced Music Information Retrieval, where tonal descriptors sit at the core of numerous methods and applications.
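The core idea behind the Tonal Interval Vector space can be illustrated independently of TIV.lib itself: a 12-bin chroma vector (pitch class profile) is normalized and transformed with the DFT, and the first six non-DC complex coefficients form the tonal descriptor. The sketch below is not TIV.lib's actual API; the function name is hypothetical, and TIV.lib additionally applies perceptual weights to the coefficients, which are omitted here.

```python
import numpy as np

def tonal_interval_vector(chroma):
    """Illustrative (unweighted) Tonal Interval Vector from a 12-bin chroma."""
    chroma = np.asarray(chroma, dtype=float)
    # Normalize the pitch class profile so its bins sum to 1
    pcp = chroma / chroma.sum()
    # Real-input DFT of the 12-bin profile yields 7 coefficients (k = 0..6);
    # coefficients k = 1..6 carry the tonal information used by the TIV space.
    # (TIV.lib further scales each coefficient by a perceptual weight.)
    return np.fft.rfft(pcp)[1:7]

# Example: chroma of a C major triad (C, E, G active)
c_major = np.zeros(12)
c_major[[0, 4, 7]] = 1.0
tiv = tonal_interval_vector(c_major)
```

Descriptors such as dissonance or harmonic change can then be expressed as norms of, or distances between, such vectors for successive analysis frames.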
2018
Authors
Ramires, A; Cocharro, D; Davies, MEP;
Publication
CoRR
Abstract
2018
Authors
Ramires, A; Penha, R; Davies, MEP;
Publication
CoRR
Abstract