2007
Authors
Davies, MEP; Plumbley, MD;
Publication
IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING
Abstract
We present a simple and efficient method for beat tracking of musical audio. With the aim of replicating the human ability to tap in time to music, we formulate our approach using a two-state model. The first state performs tempo induction and tracks tempo changes, while the second maintains contextual continuity within a single tempo hypothesis. Beat times are recovered by passing the output of an onset detection function through adaptively weighted comb filterbank matrices to separately identify the beat period and alignment. We evaluate our beat tracker in terms of both the accuracy of estimated beat locations and computational complexity. In a direct comparison with existing algorithms, we demonstrate equivalent performance at significantly reduced computational cost.
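The sketch below illustrates the general comb-filterbank idea for recovering a beat period from an onset detection function. It is not the authors' implementation: the autocorrelation front end, the search range, and the synthetic example are my own illustrative assumptions.

```python
import numpy as np

def estimate_beat_period(odf, min_period=20, max_period=120, n_harmonics=4):
    """Estimate a beat period (in ODF frames) by summing the autocorrelation
    of an onset detection function at multiples of each candidate period
    (a crude stand-in for an adaptively weighted comb filterbank)."""
    odf = odf - odf.mean()
    acf = np.correlate(odf, odf, mode="full")[len(odf) - 1:]  # lags 0..N-1
    scores = {}
    for period in range(min_period, max_period + 1):
        harmonics = [acf[h * period] for h in range(1, n_harmonics + 1)
                     if h * period < len(acf)]
        scores[period] = np.mean(harmonics) if harmonics else 0.0
    return max(scores, key=scores.get)

# Example: synthetic ODF with impulses every 43 frames
odf = np.zeros(2000)
odf[::43] = 1.0
print(estimate_beat_period(odf))  # expected to be close to 43
```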
2011
Authors
Davies, MEP; Degara, N; Plumbley, MD;
Publication
IEEE SIGNAL PROCESSING LETTERS
Abstract
We present a new evaluation method for measuring the performance of musical audio beat tracking systems. Central to our method is a novel visualization, the beat error histogram, which illustrates the metrical relationship between two quasi-periodic sequences of time instants: the output of a beat tracking system and a set of ground truth annotations. To quantify beat tracking performance we derive an information theoretic statistic from the histogram. Results indicate that our method is able to measure performance with greater precision than existing evaluation methods and to implicitly cater for metrical ambiguity in tapping sequences.
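As a rough illustration of the beat error histogram and its information-theoretic summary, the sketch below normalises each beat error by the local inter-annotation interval, folds it into [-0.5, 0.5), and reports log2(number of bins) minus the histogram entropy. The bin count, folding convention, and nearest-annotation matching rule are simplifying assumptions rather than the paper's exact definitions.

```python
import numpy as np

def beat_error_histogram(beats, annotations, n_bins=40):
    """Histogram of beat errors, each normalised by the local
    inter-annotation interval and folded into [-0.5, 0.5)."""
    errors = []
    for b in beats:
        i = np.argmin(np.abs(annotations - b))                # nearest annotation
        local = np.diff(annotations)[min(i, len(annotations) - 2)]
        e = (b - annotations[i]) / local                      # error in beat fractions
        errors.append((e + 0.5) % 1.0 - 0.5)                  # fold to [-0.5, 0.5)
    hist, _ = np.histogram(errors, bins=n_bins, range=(-0.5, 0.5))
    return hist

def information_gain(hist):
    """Entropy-based score: a peaked (low-entropy) histogram gives a high score."""
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.log2(len(hist)) - entropy

annotations = np.arange(0.0, 30.0, 0.5)                       # ground truth every 0.5 s
beats = annotations + 0.01 * np.random.randn(len(annotations))  # slightly noisy taps
print(information_gain(beat_error_histogram(beats, annotations)))  # near log2(40)
```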
2011
Authors
Degara, N; Davies, MEP; Pena, A; Plumbley, MD;
Publication
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING
Abstract
In this paper, we propose a rhythmically informed method for onset detection in polyphonic music. Music is highly structured in terms of the temporal regularity underlying onset occurrences and this rhythmic structure can be used to locate sound events. Using a probabilistic formulation, the method integrates information extracted from the audio signal and rhythmic knowledge derived from tempo estimates in order to exploit the temporal expectations associated with rhythm and make musically meaningful event detections. To do so, the system explicitly models note events in terms of the elapsed time between consecutive events and decodes the most likely sequence of onsets that led to the observed audio signal. In this way, the proposed method is able to identify likely time instants for onsets and to successfully exploit the temporal regularity of music. The goal of this work is to define a general framework to be used in combination with any onset detection function and tempo estimator. The method is evaluated using a dataset of music that contains multiple instruments playing at the same time, including singing and different music genres. Results show that the use of rhythmic information improves upon the commonly used adaptive thresholding onset detection method, which considers only local information. It is also shown that the proposed probabilistic framework successfully exploits rhythmic information using different detection functions and tempo estimation algorithms.
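A heavily simplified sketch of decoding onsets jointly from a detection function and a tempo estimate: a dynamic program that rewards frames with strong detection-function values whose spacing is close to an assumed inter-onset period. The scoring terms, the squared gap penalty, and the single-period assumption are mine, not the probabilistic model used in the paper.

```python
import numpy as np

def decode_onsets(odf, period, sigma=3.0, reward=1.0):
    """Pick a sequence of onset frames that score highly on the detection
    function and whose inter-onset gaps stay close to the assumed period."""
    n = len(odf)
    best = np.full(n, -np.inf)
    prev = np.full(n, -1, dtype=int)
    for t in range(n):
        best[t] = odf[t]                                  # option: first onset at t
        for s in range(max(0, t - 2 * period), t):
            gap_penalty = ((t - s - period) / sigma) ** 2
            score = best[s] + odf[t] + reward - gap_penalty
            if score > best[t]:
                best[t], prev[t] = score, s
    onsets, t = [], int(np.argmax(best))                  # backtrack from best end
    while t >= 0:
        onsets.append(t)
        t = prev[t]
    return onsets[::-1]

# Example: noisy detection function with events every 40 frames
rng = np.random.default_rng(0)
odf = 0.1 * rng.random(400)
odf[::40] += 1.0
print(decode_onsets(odf, period=40))
```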
2012
Authors
Holzapfel, A; Davies, MEP; Zapata, JR; Oliveira, JL; Gouyon, F;
Publication
2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Abstract
In this paper, an approach is presented that identifies music samples that are difficult for current state-of-the-art beat trackers. In order to estimate this difficulty even for examples without ground truth, a method motivated by selective sampling is applied. This method assigns a degree of difficulty to a sample based on the mutual disagreement between the outputs of various beat tracking systems. On a large beat-annotated dataset we show that this mutual agreement is correlated with the mean performance of the beat trackers evaluated against the ground truth, and hence can be used to identify difficult examples by predicting poor beat tracking performance. Towards the aim of advancing future beat tracking systems, we demonstrate how our method can be used to form new datasets containing a high proportion of challenging music examples.
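The mutual-agreement idea can be sketched as follows: run a committee of beat trackers on the same excerpt, score every pair of outputs with some beat agreement measure, and take the mean; low mean agreement flags a potentially difficult example. The F-measure with a fixed +/- 70 ms window below is just one possible agreement score, chosen for illustration.

```python
import numpy as np
from itertools import combinations

def f_measure(beats_a, beats_b, tol=0.07):
    """Agreement between two beat sequences: F-measure with a +/- tol second
    matching window (greedy one-to-one matching, kept deliberately simple)."""
    matched, used = 0, set()
    for b in beats_a:
        hits = [j for j, c in enumerate(beats_b)
                if j not in used and abs(c - b) <= tol]
        if hits:
            used.add(hits[0])
            matched += 1
    if matched == 0:
        return 0.0
    precision = matched / len(beats_a)
    recall = matched / len(beats_b)
    return 2 * precision * recall / (precision + recall)

def mean_mutual_agreement(committee_outputs):
    """Mean pairwise agreement over all tracker pairs; low values flag
    examples the committee finds difficult."""
    scores = [f_measure(a, b) for a, b in combinations(committee_outputs, 2)]
    return float(np.mean(scores)) if scores else 0.0

# Example: three hypothetical tracker outputs for the same excerpt
outputs = [np.arange(0.0, 10.0, 0.5),
           np.arange(0.02, 10.0, 0.5),   # close to the first tracker
           np.arange(0.0, 10.0, 0.75)]   # disagrees on tempo
print(mean_mutual_agreement(outputs))
```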
2010
Authors
Degara, N; Pena, A; Davies, MEP; Plumbley, MD;
Publication
2010 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING
Abstract
In this paper, we explore the relationship between the temporal and rhythmic structure of musical audio signals. Using automatically extracted rhythmic structure we present a rhythmically aware method to combine note onset detection techniques. Our method uses top-down knowledge of repetitions of musical events to improve detection performance by modelling the temporal distribution of onset locations. Results on a publicly available database demonstrate that using musical knowledge in this way can lead to significant improvements by reducing the number of missed and spurious detections.
2007
Authors
Davies, MEP; Plumbley, MD;
Publication
2007 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol IV, Pts 1-3
Abstract
Despite continued attention toward the problem of automatic beat detection in musical audio, the issue of how to evaluate beat tracking systems remains pertinent and controversial. As yet, no consistent evaluation metric has been adopted by the research community. To this end, we propose a new method for beat tracking evaluation by measuring beat accuracy in terms of the entropy of a beat error histogram. We demonstrate the ability of our approach to address several shortcomings of existing methods.