2006
Authors
Soares, C; Brazdil, PB;
Publication
Proceedings of the ACM Symposium on Applied Computing
Abstract
The Support Vector Machine (SVM) algorithm is sensitive to the choice of parameter settings, which makes it hard for non-experts to use. It has been shown that meta-learning can be used to support the selection of SVM parameter values. Previous approaches have used general statistical measures as meta-features. Here we propose a new set of meta-features that are based on the kernel matrix. We test them on the problem of setting the width of the Gaussian kernel for regression problems. We obtain significant improvements in comparison to earlier meta-learning results. We expect that, with better support in the selection of parameter values, SVM will become accessible to a wider range of users. Copyright 2006 ACM.
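As an illustration of the kind of kernel-matrix-based meta-feature this abstract refers to, the sketch below builds a Gaussian (RBF) kernel matrix and summarizes its off-diagonal entries. The specific statistics (mean, variance, skewness) and the function name are illustrative assumptions, not the exact feature set proposed in the paper.

```python
# Hypothetical sketch: meta-features derived from the Gaussian (RBF) kernel matrix.
# The chosen statistics are illustrative, not the paper's exact feature set.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import skew

def kernel_matrix_meta_features(X, sigma):
    """Compute simple statistics of the RBF kernel matrix K(x_i, x_j)."""
    sq_dists = squareform(pdist(X, "sqeuclidean"))
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    off_diag = K[~np.eye(len(X), dtype=bool)]  # exclude the diagonal (always 1)
    return {"mean": off_diag.mean(), "var": off_diag.var(), "skew": skew(off_diag)}
```

Such statistics could be computed for several candidate values of sigma and fed to a meta-learner alongside the dataset's other characteristics.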
2001
Authors
Soares, C; Petrak, J; Brazdil, P;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
When facing the need to select the most appropriate algorithm to apply on a new data set, data analysts often follow an approach which can be related to test-driving cars to decide which one to buy: apply the algorithms on a sample of the data to quickly obtain rough estimates of their performance. These estimates are used to select one or a few of those algorithms to be tried out on the full data set. We describe sampling-based landmarks (SL), a systematization of this approach, building on earlier work on landmarking and sampling. SL are estimates of the performance of algorithms on a small sample of the data that are used as predictors of the performance of those algorithms on the full set. We also describe relative landmarks (RL), which address the inability of earlier landmarks to assess the relative performance of algorithms. RL aggregate landmarks to obtain predictors of relative performance. Our experiments indicate that the combination of these two improvements, which we call Sampling-based Relative Landmarks, is better for ranking than traditional data characterization measures. © Springer-Verlag Berlin Heidelberg 2001.
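A minimal sketch of the two ideas described above, assuming scikit-learn-style estimators; the 10% sample fraction, the internal holdout split, and the win/loss encoding of relative landmarks are illustrative assumptions rather than the exact experimental setup.

```python
# Hypothetical sketch of sampling-based landmarks (SL) and relative landmarks (RL).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def sampling_landmarks(estimators, X, y, sample_frac=0.1, random_state=0):
    """Estimate each algorithm's accuracy on a small random sample of the data."""
    X_s, _, y_s, _ = train_test_split(X, y, train_size=sample_frac,
                                      stratify=y, random_state=random_state)
    X_tr, X_te, y_tr, y_te = train_test_split(X_s, y_s, test_size=0.3,
                                              random_state=random_state)
    return {name: accuracy_score(y_te, est.fit(X_tr, y_tr).predict(X_te))
            for name, est in estimators.items()}

def relative_landmarks(landmarks):
    """Encode pairwise wins between algorithms as predictors of relative performance."""
    return {(a, b): float(landmarks[a] > landmarks[b])
            for a in landmarks for b in landmarks if a != b}
```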
2002
Authors
Soares, C; Brazdil, P;
Publication
ADVANCES IN ARTIFICIAL INTELLIGENCE - IBERAMIA 2002, PROCEEDINGS
Abstract
Cross-validation (CV) is the most accurate method available for algorithm recommendation, but it is rather slow. We show that information about the past performance of algorithms can be used for the same purpose with a small loss in accuracy and significant savings in experimentation time. We use a meta-learning framework that combines a simple IBL algorithm with a ranking method. We show that results improve significantly by using a set of selected measures that represent data characteristics which help predict algorithm performance. Our results also indicate that the choice of ranking method has a smaller effect on the quality of recommendations. Finally, we present situations that illustrate the advantage of providing the recommendation as a ranking of the candidate algorithms, rather than as the single algorithm which is expected to perform best.
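A minimal sketch of an IBL (k-nearest-neighbour) ranking step of the kind described here, assuming a stored table of meta-features and algorithm ranks for past datasets; the use of Euclidean distance and average ranks is an illustrative assumption, not necessarily the paper's exact configuration.

```python
# Hypothetical sketch: rank algorithms for a new dataset from the ranks they
# obtained on the k most similar past datasets (IBL / k-NN meta-learning).
import numpy as np

def recommend_ranking(meta_features, past_meta_features, past_ranks, k=3):
    """meta_features: vector for the new dataset; past_meta_features: matrix
    (datasets x features); past_ranks: matrix (datasets x algorithms)."""
    dists = np.linalg.norm(past_meta_features - meta_features, axis=1)
    neighbours = np.argsort(dists)[:k]
    avg_ranks = past_ranks[neighbours].mean(axis=0)  # average rank per algorithm
    return np.argsort(avg_ranks)                     # algorithm indices, best first
```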
2004
Authors
Soares, C; Brazdil, PB; Kuba, P;
Publication
MACHINE LEARNING
Abstract
The Support Vector Machine algorithm is sensitive to the choice of parameter settings. If these are not set correctly, the algorithm may have substandard performance. Suggesting a good setting is thus an important problem. We propose a meta-learning methodology for this purpose and exploit information about the past performance of different settings. The methodology is applied to set the width of the Gaussian kernel. We carry out an extensive empirical evaluation, including comparisons with other methods (a fixed default ranking, selection based on cross-validation, and a heuristic method commonly used to set the width of the SVM kernel). We show that our methodology can select settings with low error while providing significant savings in time. Further work should be carried out to see how the methodology could be adapted to different parameter setting tasks.
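For context on the heuristic baseline mentioned above, one widely used distance-based rule sets the Gaussian kernel width from the distribution of pairwise distances in the training data; the sketch below uses the median distance, which may not be the exact heuristic evaluated in the paper.

```python
# Illustrative sketch only: a common distance-based heuristic for the kernel width.
# This is an assumption for illustration, not necessarily the paper's baseline.
import numpy as np
from scipy.spatial.distance import pdist

def median_distance_sigma(X):
    """Set the Gaussian kernel width to the median Euclidean distance between points."""
    return np.median(pdist(X, "euclidean"))
```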
2001
Authors
Brazdil, P; Soares, C; Pereira, R;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Several methods have been proposed to generate rankings of supervised classification algorithms based on their previous performance on other datasets [8,4]. Like any other prediction method, ranking methods will sometimes err; for instance, they may not rank the best algorithm in the first position. Often the user is willing to try more than one algorithm to increase the possibility of identifying the best one. The information provided by the ranking methods mentioned is not quite adequate for this purpose. That is, they do not identify those algorithms in the ranking that have a reasonable possibility of performing best. In this paper, we describe a method for that purpose. We compare our method to the strategy of executing all algorithms and to a very simple reduction method, consisting of running the top three algorithms. In all this work we take time as well as accuracy into account. As expected, our method performs better than the simple reduction method and shows a more stable behavior than running all algorithms. © Springer-Verlag Berlin Heidelberg 2001.
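For reference, the simple reduction baseline mentioned in the abstract can be sketched as below: cross-validate only the top-N algorithms of a recommended ranking and keep the winner. The value N=3, the scoring setup, and the function name are illustrative assumptions.

```python
# Hypothetical sketch of the "run the top-N of the ranking" baseline strategy.
from sklearn.model_selection import cross_val_score

def best_of_top_n(ranking, estimators, X, y, n=3, cv=10):
    """ranking: list of algorithm names, best first; estimators: name -> estimator.
    Cross-validates the n highest-ranked algorithms and returns the winner's name."""
    candidates = ranking[:n]
    scores = {name: cross_val_score(estimators[name], X, y, cv=cv).mean()
              for name in candidates}
    return max(scores, key=scores.get)
```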
2009
Authors
Carrier, CGG; Brazdil, P; Soares, C; Vilalta, R;
Publication
Encyclopedia of Data Warehousing and Mining, Second Edition (4 Volumes)
Abstract