
Publications by Carlos Manuel Soares

2011

Uncertainty Sampling-Based Active Selection of Datasetoids for Meta-learning

Authors
Prudencio, RBC; Soares, C; Ludermir, TB;

Publication
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2011, PT II

Abstract
Several meta-learning approaches have been developed for the problem of algorithm selection. In this context, it is of central importance to collect a sufficient number of datasets to be used as meta-examples in order to provide reliable results. Recently, some proposals to generate datasets have addressed this issue with successful results. These proposals include datasetoids, a simple manipulation method to obtain new datasets from existing ones. However, the increase in the number of datasets raises another issue: in order to generate meta-examples for training, it is necessary to estimate the performance of the algorithms on the datasets. This typically requires running all candidate algorithms on all datasets, which is computationally very expensive. One way to address this problem is to apply active learning to meta-learning, termed active meta-learning. In this paper we investigate the combined use of an active meta-learning approach based on an uncertainty score and datasetoids. Based on our results, we conclude that our method achieves very good accuracy with as little as 10% to 20% of the meta-examples labeled.
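The active meta-learning loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the distance-based uncertainty score, the `label_fn` stand-in for running all candidate algorithms on a dataset, and all function names are assumptions.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def uncertainty(candidate, labeled, k=3):
    """Distance-based uncertainty: a candidate whose meta-features lie far
    from all labeled meta-examples is one the k-NN meta-learner is least
    sure about."""
    if not labeled:
        return float("inf")
    dists = sorted(euclidean(candidate, feats) for feats, _ in labeled)
    return sum(dists[:k]) / min(k, len(dists))

def active_meta_learning(pool, labeled, budget, label_fn, k=3):
    """Greedily label the most uncertain candidates until the budget
    (e.g. 10-20% of the pool) is spent. `label_fn` stands in for the
    expensive step of evaluating all candidate algorithms on a dataset."""
    pool = list(pool)
    for _ in range(budget):
        if not pool:
            break
        best = max(pool, key=lambda c: uncertainty(c, labeled, k))
        pool.remove(best)
        labeled.append((best, label_fn(best)))
    return labeled
```

With a small labeled seed set, the loop first picks the candidate farthest from everything already labeled, which is the intuition behind labeling only a fraction of the meta-examples.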

2010

Inductive Transfer

Authors
Utgoff, PE; Cussens, J; Kramer, S; Jain, S; Stephan, F; Raedt, LD; Todorovski, L; Flener, P; Schmid, U; Vilalta, R; Giraud-Carrier, C; Brazdil, P; Soares, C; Keogh, E; Smart, WD; Abbeel, P; Ng, AY;

Publication
Encyclopedia of Machine Learning

Abstract

2010

Metalearning

Authors
Fürnkranz, J; Chan, PK; Craw, S; Sammut, C; Uther, W; Ratnaparkhi, A; Jin, X; Han, J; Yang, Y; Morik, K; Dorigo, M; Birattari, M; Stützle, T; Brazdil, P; Vilalta, R; Giraud-Carrier, C; Soares, C; Rissanen, J; Baxter, RA; Bruha, I; Baxter, RA; Webb, GI; Torgo, L; Banerjee, A; Shan, H; Ray, S; Tadepalli, P; Shoham, Y; Powers, R; Shoham, Y; Powers, R; Webb, GI; Ray, S; Scott, S; Blockeel, H; De Raedt, L;

Publication
Encyclopedia of Machine Learning

Abstract

2002

Improved dataset characterisation for meta-learning

Authors
Peng, YH; Flach, PA; Soares, C; Brazdil, P;

Publication
DISCOVERY SCIENCE, PROCEEDINGS

Abstract
This paper presents new measures, based on the induced decision tree, to characterise datasets for meta-learning in order to select appropriate learning algorithms. The main idea is to capture the characteristics of a dataset from the structural shape and size of the decision tree induced from it. A total of 15 measures are proposed to describe the structure of a decision tree. Their effectiveness is illustrated through extensive experiments, comparing them to the results obtained with existing data characterisation techniques, including the data characteristics tool (DCT), the most widely used technique in meta-learning, and landmarking, the most recently developed method.

2006

Selecting parameters of SVM using meta-learning and kernel matrix-based meta-features

Authors
Soares, C; Brazdil, PB;

Publication
Proceedings of the ACM Symposium on Applied Computing

Abstract
The Support Vector Machine (SVM) algorithm is sensitive to the choice of parameter settings, which makes it hard to use by non-experts. It has been shown that meta-learning can be used to support the selection of SVM parameter values. Previous approaches have used general statistical measures as meta-features. Here we propose a new set of meta-features that are based on the kernel matrix. We test them on the problem of setting the width of the Gaussian kernel for regression problems. We obtain significant improvements in comparison to earlier meta-learning results. We expect that with better support in the selection of parameter values, SVM becomes accessible to a wider range of users.
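Kernel-matrix meta-features of the kind the abstract mentions can be sketched as below: build the Gaussian kernel matrix for a candidate width and summarise it with simple statistics. The choice of statistics here (mean and variance of off-diagonal entries) is an assumption for illustration; the paper's actual meta-feature set may differ.

```python
import math

def gaussian_kernel_matrix(X, sigma):
    """Gaussian (RBF) kernel: K[i][j] = exp(-||xi - xj||^2 / (2*sigma^2))."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [[math.exp(-sq_dist(a, b) / (2 * sigma ** 2)) for b in X] for a in X]

def kernel_meta_features(K):
    """Summarise the kernel matrix with statistics of its off-diagonal
    entries; these become meta-features for predicting a good width."""
    n = len(K)
    off = [K[i][j] for i in range(n) for j in range(n) if i != j]
    mean = sum(off) / len(off)
    var = sum((v - mean) ** 2 for v in off) / len(off)
    return {"mean": mean, "variance": var}
```

Intuitively, a width for which the off-diagonal entries collapse to 0 or saturate to 1 makes the kernel uninformative, and such statistics expose that to the meta-learner.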

2011

Uncertainty Sampling Methods for Selecting Datasets in Active Meta-Learning

Authors
Prudencio, RBC; Soares, C; Ludermir, TB;

Publication
2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)

Abstract
Several meta-learning approaches have been developed for the problem of algorithm selection. In this context, it is of central importance to collect a sufficient number of datasets to be used as meta-examples in order to provide reliable results. Recently, some proposals to generate datasets have addressed this issue with successful results. These proposals include datasetoids, a simple manipulation method to obtain new datasets from existing ones. However, the increase in the number of datasets raises another issue: in order to generate meta-examples for training, it is necessary to estimate the performance of the algorithms on the datasets. This typically requires running all candidate algorithms on all datasets, which is computationally very expensive. In a recent paper, active meta-learning has been used to address this problem. An uncertainty sampling method for the k-NN algorithm using a least confidence score based on a distance measure was employed. Here we extend that work, namely by investigating three hypotheses: 1) is there an advantage in using a frequency-based least confidence score over the distance-based score? 2) given that the meta-learning problem used has three classes, is it better to use a margin-based score? and 3) given that datasetoids are expected to contain some noise, are better results achieved by starting the search with all datasets already labeled? Some of the results obtained are unexpected and should be further analyzed. However, they confirm that active meta-learning can significantly reduce the computational cost of meta-learning with potential gains in accuracy.
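The two scores compared in hypotheses 1) and 2) can be sketched from the class frequencies among a candidate's k nearest labeled neighbours. This is a minimal illustration of the generic uncertainty-sampling scores, not the paper's exact formulation; the function names are assumptions.

```python
def knn_class_frequencies(neighbor_labels):
    """Relative class frequencies among the k nearest labeled neighbors."""
    counts = {}
    for label in neighbor_labels:
        counts[label] = counts.get(label, 0) + 1
    k = len(neighbor_labels)
    return {label: c / k for label, c in counts.items()}

def least_confidence(freqs):
    """Frequency-based least confidence: high when even the most frequent
    class among the neighbors is not dominant."""
    return 1.0 - max(freqs.values())

def margin_score(freqs):
    """Margin-based score: high when the two most frequent classes are
    close, which matters on problems with three or more classes."""
    top = sorted(freqs.values(), reverse=True)
    second = top[1] if len(top) > 1 else 0.0
    return 1.0 - (top[0] - second)  # small margin -> high uncertainty
```

With three classes, a 40/35/25 frequency split gives a modest least-confidence score but a high margin score, which is why the margin variant is worth testing on a three-class meta-learning problem.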
