2003
Authors
Castillo, G; Gama, J; Medas, P;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE
Abstract
Most supervised learning algorithms assume the stability of the target concept over time. Nevertheless, in many real user-modeling systems, where the data is collected over an extended period of time, the learning task can be complicated by changes in the distribution underlying the data. This problem is known in machine learning as concept drift. The main idea behind Statistical Quality Control is to monitor the stability of one or more quality characteristics in a production process that generally shows some variation over time. In this paper we present a method for handling concept drift based on Shewhart P-Charts in an on-line framework for supervised learning. We explore the use of two alternative P-Charts, which differ only in the way they estimate the target value used to set the center line. Experiments with simulated concept-drift scenarios in the context of a user-modeling prediction task compare the proposed method with other adaptive approaches. The results show that both P-Charts consistently recognize concept changes, and that the learner can adapt quickly to these changes to maintain its performance level.
2007
Authors
Gama, J; Pedersen, RU;
Publication
Learning from Data Streams: Processing Techniques in Sensor Networks
Abstract
Sensor networks operate in dynamic environments, with distributed sources of continuous data and computation under resource constraints. Learning in these environments faces new challenges: the need to continuously maintain a decision model consistent with the most recent data. Desirable properties of learning algorithms include: the ability to maintain an anytime model; the ability to modify the decision model whenever new information is available; the ability to forget outdated information; and the ability to detect and react to changes in the underlying process generating the data, monitoring the learning process and managing the trade-off between the cost of updating a model and the benefits in performance gains. In this chapter we illustrate these ideas in two learning scenarios - centralized and distributed - and present illustrative algorithms for these contexts. © 2007 Springer-Verlag Berlin Heidelberg.
1994
Authors
Brazdil, P; Gama, J; Henery, B;
Publication
Machine Learning: ECML-94, European Conference on Machine Learning, Catania, Italy, April 6-8, 1994, Proceedings
Abstract
2003
Authors
Castillo, G; Gama, J; Breda, AM;
Publication
USER MODELING 2003, PROCEEDINGS
Abstract
We present Adaptive Bayes, an adaptive incremental version of Naive Bayes, to model a prediction task based on learning styles in the context of an Adaptive Hypermedia Educational System. Since the student's preferences can change over time, this task is related to a problem known as concept drift in the machine learning community. For this class of problems an adaptive predictive model, able to adapt quickly to the user's changes, is desirable. The results from the conducted experiments show that Adaptive Bayes is a simple and suitable choice for this kind of prediction task in user modeling.
2000
Authors
Gama, J;
Publication
ADVANCES IN ARTIFICIAL INTELLIGENCE
Abstract
Naive Bayes is a well-known and well-studied algorithm in both statistics and machine learning. Despite its limitations with respect to expressive power, this procedure has surprisingly good performance in a wide variety of domains, including many where there are clear dependencies between attributes. In this paper we address its main perceived limitation - its inability to deal with attribute dependencies. We present Linear Bayes, which uses, for the continuous attributes, a multivariate normal distribution to compute the required probabilities. In this way, the interdependencies between the continuous attributes are taken into account. In the empirical evaluation, we compare Linear Bayes against a naive Bayes that discretizes continuous attributes, a naive Bayes that assumes a univariate Gaussian for continuous attributes, and a standard linear discriminant function. We show that Linear Bayes is a plausible algorithm that competes quite well against other well-established techniques.
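The multivariate-normal idea described in the abstract can be sketched with per-class means and a pooled covariance, which yields linear discriminants between classes. This is a hedged illustration of the general technique, not the paper's code; the function names and the pooled-covariance choice are assumptions made for the sketch.

```python
import numpy as np

def fit_linear_bayes(X, y):
    """Per-class priors and means plus a pooled covariance matrix,
    as in a linear-discriminant-style Gaussian class model."""
    classes = np.unique(y)
    priors, means = {}, {}
    pooled = np.zeros((X.shape[1], X.shape[1]))
    for c in classes:
        Xc = X[y == c]
        priors[c] = len(Xc) / len(X)
        means[c] = Xc.mean(axis=0)
        pooled += (Xc - means[c]).T @ (Xc - means[c])
    pooled /= (len(X) - len(classes))
    return classes, priors, means, np.linalg.inv(pooled)

def predict(model, x):
    classes, priors, means, inv_cov = model
    # The discriminant is linear in x because the covariance is shared.
    scores = [np.log(priors[c]) + x @ inv_cov @ means[c]
              - 0.5 * means[c] @ inv_cov @ means[c] for c in classes]
    return classes[int(np.argmax(scores))]
```

Because the covariance is shared across classes, the quadratic term in the Gaussian log-density cancels between classes, which is what makes the decision boundary linear.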
2002
Authors
Gama, J; Castillo, G;
Publication
ADVANCES IN ARTIFICIAL INTELLIGENCE - IBERAMIA 2002, PROCEEDINGS
Abstract
Several researchers have studied the application of Machine Learning techniques to the task of user modeling. As most of them point out, this task requires learning algorithms that work on-line, incorporate new information incrementally, and exhibit the capacity to deal with concept drift. In this paper we present Adaptive Bayes, an extension of the well-known naive Bayes, one of the most commonly used learning algorithms for the task of user modeling. Adaptive Bayes is an incremental learning algorithm that can work on-line. We have evaluated Adaptive Bayes in both frameworks. Using a set of benchmark problems from the UCI repository [2], and using several evaluation statistics, all the adaptive systems show significant advantages in comparison with their non-adaptive versions.
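The incremental, count-based naive Bayes that Adaptive Bayes builds on can be sketched as below. This is an illustrative sketch of an incremental naive Bayes over nominal attributes, not the paper's exact adaptive update rule; the class name, the `weight` parameter (a hook for adaptation), and the Laplace smoothing constants are assumptions.

```python
from collections import defaultdict

class IncrementalNB:
    """Count-based naive Bayes over nominal attributes, updated one example
    at a time; weighted updates can serve as a hook for adaptive variants."""
    def __init__(self):
        self.class_counts = defaultdict(float)
        self.attr_counts = defaultdict(float)  # (class, attr_idx, value) -> count
        self.total = 0.0

    def update(self, x, y, weight=1.0):
        """Fold one labelled example into the sufficient statistics."""
        self.class_counts[y] += weight
        self.total += weight
        for i, v in enumerate(x):
            self.attr_counts[(y, i, v)] += weight

    def predict(self, x):
        """Return the class maximizing the (Laplace-smoothed) posterior."""
        best, best_score = None, float("-inf")
        for c, cc in self.class_counts.items():
            score = cc / self.total
            for i, v in enumerate(x):
                score *= (self.attr_counts[(c, i, v)] + 1) / (cc + 2)
            if score > best_score:
                best, best_score = c, score
        return best
```

Because the model is just a table of counts, processing each example is O(number of attributes), which is what makes the on-line, incremental setting described in the abstract feasible.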