2010
Authors
Sebastiao, R; Gama, J; Rodrigues, PP; Bernardes, J;
Publication
KNOWLEDGE DISCOVERY FROM SENSOR DATA
Abstract
Histograms are a common technique for density estimation and have been widely used as a tool in exploratory data analysis. Learning histograms from static and stationary data is a well-known topic. Nevertheless, very few works discuss this problem when we have a continuous flow of data generated from dynamic environments. The scope of this paper is to detect changes in high-speed time-changing data streams. To address this problem, we construct histograms able to process examples once, at the rate they arrive. The main goal of this work is to continuously maintain a histogram consistent with the current state of nature. We study strategies to detect changes in the distribution generating examples, and adapt the histogram to the most recent data by forgetting outdated data. We use the Partition Incremental Discretization algorithm, which was designed to learn histograms from high-speed data streams. We present a method to detect whenever a change in the distribution generating examples occurs. The base idea consists of monitoring distributions from two different time windows: the reference window, reflecting the distribution observed in the past, and the current window, which receives the most recent data. The current window is cumulative and can have a fixed or an adaptive step, depending on the distance between distributions. We compare both distributions using the Kullback-Leibler divergence, defining a threshold for the change detection decision based on the asymmetry of this measure. We evaluated our algorithm with controlled artificial data sets and compared the proposed approach with nonparametric tests. We also present results with real-world data sets from industrial and medical domains. These results suggest that an adaptive window step yields a higher probability of change detection and faster detection rates, with few false positive alarms.
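The two-window comparison described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the smoothing constant and the threshold value are assumptions for the sketch, and the asymmetry of KL is handled here simply by checking both directions.

```python
import math

def kl_divergence(p, q, eps=1e-10):
    """Kullback-Leibler divergence KL(p || q) between two histograms,
    given as lists of bin probabilities (eps-smoothed to avoid log(0))."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def detect_change(reference, current, threshold=0.1):
    """Flag a distribution change when the divergence between the
    reference-window and current-window histograms exceeds a threshold.
    KL is asymmetric, so both directions are monitored."""
    return max(kl_divergence(reference, current),
               kl_divergence(current, reference)) > threshold
```

For identical histograms the divergence is (near) zero and no change is signaled; a clear shift in bin mass trips the threshold.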
2011
Authors
Bosnic, Z; Rodrigues, PP; Kononenko, I; Gama, J;
Publication
Advances in Intelligent and Soft Computing
Abstract
Accurately predicting values for dynamic data streams is a challenging task in decision and expert systems, due to high data flow rates, limited storage and the requirement to quickly adapt a model to new data. We propose an approach for correcting predictions for data streams which is based on a reliability estimate for individual regression predictions. In our work, we implement the proposed technique and test it on a real-world problem: prediction of the electricity load for a selected European geographical region. For predicting the electricity load values we implement two regression models: a neural network and the k nearest neighbors algorithm. The results show that our method performs better than the reference method (i.e., the Kalman filter), significantly improving the accuracy of the original streaming predictions. © 2011 Springer-Verlag Berlin Heidelberg.
2011
Authors
Gama, J; Carvalho, A; Krishnaswamy, S; Rodrigues, PP;
Publication
Proceedings of the ACM Symposium on Applied Computing
Abstract
2004
Authors
Gama, J; Medas, P; Castillo, G; Rodrigues, P;
Publication
ADVANCES IN ARTIFICIAL INTELLIGENCE - SBIA 2004
Abstract
Most of the work in machine learning assumes that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the distribution that generates the examples changes over time. We present a method for detecting changes in the probability distribution of examples. The idea behind the drift detection method is to control the online error rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the current model. Statistical theory guarantees that while the distribution is stationary the error will decrease; when the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the current context we define a warning level and a drift level. A new context is declared if, in a sequence of examples, the error increases, reaching the warning level at example k(w) and the drift level at example k(d). This is an indication of a change in the distribution of the examples. The algorithm then learns a new model using only the examples since k(w). The method was tested with a set of eight artificial datasets and a real-world dataset, using three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show good performance in detecting drift and in learning the new concept. We also observe that the method is independent of the learning algorithm.
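The warning/drift mechanism described above can be sketched as follows. This is an illustrative sketch, not the paper's exact code: the class name is invented, and the confidence multipliers (2 and 3 standard deviations for warning and drift) and the 30-example warm-up are common choices assumed here.

```python
import math

class DriftDetector:
    """Sketch of error-rate drift detection: track the online error rate
    p and its standard deviation s, record the best (p_min, s_min) seen,
    and signal levels when p + s rises significantly above that minimum."""

    WARMUP = 30  # assumed minimum number of examples before signaling

    def __init__(self):
        self.n = 0
        self.errors = 0
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, misclassified):
        """Feed one prediction outcome; returns 'normal', 'warning' or 'drift'."""
        self.n += 1
        self.errors += int(misclassified)
        p = self.errors / self.n
        s = math.sqrt(p * (1 - p) / self.n)
        if self.n < self.WARMUP:
            return "normal"
        if p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = p, s   # new best operating point
        if p + s >= self.p_min + 3 * self.s_min:
            return "drift"
        if p + s >= self.p_min + 2 * self.s_min:
            return "warning"
        return "normal"
```

While the stream is stationary the error rate keeps (p + s) near its minimum; after a change, misclassifications accumulate and the drift level is eventually crossed.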
2005
Authors
Gama, J; Medas, P; Rodrigues, P;
Publication
Proceedings of the ACM Symposium on Applied Computing
Abstract
This paper presents a system for induction of a forest of functional trees from data streams, able to detect concept drift. The Ultra Fast Forest of Trees (UFFT) is an incremental algorithm that works online, processing each example in constant time and performing a single scan over the training examples. It uses analytical techniques to choose the splitting criteria, and the information gain to estimate the merit of each possible splitting test. For multi-class problems the algorithm grows a binary tree for each possible pair of classes, leading to a forest of trees. Decision nodes and leaves contain naive-Bayes classifiers playing different roles during the induction process. Naive-Bayes classifiers in leaves are used to classify test examples; naive-Bayes classifiers in inner nodes can be used as multivariate splitting tests if chosen by the splitting criteria, and are also used to detect drift in the distribution of the examples that traverse the node. When a drift is detected, the sub-tree rooted at that node is pruned. The use of naive-Bayes classifiers at leaves to classify test examples, the use of splitting tests based on the outcome of naive-Bayes, and the use of naive-Bayes classifiers at decision nodes to detect drift are all obtained directly from the sufficient statistics required to compute the splitting criteria, without additional computations. This is a main advantage in the context of high-speed data streams. The methodology was tested with artificial and real-world data sets. The experimental results show very good performance in comparison to a batch decision tree learner, and a high capacity to detect and react to drift. Copyright 2005 ACM.
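The one-vs-one decomposition that produces the forest can be sketched generically. This is a minimal illustration under stated assumptions: `train_binary` is a stand-in for any binary learner (UFFT uses incremental functional trees), and the function names are invented for the sketch. For k classes the scheme trains k(k-1)/2 pairwise models and aggregates them by majority vote.

```python
from itertools import combinations
from collections import Counter

def train_pairwise_forest(train_binary, X, y):
    """One-vs-one decomposition: fit one binary model per pair of classes,
    each trained only on the examples belonging to that pair."""
    forest = {}
    for a, b in combinations(sorted(set(y)), 2):
        idx = [i for i, label in enumerate(y) if label in (a, b)]
        forest[(a, b)] = train_binary([X[i] for i in idx], [y[i] for i in idx])
    return forest

def predict_pairwise(forest, x):
    """Each pairwise model votes for one of its two classes;
    the class with the most votes wins."""
    votes = Counter(model(x) for model in forest.values())
    return votes.most_common(1)[0][0]
```

Any binary learner can be plugged in; in the streaming setting each pairwise tree would simply be updated incrementally with the examples of its two classes.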
2009
Authors
Silva, MM; Sousa, C; Sebastiao, R; Gama, J; Mendonca, T; Rocha, P; Esteves, S;
Publication
MED: 2009 17TH MEDITERRANEAN CONFERENCE ON CONTROL & AUTOMATION, VOLS 1-3
Abstract
This paper presents the Total Mass Target Controlled Infusion algorithm. The system comprises an On-Line tuned Algorithm for Recovery Detection (OLARD) after an initial bolus administration, and a Bayesian identification method for parametric estimation based on sparse measurements of the accessible signal. To design the drug dosage profile, two algorithms are proposed here. During the transient phase, an Input Variance Control (IVC) algorithm is used; it is based on the concept of TCI and aims to steer the drug effect to a predefined target value within an a priori fixed interval of time. After the steady-state phase is reached, the drug dose regimen is controlled by a Total Mass Control (TMC) algorithm. The mass control law for compartmental systems is robust even in the presence of parameter uncertainties. The feasibility of the whole system has been evaluated for the case of the Neuromuscular Blockade (NMB) level, and was tested both in simulation and in real cases.