2023
Authors
Cerqueira, V; Torgo, L; Soares, C;
Publication
NEURAL PROCESSING LETTERS
Abstract
Evaluating predictive models is a crucial task in predictive analytics. This process is especially challenging with time series data because observations are not independent. Several studies have analyzed how different performance estimation methods compare with each other for approximating the true loss incurred by a given forecasting model. However, these studies do not address how the estimators behave for model selection: the ability to select the best solution among a set of alternatives. This paper addresses this issue. The goal of this work is to compare a set of estimation methods for model selection in time series forecasting tasks. This objective is split into two main questions: (i) analyze how often a given estimation method selects the best possible model; and (ii) quantify the performance loss incurred when the best model is not selected. Experiments were carried out using a case study that contains 3111 time series. The accuracy of the estimators for selecting the best solution is low, despite being significantly better than random selection. Moreover, the overall forecasting performance loss associated with the model selection process ranges from 0.28% to 0.58%. Yet, no considerable differences between different approaches were found. In addition, the sample size of the time series is an important factor in the relative performance of the estimators.
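To make the model-selection setting studied in this paper concrete, here is a minimal sketch (not the paper's code or data): an out-of-time holdout estimator is used to pick between two simple forecasters on a synthetic autoregressive series. The forecasters, the series, and the 80/20 split are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(1) series: y[t] = 0.8*y[t-1] + noise
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

def naive_forecast(train):
    """Predict the last observed value."""
    return train[-1]

def mean_forecast(train):
    """Predict the mean of the training window."""
    return train.mean()

def holdout_mse(series, model, split=0.8):
    """Estimate one-step-ahead MSE on an out-of-time holdout:
    train on everything before t, predict y[t], for t past the split."""
    cut = int(len(series) * split)
    errors = [(series[t] - model(series[:t])) ** 2
              for t in range(cut, len(series))]
    return float(np.mean(errors))

models = {"naive": naive_forecast, "mean": mean_forecast}
scores = {name: holdout_mse(y, m) for name, m in models.items()}
best = min(scores, key=scores.get)  # the model the estimator selects
```

For an AR(1) process with strong autocorrelation, the estimator should select the naive forecaster; the paper's question is how reliably such estimators make this choice across thousands of real series.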
2023
Authors
Cerqueira, V; Torgo, L; Soares, C;
Publication
MACHINE LEARNING
Abstract
The early detection of anomalous events in time series data is essential in many domains of application. In this paper we deal with critical health events, which represent a significant cause of mortality in intensive care units of hospitals. The timely prediction of these events is crucial for mitigating their consequences and improving healthcare. One of the most common approaches to tackle early anomaly detection problems is through standard classification methods. In this paper we propose a novel method that uses a layered learning architecture to address these tasks. One key contribution of our work is the idea of pre-conditional events, which denote arbitrary but computable relaxed versions of the event of interest. We leverage this idea to break the original problem into two hierarchical layers, which we hypothesize are easier to solve. The results suggest that the proposed approach leads to better performance relative to state-of-the-art approaches for critical health episode prediction.
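A toy sketch of the layered idea (not the paper's method, models, or data): a relaxed pre-conditional check (layer 1) gates a stricter event detector (layer 2) over sliding windows of a synthetic series. The thresholds, window length, and injected episode are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
series = rng.normal(size=200)
series[120:125] += 5.0  # injected anomalous episode

def layer1(window, thr=3.0):
    """Layer 1 -- pre-conditional (relaxed) event:
    any single value in the window exceeds thr."""
    return float(window.max()) > thr

def layer2(window, thr=3.0):
    """Layer 2 -- full event, only evaluated on layer-1 positives:
    two consecutive values in the window exceed thr."""
    above = window > thr
    return bool(np.any(above[:-1] & above[1:]))

# Hierarchical detection: layer 2 is cheaper overall because
# it only runs where the relaxed layer-1 condition already fired.
hits = [s for s in range(len(series) - 5)
        if layer1(series[s:s + 5]) and layer2(series[s:s + 5])]
```

The decomposition mirrors the paper's hypothesis: each layer solves an easier subproblem than detecting the full event directly.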
2023
Authors
Freitas, F; Brazdil, P; Soares, C;
Publication
Discovery Science - 26th International Conference, DS 2023, Porto, Portugal, October 9-11, 2023, Proceedings
Abstract
Many current AutoML platforms include a very large space of alternatives (the configuration space) that makes it difficult to identify the best alternative for a given dataset. In this paper we explore a method that can reduce a large configuration space to a significantly smaller one and thus help reduce the search time for the potentially best workflow. We empirically validate the method on a set of workflows that include four ML algorithms (SVM, RF, LogR and LD) with different sets of hyperparameters. Our results show that it is possible to reduce the given space by more than one order of magnitude, from a few thousand to tens of workflows, while the risk that the best workflow is eliminated is nearly zero. After the reduction, the system is about one order of magnitude faster than the original, while maintaining the same predictive accuracy and loss.
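One simple way to picture this kind of reduction (a hypothetical sketch, not the paper's algorithm): given a meta-data matrix of workflow performance on past datasets, keep only the workflows that ranked in the top-k on at least one dataset. The matrix shape, uniform scores, and top-k rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical meta-data: accuracy of each workflow (columns)
# measured on a portfolio of past datasets (rows).
n_datasets, n_workflows = 30, 200
perf = rng.uniform(0.5, 1.0, size=(n_datasets, n_workflows))

def reduce_space(perf, top_k=5):
    """Keep only workflows that ranked in the top-k on at least one
    past dataset; everything else is pruned from the search space."""
    order = np.argsort(-perf, axis=1)   # best-first workflow ids per dataset
    return np.unique(order[:, :top_k])  # union of per-dataset top-k sets

kept = reduce_space(perf, top_k=5)
```

By construction the per-dataset best workflow always survives, so on the meta-data itself the risk of eliminating the winner is zero; the paper's empirical question is whether this holds on new datasets.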
2023
Authors
Baptista, T; Soares, C; Oliveira, T; Soares, F;
Publication
APPLIED SCIENCES-BASEL
Abstract
Deep learning approaches require a large amount of data to be transferred to centralized entities. However, this is often not a feasible option in healthcare, as it raises privacy concerns over sharing sensitive information. Federated Learning (FL) aims to address this issue by allowing machine learning without transferring the data to a centralized entity. FL has shown great potential to ensure privacy in digital healthcare while maintaining performance. Despite this, there is a lack of research on the impact of different types of data heterogeneity on the results. In this study, we investigate the robustness of various FL strategies on different data distributions and data quality for glaucoma diagnosis using retinal fundus images. We use RetinaQualEvaluator to generate quality labels for the datasets and then a data distributor to achieve our desired distributions. Finally, we evaluate the performance of the different strategies on local data and an independent test dataset. We observe that federated learning shows the potential to enable high-performance models without compromising sensitive data. Furthermore, we find that FedProx is better suited to scenarios where the distribution and quality of the participating clients' data are diverse, while incurring lower communication cost.
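The distinguishing feature of FedProx over plain FedAvg is a proximal term that penalizes a client's local model for drifting away from the current global model, which stabilizes training under heterogeneous client data. A minimal sketch of one client's local update, assuming a least-squares loss and plain gradient descent (the loss, learning rate, and data are illustrative, not the paper's setup):

```python
import numpy as np

def fedprox_local_update(w_global, X, y, mu=0.1, lr=0.05, steps=50):
    """One client's local training: least-squares loss plus the FedProx
    proximal term (mu/2)*||w - w_global||^2, whose gradient mu*(w - w_global)
    pulls the local model back toward the global one."""
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=100)

w_global = np.zeros(3)
w_local = fedprox_local_update(w_global, X, y)
```

Setting `mu=0` recovers a plain local update; larger `mu` keeps clients closer together, which is exactly the knob that matters when client distributions diverge.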
2023
Authors
Baghcheband, H; Soares, C; Reis, LP;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT I
Abstract
In recent years, the increasing availability of distributed data has led to a growing interest in transfer learning across multiple nodes. However, local data may not be adequate to learn sufficiently accurate models, and the problem of learning from multiple distributed sources remains a challenge. To address this issue, Machine Learning Data Markets (MLDM) have been proposed as a potential solution. In MLDM, autonomous agents exchange relevant data in a cooperative relationship to improve their models. Previous research has shown that data exchange can lead to better models, but this had only been demonstrated with two agents. In this paper, we present an extended evaluation of a simple version of the MLDM framework in a collaborative scenario. Our experiments show that data exchange has the potential to improve learning performance, even in a simple version of MLDM. The findings show a direct correlation between the number of agents and the performance gain, and an inverse correlation between performance and data batch size. The results of this study provide important insights into the effectiveness of MLDM and how it can be used to improve learning performance in distributed systems. Increasing the number of agents yields a more efficient system, while larger data batch sizes can decrease its global performance. These observations highlight the importance of considering both the number of agents and the data batch sizes when designing distributed learning systems using the MLDM framework.
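A toy sketch of the exchange mechanism (not the MLDM framework itself): two agents each hold a small private sample, and one agent retrains after receiving the other's batch. The nearest-centroid learner, Gaussian data, and sample sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample(n_per_class):
    """Two well-separated Gaussian classes in 2-D."""
    X = np.vstack([rng.normal(-1.5, 1.0, size=(n_per_class, 2)),
                   rng.normal(+1.5, 1.0, size=(n_per_class, 2))])
    y = np.repeat([0, 1], n_per_class)
    return X, y

def centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Nearest-centroid classifier: fit class centroids, score on test data."""
    c0, c1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    pred = (np.linalg.norm(X_te - c1, axis=1) <
            np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return float((pred == y_te).mean())

# Each agent starts with a small private sample; a shared test set scores them.
X_a, y_a = sample(5)        # agent A: 10 points
X_b, y_b = sample(5)        # agent B: 10 points
X_te, y_te = sample(500)

acc_alone = centroid_accuracy(X_a, y_a, X_te, y_te)

# Exchange step: agent A also trains on the batch received from agent B.
X_pool = np.vstack([X_a, X_b])
y_pool = np.concatenate([y_a, y_b])
acc_after = centroid_accuracy(X_pool, y_pool, X_te, y_te)
```

The sketch shows only the mechanism; the paper's contribution is evaluating how such gains scale with the number of agents and the batch size.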
2023
Authors
dos Santos, MR; de Carvalho, ACPLF; Soares, C;
Publication
CoRR