2025
Authors
Cerqueira, V; Roque, L; Soares, C;
Publication
MACHINE LEARNING
Abstract
Accurate evaluation of forecasting models is essential for ensuring reliable predictions. Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score, using metrics such as SMAPE. While convenient, averaging performance over all samples dilutes relevant information about model behaviour under varying conditions. This limitation is especially problematic for time series forecasting, where multiple layers of averaging (across time steps, horizons, and multiple time series in a dataset) can mask relevant performance variations. We address this limitation by proposing ModelRadar, a framework for evaluating univariate time series forecasting models across multiple aspects, such as stationarity, the presence of anomalies, or the forecasting horizon. We demonstrate the advantages of this framework by comparing 24 forecasting methods, including classical approaches and different machine learning algorithms. PatchTST, a state-of-the-art transformer-based neural network architecture, performs best overall, but its superiority varies with forecasting conditions. For instance, with respect to the forecasting horizon, we found that PatchTST (and other neural networks) outperforms classical approaches only for multi-step ahead forecasting. Another relevant insight is that classical approaches such as ETS or Theta are notably more robust in the presence of anomalies. These and other findings highlight the importance of aspect-based model evaluation for both practitioners and researchers. ModelRadar is available as a Python package.
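To make the idea of aspect-based evaluation concrete, here is a minimal sketch (not the ModelRadar API; column names and data are hypothetical) contrasting a single aggregate SMAPE score with SMAPE computed separately per forecasting horizon:

```python
# Minimal illustration of aspect-based evaluation (not the ModelRadar API):
# compare one overall SMAPE score against SMAPE sliced by forecasting horizon.
import numpy as np
import pandas as pd

def smape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Symmetric mean absolute percentage error, in percent."""
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return float(np.mean(np.abs(y_pred - y_true) / denom) * 100.0)

# Hypothetical forecast table: one row per (series, horizon step) prediction.
df = pd.DataFrame({
    "unique_id": ["s1"] * 4 + ["s2"] * 4,
    "horizon":   [1, 2, 3, 4] * 2,
    "y":         [10.0, 11.0, 12.0, 13.0, 5.0, 5.5, 6.0, 6.5],
    "y_hat":     [10.2, 11.5, 13.0, 15.0, 5.1, 5.4, 6.4, 7.3],
})

# Single aggregate score: convenient, but it hides how error grows with horizon.
print("overall SMAPE:", smape(df["y"].values, df["y_hat"].values))

# Aspect-based view: score each horizon (or any other condition) separately.
per_horizon = df.groupby("horizon").apply(
    lambda g: smape(g["y"].values, g["y_hat"].values)
)
print(per_horizon)
```

The same slicing could be applied to any other aspect mentioned in the abstract (stationary vs. non-stationary series, series with or without anomalies), which is what exposes performance differences that a single averaged score conceals.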
2025
Authors
Roque, L; Soares, C; Cerqueira, V; Torgo, L;
Publication
CoRR
Abstract
2025
Authors
Inácio, R; Kokkinogenis, Z; Cerqueira, V; Soares, C;
Publication
CoRR
Abstract
2025
Authors
Tuna, R; Soares, C;
Publication
CoRR
Abstract
2025
Authors
Liguori, A; Caroprese, L; Minici, M; Veloso, B; Spinnato, F; Nanni, M; Manco, G; Gama, J;
Publication
NEUROCOMPUTING
Abstract
In real-world scenarios, numerous phenomena generate a series of events that occur in continuous time. Point processes provide a natural mathematical framework for modeling these event sequences. In this comprehensive survey, we aim to explore probabilistic models that capture the dynamics of event sequences through temporal processes. We revise the notion of event modeling and provide the mathematical foundations that underpin the existing literature on this topic. To structure our survey effectively, we introduce an ontology that categorizes the existing approaches considering three horizontal axes: modeling, inference and estimation, and application. We conduct a systematic review of the existing approaches, with a particular focus on those leveraging deep learning techniques. Finally, we delve into the practical applications where these proposed techniques can be harnessed to address real-world problems related to event modeling. Additionally, we provide a selection of benchmark datasets that can be employed to validate the approaches for point processes.
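For readers unfamiliar with the formalism the abstract refers to, the central mathematical object of a temporal point process is the conditional intensity function. The following is the standard textbook definition in generic notation (not necessarily the survey's own notation):

```latex
% Standard definitions for a temporal point process (generic notation):
% given the event history \mathcal{H}_t = \{t_1, \dots : t_i < t\},
% the conditional intensity is
\[
  \lambda^*(t) \;=\; \lim_{\Delta t \to 0}
  \frac{\Pr\bigl(\text{one event in } [t, t+\Delta t) \mid \mathcal{H}_t\bigr)}{\Delta t},
\]
% and the log-likelihood of an observed sequence \{t_1, \dots, t_n\} on [0, T] is
\[
  \log \mathcal{L} \;=\; \sum_{i=1}^{n} \log \lambda^*(t_i)
  \;-\; \int_{0}^{T} \lambda^*(s)\, \mathrm{d}s .
\]
```

Most of the models surveyed, including the deep learning approaches, differ mainly in how they parameterise and learn this intensity function.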
2025
Authors
Salazar, T; Gama, J; Araújo, H; Abreu, PH;
Publication
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Abstract
In the evolving field of machine learning, ensuring group fairness has become a critical concern, prompting the development of algorithms designed to mitigate bias in decision-making processes. Group fairness refers to the principle that a model's decisions should be equitable across different groups defined by sensitive attributes such as gender or race, ensuring that individuals from privileged groups and unprivileged groups are treated fairly and receive similar outcomes. However, achieving fairness in the presence of group-specific concept drift remains an unexplored frontier, and our research represents pioneering efforts in this regard. Group-specific concept drift refers to situations where one group experiences concept drift over time, while another does not, leading to a decrease in fairness even if accuracy (ACC) remains fairly stable. Within the framework of federated learning (FL), where clients collaboratively train models, its distributed nature further amplifies these challenges since each client can experience group-specific concept drift independently while still sharing the same underlying concept, creating a complex and dynamic environment for maintaining fairness. The most significant contribution of our research is the formalization and introduction of the problem of group-specific concept drift and its distributed counterpart, shedding light on its critical importance in the field of fairness. In addition, leveraging insights from prior research, we adapt an existing distributed concept drift adaptation algorithm to tackle group-specific distributed concept drift, which uses a multimodel approach, a local group-specific drift detection mechanism, and continuous clustering of models over time. The findings from our experiments highlight the importance of addressing group-specific concept drift and its distributed counterpart to advance fairness in machine learning.
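As a purely illustrative sketch of the underlying idea (this is not the paper's algorithm; the class, window size, and threshold are hypothetical), one can monitor prediction error separately per sensitive group, so that drift affecting only one group is not masked by a stable aggregate accuracy:

```python
# Hypothetical illustration: track a sliding window of errors per sensitive
# group and flag drift when a group's error rate rises above its reference.
from collections import defaultdict, deque

class GroupDriftMonitor:
    def __init__(self, window: int = 200, threshold: float = 0.10):
        self.window = window          # sliding error window size per group
        self.threshold = threshold    # allowed error increase over the reference
        self.errors = defaultdict(lambda: deque(maxlen=window))
        self.reference = {}           # per-group error rate frozen after warm-up

    def update(self, group: str, y_true: int, y_pred: int) -> bool:
        """Record one labeled prediction; return True if drift is flagged for this group."""
        buf = self.errors[group]
        buf.append(int(y_true != y_pred))
        if len(buf) < self.window:
            return False              # not enough observations for this group yet
        current = sum(buf) / len(buf)
        if group not in self.reference:
            self.reference[group] = current
            return False
        return current - self.reference[group] > self.threshold

# Usage: feed each labeled example together with its sensitive attribute.
monitor = GroupDriftMonitor()
drift_flagged = monitor.update(group="A", y_true=1, y_pred=0)
```

In a federated setting, each client would run such group-level monitoring locally, which is consistent with the abstract's emphasis on local, group-specific drift detection combined with model clustering over time.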