2024
Authors
Rodrigues, EM; Baghoussi, Y; Mendes-Moreira, J;
Publication
EXPERT SYSTEMS
Abstract
Deep learning models are widely used in multivariate time series forecasting, yet they have high computational costs. One way to reduce this cost is to reduce data dimensionality, which involves removing unimportant or low-importance information with an appropriate method. This work presents a study of an explainability-based feature selection framework composed of four methods (IMV-LSTM Tensor, LIME-LSTM, Average SHAP-LSTM, and Instance SHAP-LSTM) that aims to turn the complexity of the LSTM black-box model to its advantage, with the end goal of improving error metrics and reducing the computational cost of a forecasting task. To test the framework, three datasets with a total of 101 multivariate time series were used, with the explainability methods outperforming the baseline methods on most of the data, both in error metrics and in computation time for LSTM model training.
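As an illustration of the kind of selection step such a framework performs, the sketch below ranks features by their mean absolute attribution and keeps the top-k before the LSTM is retrained on the reduced inputs. This is a minimal sketch only: the function name, the top_k parameter, and the assumption that SHAP or LIME attributions have already been computed are illustrative choices, not the paper's implementation.

```python
import numpy as np

def average_shap_selection(attributions, feature_names, top_k):
    """Illustrative helper (not from the paper): rank features by mean
    absolute attribution and keep the top_k.

    attributions: array of shape (n_instances, n_timesteps, n_features)
                  holding precomputed SHAP or LIME values for an LSTM forecaster.
    """
    # Average the magnitude of the attributions over instances and time steps.
    importance = np.abs(attributions).mean(axis=(0, 1))
    ranking = np.argsort(importance)[::-1]
    selected = ranking[:top_k]
    return [feature_names[i] for i in selected], importance

# Illustrative usage with random attributions for 5 features.
rng = np.random.default_rng(0)
attr = rng.normal(size=(100, 24, 5))
names = ["f1", "f2", "f3", "f4", "f5"]
kept, scores = average_shap_selection(attr, names, top_k=3)
print(kept)
```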
2024
Authors
Mazarei, A; Sousa, R; Mendes-Moreira, J; Molchanov, S; Ferreira, HM;
Publication
INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS
Abstract
Outlier detection is a widely used technique for identifying anomalous or exceptional events across various contexts. It has proven valuable in applications such as fault detection, fraud detection, and real-time monitoring systems. Detecting outliers in real time is crucial in several industries, such as financial fraud detection and quality control in manufacturing processes. In the context of big data, the amount of data generated is enormous, and traditional batch-mode methods are not practical since the entire dataset is not available. Limited computational resources further compound this issue. The boxplot is a widely used batch-mode algorithm for outlier detection whose construction involves several derived statistics. However, the lack of an incremental closed form for these statistical calculations poses considerable challenges for its application to big data. We propose an incremental/online version of the boxplot algorithm to address these challenges. Our proposed algorithm is based on an approximation approach that involves numerical integration of the histogram and calculation of the cumulative distribution function. This approach is independent of the dataset's distribution, making it effective for all types of distributions, whether skewed or not. To assess the efficacy of the proposed algorithm, we conducted tests using simulated datasets featuring varying degrees of skewness. Additionally, we applied the algorithm to a real-world dataset concerning software fault detection, which posed a considerable challenge. The experimental results underscored the robust performance of our proposed algorithm, highlighting efficacy comparable to batch-mode methods that access the entire dataset. Our online boxplot method, which leverages the dataset's distribution to define the whiskers, consistently achieved excellent outlier detection results. Notably, our algorithm demonstrated computational efficiency, maintaining constant memory usage with minimal hyperparameter tuning.
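A minimal sketch of the histogram-and-CDF idea described above: bin counts are updated incrementally, quantiles are read off the cumulative distribution, and the usual 1.5 x IQR whisker rule flags outliers. The fixed bin range, bin count, and whisker multiplier below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class OnlineBoxplot:
    """Illustrative sketch (not the paper's code): approximate boxplot whiskers
    from a streaming series using a fixed-bin histogram."""

    def __init__(self, lo, hi, n_bins=256, k=1.5):
        self.edges = np.linspace(lo, hi, n_bins + 1)  # assumed known value range
        self.counts = np.zeros(n_bins)
        self.k = k  # whisker multiplier (classic 1.5 x IQR rule)

    def update(self, x):
        # Constant-memory update: only bin counts are stored, never the raw data.
        i = np.clip(np.searchsorted(self.edges, x) - 1, 0, len(self.counts) - 1)
        self.counts[i] += 1

    def _quantile(self, q):
        # Integrate the histogram into an empirical CDF, then invert it.
        cdf = np.cumsum(self.counts) / self.counts.sum()
        idx = np.searchsorted(cdf, q)
        return self.edges[idx + 1]

    def is_outlier(self, x):
        q1, q3 = self._quantile(0.25), self._quantile(0.75)
        iqr = q3 - q1
        return x < q1 - self.k * iqr or x > q3 + self.k * iqr

# Illustrative usage on a skewed (log-normal) stream.
rng = np.random.default_rng(1)
box = OnlineBoxplot(lo=0.0, hi=50.0)
for v in rng.lognormal(mean=1.0, sigma=0.5, size=10_000):
    box.update(v)
print(box.is_outlier(40.0), box.is_outlier(3.0))
```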
2024
Authors
Baldo, A; Ferreira, PJS; Mendes-Moreira, J;
Publication
EXPERT SYSTEMS
Abstract
With technological advancements, large amounts of data are being captured by sensors, smartphones, wearable devices, and so forth. These vast datasets are stored in data centres and used to build data-driven models for the condition monitoring of infrastructures and systems through future data mining tasks. However, these datasets often surpass the processing capabilities of traditional information systems and methodologies due to their sheer size. Additionally, not all samples within these datasets contribute valuable information during the model training phase, leading to inefficiencies. The processing and training of Machine Learning algorithms become time-consuming, and storing all the data demands excessive space, contributing to the Big Data challenge. In this paper, we propose two novel techniques to reduce large time-series datasets into more compact versions without undermining the predictive performance of the resulting models. These methods also aim to decrease the time required for training the models and the storage space needed for the condensed datasets. We evaluated our techniques on five public datasets, employing three Machine Learning algorithms: Holt-Winters, SARIMA, and LSTM. The outcomes indicate that for most of the datasets examined, our techniques maintain, and in several instances enhance, the forecasting accuracy of the models. Moreover, we significantly reduced the time required to train the Machine Learning algorithms employed.
2024
Authors
Baghoussi, Y; Soares, C; Moreira, JM;
Publication
Neural Comput. Appl.
Abstract
Traditional recurrent neural networks (RNNs) are essential for processing time-series data. However, they function as read-only models, lacking the ability to directly modify the data they learn from. In this study, we introduce the corrector long short-term memory (cLSTM), a Read & Write LSTM architecture that not only learns from the data but also dynamically adjusts it when necessary. The cLSTM model leverages two key components: (a) predicting LSTM’s cell states using Seasonal Autoregressive Integrated Moving Average (SARIMA) and (b) refining the training data based on discrepancies between actual and forecasted cell states. Our empirical validation demonstrates that cLSTM surpasses read-only LSTM models in forecasting accuracy across the Numenta Anomaly Benchmark (NAB) and M4 Competition datasets. Additionally, cLSTM exhibits superior performance in anomaly detection compared to hierarchical temporal memory (HTM) models.
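A heavily simplified sketch of the correction loop described above, assuming the per-step cell-state trace of a trained LSTM is already available as a 1-D array; the z-score threshold and local-median refinement rule are illustrative stand-ins, not the paper's exact procedure.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def refine_training_series(y, cell_states, order=(1, 0, 1), z=3.0, window=5):
    """Illustrative sketch (not the paper's code): flag time steps where the
    SARIMA forecast of the cell state disagrees with the observed cell state,
    and smooth the training series at those steps."""
    # (a) Fit SARIMA to the LSTM's cell-state trace and get in-sample predictions.
    res = SARIMAX(cell_states, order=order).fit(disp=False)
    predicted = res.fittedvalues
    # (b) Large standardized discrepancies mark suspicious training points.
    err = cell_states - predicted
    suspicious = np.abs(err - err.mean()) > z * err.std()
    # Refine the training data at those points with a local median (assumed rule).
    y = y.copy()
    for t in np.where(suspicious)[0]:
        lo, hi = max(0, t - window), min(len(y), t + window + 1)
        y[t] = np.median(y[lo:hi])
    return y, suspicious

# Illustrative usage with synthetic data standing in for a real cell-state trace.
rng = np.random.default_rng(2)
states = np.sin(np.linspace(0, 20, 300)) + 0.05 * rng.normal(size=300)
series = states + 0.1 * rng.normal(size=300)
states[150] += 3.0   # a corrupted step shows up in the cell states...
series[150] += 3.0   # ...and in the training series itself
cleaned, flagged = refine_training_series(series, states)
```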
2024
Authors
---, MP; Mendes-Moreira, J;
Publication
Abstract
2024
Authors
Kumar, R; Bhanu, M; Mendes Moreira, J; Chandra, J;
Publication
ACM Computing Surveys
Abstract