Details
Name
Rita Paula Ribeiro
Position
Senior Researcher
Since
01 January 2008
Nationality
Portugal
Centre
Laboratório de Inteligência Artificial e Apoio à Decisão
Contacts
+351220402963
rita.p.ribeiro@inesctec.pt
2024
Authors
Davari, N; Veloso, B; Ribeiro, RP; Gama, J;
Publication
39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024
Abstract
Predictive maintenance methods play a crucial role in the early detection of failures and errors in machinery, preventing them from reaching critical stages. This paper presents a comprehensive study on a real-world dataset called MetroPT3, with data from a Metro do Porto train's air production unit (APU) system. The dataset comprises data collected from various analogue and digital sensors installed on the APU system, enabling the analysis of behavioural changes and deviations from normal patterns. We propose a data-driven predictive maintenance framework based on a Long Short-Term Memory Autoencoder (LSTM-AE) network. The LSTM-AE efficiently identifies abnormal data instances, leading to a reduction in false alarm rates. We also implement a Sparse Autoencoder (SAE) approach for comparative analysis. The experimental results demonstrate that the LSTM-AE outperforms the SAE regarding F1 Score, Recall, and Precision. Furthermore, to gain insights into the reasons for anomaly detection, we apply the SHAP method to determine the importance of features in the predictive maintenance model. This approach enhances the interpretability of the model to better support the decision-making process.
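The core mechanism the abstract describes — flagging an anomaly when the autoencoder's reconstruction error exceeds a threshold — can be sketched as follows. This is an illustrative toy, not the paper's LSTM-AE: `reconstruct` is a stand-in stub for a trained model, and the windows and threshold are invented values.

```python
def reconstruct(window):
    # Stand-in for a trained autoencoder: a real LSTM-AE would return its
    # reconstruction of the input window; normal data reconstructs well.
    return [0.9 * x for x in window]

def reconstruction_error(window):
    # Mean squared error between the input window and its reconstruction.
    recon = reconstruct(window)
    return sum((a - b) ** 2 for a, b in zip(window, recon)) / len(window)

def flag_anomalies(windows, threshold):
    # Signal an alarm for every window whose error exceeds the threshold.
    return [reconstruction_error(w) > threshold for w in windows]

normal = [1.0, 1.1, 0.9, 1.0]       # small values reconstruct closely
abnormal = [5.0, 9.0, 7.5, 8.0]     # large deviations inflate the error
flags = flag_anomalies([normal, abnormal], threshold=0.5)
```

In the paper's setting the threshold would be tuned on normal operating data so that false alarms stay rare.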
2024
Authors
Molina, M; Ribeiro, RP; Veloso, B; Gama, J;
Publication
ADVANCES IN INTELLIGENT DATA ANALYSIS XXII, PT I, IDA 2024
Abstract
Illegal landfills are a critical issue due to their environmental, economic, and public health impacts. This study leverages aerial imagery for environmental crime monitoring. While advances in artificial intelligence and computer vision hold promise, the challenge lies in training models with high-resolution literature datasets and adapting them to open-access low-resolution images. Considering the substantial quality differences and limited annotation, this research explores the adaptability of models across these domains. Motivated by the necessity for a comprehensive evaluation of waste detection algorithms, it advocates cross-domain classification and super-resolution enhancement to analyze the impact of different image resolutions on waste classification as an evaluation to combat the proliferation of illegal landfills. We observed performance improvements by enhancing image quality but noted an influence on model sensitivity, necessitating careful threshold fine-tuning.
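The "careful threshold fine-tuning" the abstract ends on can be sketched as a sweep over the decision threshold of a binary waste classifier, keeping the value that maximises F1 on validation data. The scores, labels, and grid below are toy values for illustration only.

```python
def f1_at(threshold, scores, labels):
    # F1 score of the binary decisions obtained at a given threshold.
    preds = [s >= threshold for s in scores]
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_threshold(scores, labels, grid):
    # Pick the grid point with the highest validation F1.
    return max(grid, key=lambda t: f1_at(t, scores, labels))

scores = [0.95, 0.80, 0.60, 0.40, 0.20]   # classifier confidences
labels = [True, True, True, False, False]  # ground-truth waste / no-waste
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
t = best_threshold(scores, labels, grid)
```

Re-tuning the threshold after super-resolution enhancement matters because, as the abstract notes, enhancement shifts the model's sensitivity.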
2024
Authors
Gama, J; Ribeiro, RP; Mastelini, S; Davari, N; Veloso, B;
Publication
JOURNAL OF WEB SEMANTICS
Abstract
Predictive Maintenance applications are increasingly complex, with interactions between many components. Black-box models are popular approaches based on deep-learning techniques due to their predictive accuracy. This paper proposes a neural-symbolic architecture that uses an online rule-learning algorithm to explain when the black-box model predicts failures. The proposed system solves two problems in parallel: (i) anomaly detection and (ii) explanation of the anomaly. For the first problem, we use an unsupervised state-of-the-art autoencoder. For the second problem, we train a rule learning system that learns a mapping from the input features to the autoencoder's reconstruction error. Both systems run online and in parallel. The autoencoder signals an alarm for the examples with a reconstruction error that exceeds a threshold. The causes of the signal alarm are hard for humans to understand because they result from a non-linear combination of sensor data. The rule that triggers that example describes the relationship between the input features and the autoencoder's reconstruction error. The rule explains the failure signal by indicating which sensors contribute to the alarm and allowing the identification of the component involved in the failure. The system can present global explanations for the black box model and local explanations for why the black box model predicts a failure. We evaluate the proposed system in a real-world case study of Metro do Porto and provide explanations that illustrate its benefits.
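The explanatory half of the architecture — a rule whose antecedent names the sensors behind an alarm — can be illustrated with a minimal rule structure. This `Rule` class, the feature names, and the thresholds are all invented for illustration; they are not the paper's actual online rule-learning algorithm.

```python
class Rule:
    def __init__(self, conditions, predicted_error):
        # conditions: list of (feature_name, operator, value) antecedents
        # mapping input features to a predicted reconstruction error.
        self.conditions = conditions
        self.predicted_error = predicted_error

    def covers(self, example):
        # A rule fires when every antecedent holds for the example.
        ops = {">": lambda a, b: a > b, "<=": lambda a, b: a <= b}
        return all(ops[op](example[f], v) for f, op, v in self.conditions)

    def explain(self):
        # The antecedent itself is the explanation: which sensors, which ranges.
        return " AND ".join(f"{f} {op} {v}" for f, op, v in self.conditions)

# A rule learned online might look like this (hypothetical sensors/values):
rule = Rule([("oil_temperature", ">", 80.0), ("motor_current", ">", 6.0)], 0.9)

alarm_example = {"oil_temperature": 85.2, "motor_current": 7.1}
explanation = rule.explain() if rule.covers(alarm_example) else None
```

When the autoencoder raises an alarm, the fired rule's antecedent points at the contributing sensors, and thus at the component likely involved in the failure.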
2024
Authors
Andrade, C; Ribeiro, RP; Gama, J;
Publication
ADVANCES IN ARTIFICIAL INTELLIGENCE, CAEPIA 2024
Abstract
E-commerce has become an essential aspect of modern life, providing consumers globally with convenience and accessibility. However, the high volume of short and noisy product descriptions in text streams of massive e-commerce platforms translates into an increased number of clusters, presenting challenges for standard model-based stream clustering algorithms. Standard LDA-based methods often lead to clusters dominated by single elements, effectively failing to manage datasets with varied cluster sizes. Our proposed Community-Based Topic Modeling with Contextual Outlier Handling (CB-TMCOH) algorithm introduces an approach to outlier detection in text data using transformer models for similarity calculations and graph-based clustering. This method efficiently separates outliers and improves clustering in large text datasets, demonstrating its utility not only in e-commerce applications but also proving effective for news and tweets datasets.
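The graph-based step the abstract mentions can be sketched as: connect items whose pairwise similarity exceeds a threshold, take the connected components as communities, and treat singleton components as outliers. Cosine similarity over toy vectors stands in here for the transformer embeddings CB-TMCOH uses; the vectors and threshold are invented.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def communities(vectors, threshold):
    n = len(vectors)
    parent = list(range(n))  # union-find forest over the similarity graph

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Merge every pair whose similarity clears the threshold.
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(vectors[i], vectors[j]) >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    clusters = [g for g in groups.values() if len(g) > 1]
    outliers = [g[0] for g in groups.values() if len(g) == 1]
    return clusters, outliers

vecs = [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0)]  # two similar items, one outlier
clusters, outliers = communities(vecs, threshold=0.95)
```

Separating singletons this way is what keeps noisy short descriptions from spawning the single-element clusters that trouble standard LDA-based stream clustering.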
2024
Authors
Mozolewski, M; Bobek, S; Ribeiro, RP; Nalepa, GJ; Gama, J;
Publication
Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part IV
Abstract
This study introduces a method to assess the quality of Explainable Artificial Intelligence (XAI) algorithms in dynamic data streams, concentrating on the fidelity and stability of feature-importance and rule-based explanations. We employ XAI metrics, such as fidelity and Lipschitz Stability, to compare explainers with each other and introduce the Comparative Expert Stability Index (CESI) for benchmarking explainers against domain knowledge. We adapted the aforementioned metrics to the streaming data scenario and tested them in an unsupervised classification scenario with simulated distribution shifts as different classes. The necessity for adaptable explainers in complex scenarios, such as failure detection, is underscored, stressing the importance of continued research into versatile explanation techniques to enhance XAI system robustness and interpretability.
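A local Lipschitz-style stability estimate, of the kind the abstract applies to explainers, can be sketched as the largest ratio between the change in the explanation vector and the change in the input, taken over perturbed neighbours; lower values mean more stable explanations. The explainer below is a toy linear stand-in, not SHAP/LIME or the paper's streaming variant.

```python
import math

def l2(u, v):
    # Euclidean distance between two vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def toy_explainer(x):
    # Stand-in for a feature-importance explainer; a stable explainer
    # returns similar importance vectors for nearby inputs.
    return [2.0 * x[0], 0.5 * x[1]]

def lipschitz_stability(explainer, x, neighbours):
    # Worst-case local ratio: explanation change over input change.
    ex = explainer(x)
    return max(l2(explainer(z), ex) / l2(z, x) for z in neighbours)

x = [1.0, 1.0]
neighbours = [[1.1, 1.0], [1.0, 0.9]]  # small perturbations of x
L = lipschitz_stability(toy_explainer, x, neighbours)
```

In a streaming setting this estimate would be recomputed over a sliding window, so drops in stability can flag distribution shift.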
Supervised theses
2023
Author
Ehsan Aminian
Institution
UP-FCUP
2023
Author
Sofia Vieira Santos Malpique Lopes
Institution
UP-FCUP
2023
Author
Nirbhaya Shaji
Institution
UP-FCUP
2023
Author
Inês Pinto e Silva
Institution
UP-FCUP