Details

  • Name

    Luís Emanuel Pereira
  • Role

    Research Assistant
  • Since

    1st October 2022
Publications

2024

Explaining Bounding Boxes in Deep Object Detectors Using Post Hoc Methods for Autonomous Driving Systems

Authors
Nogueira, C; Fernandes, L; Fernandes, JND; Cardoso, JS;

Publication
SENSORS

Abstract
Deep learning has rapidly increased in popularity, leading to the development of perception solutions for autonomous driving. The latter field leverages techniques developed for computer vision in other domains for accomplishing perception tasks such as object detection. However, the black-box nature of deep neural models and the complexity of the autonomous driving context motivate the study of explainability in these models that perform perception tasks. Hence, this work explores explainable AI techniques for the object detection task in the context of autonomous driving. An extensive and detailed comparison is carried out between gradient-based and perturbation-based methods (e.g., D-RISE). Moreover, several experimental setups are used with different backbone architectures and different datasets to observe the influence of these aspects on the explanations. All the techniques explored are saliency methods, making their interpretation and evaluation primarily visual. Nevertheless, numerical assessment methods are also used. Overall, D-RISE and guided backpropagation obtain more localized explanations. However, D-RISE highlights more meaningful regions, providing more human-understandable explanations. To the best of our knowledge, this is the first approach to obtaining explanations focusing on the regression of the bounding box coordinates.
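The perturbation-based idea behind methods such as D-RISE can be illustrated in a few lines: occlude the image with random coarse masks, score each masked image with the detector, and accumulate the masks weighted by their scores. The sketch below is a simplified RISE-style illustration, not the paper's implementation; `rise_saliency` and the stand-in `toy_score` detector are hypothetical names, and the "detector" is a toy function scoring intensity inside a fixed box.

```python
import numpy as np

def rise_saliency(image, score_fn, n_masks=200, grid=8, p=0.5, seed=0):
    """RISE-style perturbation saliency for a grayscale 2D image:
    occlude with random low-resolution binary masks and weight each
    mask by how well the detection score survives the occlusion."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    saliency = np.zeros((h, w), dtype=float)
    for _ in range(n_masks):
        # Coarse binary mask kept with probability p, upsampled by block repetition.
        coarse = (rng.random((grid, grid)) < p).astype(float)
        cell = (h // grid + 1, w // grid + 1)
        mask = np.kron(coarse, np.ones(cell))[:h, :w]
        saliency += score_fn(image * mask) * mask
    return saliency / n_masks

# Toy "detector": scores how much intensity survives inside a target box.
def toy_score(img, box=(10, 10, 20, 20)):
    y0, x0, y1, x1 = box
    return float(img[y0:y1, x0:x1].mean())

img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0          # bright "object" in the box
sal = rise_saliency(img, toy_score)
```

Pixels whose occlusion consistently lowers the detection score accumulate higher saliency, so the map peaks inside the object region.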

2024

Intrinsic Explainability for End-to-End Object Detection

Authors
Fernandes, L; Fernandes, JND; Calado, M; Pinto, JR; Cerqueira, R; Cardoso, JS;

Publication
IEEE ACCESS

Abstract
Deep Learning models are automating many daily routine tasks, suggesting that, in the future, even high-risk tasks such as healthcare and automated driving will be automated. However, due to the complexity of such deep learning models, it is challenging to understand their reasoning. Furthermore, the black-box nature of the designed deep learning models may undermine public confidence in critical areas. Current efforts on intrinsically interpretable models focus only on classification tasks, leaving a gap in models for object detection. Therefore, this paper proposes a deep learning model that is intrinsically explainable for the object detection task. The chosen design for such a model is a combination of the well-known Faster-RCNN model with the ProtoPNet model. For the Explainable AI experiments, the chosen performance metric was the similarity score from the ProtoPNet model. Our experiments show that this combination leads to a deep learning model that is able to explain its classifications, with similarity scores, using a visual bag of words, called prototypes, learned during the training process. Furthermore, the adoption of such an explainable method does not seem to hinder the performance of the proposed model, which achieved a mAP of 69% on the KITTI dataset and a mAP of 66% on the GRAZPEDWRI-DX dataset. Moreover, our explanations showed high reliability with respect to the similarity score.
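The similarity score mentioned above comes from the ProtoPNet family of models, where each learned prototype is compared against every patch of the feature map and the smallest distance is mapped to a bounded similarity. The sketch below illustrates that scoring step only (not the combined Faster-RCNN + ProtoPNet model); `protopnet_similarity` is a hypothetical name for this illustration, using the log-ratio form from the original ProtoPNet paper.

```python
import numpy as np

def protopnet_similarity(feature_map, prototypes, eps=1e-4):
    """ProtoPNet-style scoring: for each prototype, find the feature-map
    patch with the smallest squared L2 distance d, then map it to the
    bounded similarity log((d + 1) / (d + eps)) — large when d is small."""
    h, w, dim = feature_map.shape
    patches = feature_map.reshape(-1, dim)                      # (H*W, D)
    # Pairwise squared distances between every patch and every prototype.
    dists = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    min_d = dists.min(axis=0)                                   # best patch per prototype
    return np.log((min_d + 1.0) / (min_d + eps))

# Tiny example: one patch matches prototype 0 exactly, none match prototype 1.
feat = np.zeros((2, 2, 3))
feat[0, 0] = np.array([1.0, 0.0, 0.0])
protos = np.array([[1.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
sims = protopnet_similarity(feat, protos)
```

A prototype that exactly matches some patch gets a similarity near log(1/eps), while a prototype far from every patch scores near zero, which is what makes the score usable as an explanation ("this region looks like that prototype").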

2024

Exploring the differences between Multi-task and Single-task with the use of Explainable AI for lung nodule classification

Authors
Fernandes, L; Pereira, T; Oliveira, HP;

Publication
2024 IEEE 37TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS 2024

Abstract
Currently, lung cancer is one of the deadliest diseases that affects millions of people globally. However, Artificial Intelligence is being increasingly integrated with healthcare practices, with the goal of aiding in the early diagnosis of lung cancer. Although such methods have shown very promising results, they still lack transparency to the user, which consequently could make their generalised adoption a challenging task. Therefore, in this work we explore the use of post-hoc explainable methods to better understand the inner workings of an already established multitasking framework that executes the segmentation and the classification of lung nodules simultaneously. The idea behind this study is to understand how a multitasking approach impacts the model's performance in the lung nodule classification task when compared to single-task models. Our results show that the multitasking approach works as an attention mechanism by aiding the model to learn more meaningful features. Furthermore, the multitasking framework was able to achieve a better performance in regard to the explainability metric, with an increase of 7% when compared to our baseline, and also in the classification and segmentation tasks, with increases of 4.84% and 15.03%, respectively, when compared to the studied baselines.

2023

Multitask learning approach for lung nodule segmentation and classification in CT images

Authors
Fernandes, L; Oliveira, HP;

Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2023, Istanbul, Turkiye, December 5-8, 2023

Abstract
Amongst the different types of cancer, lung cancer is the one with the highest mortality rate; consequently, there is an urgent need to develop early detection methods to improve the survival probabilities of the patients. Due to the millions of deaths that are caused annually by cancer, there is large interest in the scientific community to develop deep learning models that can be employed in computer-aided diagnostic tools. Currently, in the literature, there are several works in the Radiomics field that try to develop new solutions by employing learning models for lung nodule classification. However, in these types of applications, it is usually required to extract the lung nodule from the input images using a segmentation mask made by a radiologist. This means that, in a clinical scenario, to be able to employ the developed learning models, the lung nodule must first be manually segmented. Considering the fact that several patients are attended daily in the hospital with suspicion of lung cancer, the segmentation of each lung nodule would become a tiresome task. Furthermore, the available algorithms for automatic lung nodule segmentation are not efficient enough to be used in a real application. In response to the current limitations of the state of the art, the proposed work evaluates a multitasking approach where both the segmentation and the classification tasks are executed in parallel. As a baseline, we also study a sequential approach where we first employ DL models to segment the lung nodule, crop the lung nodule from the input image, and finally classify the cropped nodule. Our results show that the multitasking approach is better than sequentially executing the segmentation and classification tasks for lung nodule classification. For instance, while the multitasking approach was able to achieve an AUC of 84.49% in the classification task, the sequential approach was only able to achieve an AUC of 72.43%. These results show that the proposed multitasking approach can become a viable alternative for the classification and segmentation of lung nodules. © 2023 IEEE.
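The multitasking setup described above amounts to a shared encoder feeding two heads (per-pixel segmentation and nodule classification) trained with a weighted sum of the two losses. The sketch below is a toy numpy forward pass illustrating that structure under invented dimensions and names (`shared_encoder`, `seg_head`, `cls_head`, weight `lam`); it is not the paper's architecture, which uses full DL models.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W):
    # Shared representation used by both tasks (ReLU of one linear layer).
    return np.maximum(x @ W, 0.0)

def seg_head(feat, W):
    # Per-pixel segmentation logits (image flattened to a vector here).
    return feat @ W

def cls_head(feat, w):
    # Single malignancy logit for the whole nodule.
    return feat @ w

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y, eps=1e-7):
    # Binary cross-entropy averaged over elements.
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

# Toy dimensions: 64-pixel "image", 16-dim shared features.
x = rng.normal(size=(1, 64))
W_enc = rng.normal(size=(64, 16)) * 0.1
W_seg = rng.normal(size=(16, 64)) * 0.1
w_cls = rng.normal(size=(16, 1)) * 0.1

feat = shared_encoder(x, W_enc)
seg_pred = sigmoid(seg_head(feat, W_seg))   # predicted nodule mask
cls_pred = sigmoid(cls_head(feat, w_cls))   # predicted malignancy probability

y_mask = rng.integers(0, 2, size=(1, 64)).astype(float)
y_cls = np.array([[1.0]])

lam = 0.5  # weighting between the two task losses
loss = bce(seg_pred, y_mask) + lam * bce(cls_pred, y_cls)
```

Because gradients from both losses flow through the same encoder, the segmentation signal can act as the attention-like regulariser the abstract describes, pushing the shared features toward the nodule region.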