2022
Authors
Silva, W; Goncalves, T; Harma, K; Schroder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;
Publication
SCIENTIFIC REPORTS
Abstract
Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnostic mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be of extreme utility by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most similar image overall, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large and publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth for the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a thorough and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and establish qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
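The nDCG metric used for the quantitative evaluation above can be sketched as follows; this is a minimal NumPy implementation of the standard definition, not the paper's exact evaluation code, and the relevance grades shown are illustrative:

```python
import numpy as np

def dcg(relevances):
    # Discounted Cumulative Gain: each graded relevance is discounted
    # by log2(rank + 1), so mistakes at the top of the ranking cost more.
    rel = np.asarray(relevances, dtype=float)
    discounts = np.log2(np.arange(2, len(rel) + 2))  # rank 1 -> log2(2), ...
    return float(np.sum(rel / discounts))

def ndcg(retrieved_relevances):
    # Normalized DCG: DCG of the retrieved ordering divided by the DCG
    # of the ideal ordering (relevances sorted in decreasing order).
    ideal = dcg(sorted(retrieved_relevances, reverse=True))
    return dcg(retrieved_relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered retrieval list scores 1.0; placing the only relevant item last lowers the score toward 0.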
2022
Authors
Neto, PC; Gonçalves, T; Pinto, JR; Silva, W; Sequeira, AF; Ross, A; Cardoso, JS;
Publication
CoRR
Abstract
2023
Authors
Ferreira, G; Teixeira, M; Belo, R; Silva, W; Cardoso, JS;
Publication
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN
Abstract
The application of machine learning algorithms to predict the mechanism of action (MoA) of drugs can be highly valuable and enable the discovery of new uses for known molecules. The developed methods are usually evaluated on small subsets of MoAs with large support, leading to deceptively good generalization. However, these datasets may not accurately represent practical use, due to the limited number of target MoAs. Accurate predictions for these rare drugs are important for drug discovery and should be a point of focus. In this work, we explore different training strategies to improve the performance of a well-established deep learning model for rare drug MoA prediction. We explored transfer learning by first training a model for common MoAs and then using it to initialize the learning of another model for rarer MoAs. We also investigated the use of a cascaded methodology, in which results from an initial model are used as additional inputs to the model for rare MoAs. Finally, we proposed and tested an extension of Mixup data augmentation for multilabel classification. The baseline model showed an AUC of 73.2% for common MoAs and 62.4% for rarer classes. Of the investigated methods, Mixup alone failed to improve the performance of the baseline classifier. Nonetheless, the other proposed methods outperformed the baseline for rare classes. Transfer learning was preferred for predicting classes with fewer than 10 training samples, while the cascaded classifiers (with Mixup) showed better predictions for MoAs with more than 10 samples. However, the performance for rarer MoAs still lags behind the performance for frequent MoAs and is not sufficient for the reliable prediction of rare MoAs.
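The Mixup extension mentioned above can be illustrated with a minimal sketch: standard Mixup takes a convex combination of two inputs and of their targets, and with multilabel binary target vectors the mixed target becomes a soft vector in [0, 1], suitable for a per-class BCE loss. This is a generic illustration, not the paper's exact formulation:

```python
import numpy as np

def mixup_multilabel(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Sample a mixing coefficient from a Beta(alpha, alpha) distribution,
    # then form convex combinations of the inputs and the multilabel targets.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2  # soft multilabel target in [0, 1]
    return x, y, lam
```

Each mixed target coordinate is lam where only the first sample has the label, (1 - lam) where only the second has it, and 1 where both do.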
2023
Authors
Silva, D; Agrotis, G; Tan, RB; Teixeira, LF; Silva, W;
Publication
International Conference on Machine Learning and Applications, ICMLA 2023, Jacksonville, FL, USA, December 15-17, 2023
Abstract
Deep Learning models are tremendously valuable in several prediction tasks, and their use in the medical field is spreading rapidly, especially in computer vision tasks that evaluate the content of X-rays, CTs or MRIs. These methods can save a significant amount of time for doctors in patient diagnostics and help in treatment planning. However, these models are significantly sensitive to confounders in the training data and generally suffer a performance drop when dealing with out-of-distribution data, affecting their reliability and scalability across different medical institutions. Deep learning research on medical datasets may overlook essential details regarding the image acquisition procedure and the preprocessing steps. This work proposes a data-centric approach, exploring the potential of attention maps as a regularisation technique to improve robustness and generalisation. We use image metadata and explore self-attention maps and contrastive learning to promote feature space invariance to image disturbance. Experiments were conducted using publicly available chest X-ray datasets. Some datasets contained information about the windowing settings applied by the radiologist, acting as a source of variability. The proposed model was tested and outperformed the baseline on out-of-distribution data, serving as a proof of concept. © 2023 IEEE.
2024
Authors
Campos, F; Petrychenko, L; Teixeira, LF; Silva, W;
Publication
Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024) co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20, 2024.
Abstract
Deep-learning techniques can improve the efficiency of medical diagnosis while challenging human experts’ accuracy. However, the rationale behind these classifiers’ decisions is largely opaque, which is dangerous in sensitive applications such as healthcare. Case-based explanations explain the decision process behind these mechanisms by exemplifying similar cases using previous studies from other patients. Yet, these may contain personally identifiable information, which makes them impossible to share without violating patients’ privacy rights. Previous works have used GANs to generate anonymous case-based explanations, which had limited visual quality. We solve this issue by employing a latent diffusion model in a three-step procedure: generating a catalogue of synthetic images, removing the images that closely resemble existing patients, and using this anonymous catalogue during an explanation retrieval process. We evaluate the proposed method on the MIMIC-CXR-JPG dataset and achieve explanations that simultaneously have high visual quality, are anonymous, and retain their explanatory value.
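The second step of the three-step procedure above, removing synthetic images that closely resemble existing patients, can be sketched as a nearest-neighbour filter in a feature space. This is a generic illustration under the assumption that both sets of images have been embedded into feature vectors; the paper's actual similarity measure and threshold are not specified here:

```python
import numpy as np

def filter_near_duplicates(synthetic_feats, patient_feats, threshold=0.95):
    # L2-normalize both feature sets so dot products are cosine similarities.
    s = synthetic_feats / np.linalg.norm(synthetic_feats, axis=1, keepdims=True)
    p = patient_feats / np.linalg.norm(patient_feats, axis=1, keepdims=True)
    # For each synthetic image, find its highest similarity to any real patient.
    max_sim = (s @ p.T).max(axis=1)
    # Keep only synthetic images that are not too close to any real patient,
    # so the resulting catalogue can be shared without re-identification risk.
    return max_sim < threshold
```

The surviving catalogue entries are then the only candidates considered during explanation retrieval.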
2024
Authors
Vieira, M; Goncalves, T; Silva, W; Sequeira, F;
Publication
BIOSIG 2024 - Proceedings of the 23rd International Conference of the Biometrics Special Interest Group
Abstract
The proliferation of explicit material online, particularly pornography, has emerged as a paramount concern in our society. While state-of-the-art pornography detection models already show some promising results, their decision-making processes are often opaque, raising ethical issues. This study focuses on uncovering the decision-making process of such models, specifically fine-tuned convolutional neural networks and transformer architectures. We compare various explainability techniques to illuminate the limitations, potential improvements, and ethical implications of using these algorithms. Results show that models trained on diverse and dynamic datasets tend to be more robust and generalisable than models trained on static datasets. Additionally, transformer models demonstrate superior performance and generalisation compared to convolutional ones. Furthermore, we implemented a privacy-preserving framework during explanation retrieval, which contributes to developing secure and ethically sound biometric applications. © 2024 IEEE.