2020
Authors
Goncalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;
Publication
HEALTH AND TECHNOLOGY
Abstract
The implementation of routine breast cancer screening and better treatment strategies made it possible to offer the majority of women the option of breast conservation instead of a mastectomy. The most important aim of breast cancer conservative treatment (BCCT) is to optimize the aesthetic outcome and, implicitly, quality of life (QoL), without jeopardizing local cancer control and overall survival. Given the impact the aesthetic outcome has on QoL, there has been an effort to define an optimal tool capable of performing this type of evaluation. Moving from the classical subjective aesthetic evaluation of BCCT (performed either by the patient herself or by a group of clinicians through questionnaires) to an objective aesthetic evaluation (where machine learning and computer vision methods are employed) reduces variability and increases the reproducibility of results. Currently, there are offline software applications available, such as BAT© and BCCT.core, which perform the assessment based on asymmetry measurements computed from semi-automatically annotated keypoints. In the literature, one can find algorithms that attempt fully automatic keypoint annotation with reasonable success; however, these algorithms are very time-consuming. As research moves increasingly towards web-based software applications, such time-consuming tasks are undesirable. In this work, we propose a novel approach to the keypoint detection task that treats the problem, in part, as image segmentation. This novel approach can improve both execution time and results.
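The abstract gives no implementation details; purely as an illustration of the general idea of casting keypoint detection as a segmentation-style problem, the sketch below (hypothetical PyTorch code, not the authors' implementation) predicts one heatmap per keypoint with a small fully convolutional network and reads each keypoint coordinate off as the argmax of its map.

```python
# Minimal sketch (not the authors' implementation): keypoint detection cast as
# segmentation/heatmap prediction. A fully convolutional network outputs one
# map per keypoint; the coordinate is read out as the argmax of each map.
import torch
import torch.nn as nn


class HeatmapKeypointNet(nn.Module):
    def __init__(self, num_keypoints: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One output channel per keypoint, kept at input resolution.
        self.head = nn.Conv2d(64, num_keypoints, 1)

    def forward(self, x):
        return self.head(self.encoder(x))  # (B, K, H, W) heatmaps


def heatmaps_to_keypoints(heatmaps: torch.Tensor) -> torch.Tensor:
    """Convert (B, K, H, W) heatmaps to (B, K, 2) pixel coordinates (x, y)."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1).argmax(dim=-1)
    return torch.stack((flat % w, flat // w), dim=-1)


if __name__ == "__main__":
    model = HeatmapKeypointNet(num_keypoints=4)
    images = torch.randn(2, 3, 128, 128)           # dummy batch
    coords = heatmaps_to_keypoints(model(images))  # (2, 4, 2)
    print(coords.shape)
```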
2020
Authors
Sequeira, AF; Silva, W; Pinto, JR; Goncalves, T; Cardoso, JS;
Publication
2020 8TH INTERNATIONAL WORKSHOP ON BIOMETRICS AND FORENSICS (IWBF 2020)
Abstract
Presentation attack detection (PAD) methods are commonly evaluated using metrics based on the predicted labels. This is a limitation, especially for more elusive methods based on deep learning, which can freely learn the most suitable features. Though often more accurate, these models operate as complex black boxes, leaving the inner processes that sustain their predictions opaque. Interpretability tools are now being used to delve deeper into the operation of machine learning methods, especially artificial neural networks, to better understand how they reach their decisions. In this paper, we make a case for the integration of interpretability tools in the evaluation of PAD. A simple model for face PAD, based on convolutional neural networks, was implemented and evaluated using both traditional metrics (APCER, BPCER and EER) and interpretability tools (Grad-CAM), using data from the ROSE-Youtu video collection. The results show that interpretability tools can capture more completely the intricate behavior of the implemented model and enable the identification of certain properties that should be verified by a PAD method that is robust, coherent, meaningful, and able to generalize adequately to unseen data and attacks. One can conclude that, with further effort devoted towards higher objectivity in interpretability, this can be the key to obtaining deeper and more thorough PAD performance evaluation setups.
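For reference, the traditional PAD metrics named in the abstract have standard definitions: APCER is the proportion of attack presentations accepted as bona fide, BPCER the proportion of bona fide presentations rejected, and the EER the operating point where the two meet. A minimal, illustrative computation (assuming higher scores mean "bona fide"; not the paper's code) could look like this:

```python
# Minimal sketch (illustrative only): APCER, BPCER and EER computed from
# bona fide / attack scores. Assumes higher score => more likely bona fide.
import numpy as np


def apcer_bpcer(attack_scores, bonafide_scores, threshold):
    """APCER: attacks accepted as bona fide; BPCER: bona fide rejected."""
    apcer = float(np.mean(np.asarray(attack_scores) >= threshold))
    bpcer = float(np.mean(np.asarray(bonafide_scores) < threshold))
    return apcer, bpcer


def eer(attack_scores, bonafide_scores, num_thresholds=1000):
    """Equal Error Rate: sweep the threshold until APCER and BPCER meet."""
    scores = np.concatenate([attack_scores, bonafide_scores])
    best_gap, best_eer = float("inf"), None
    for t in np.linspace(scores.min(), scores.max(), num_thresholds):
        a, b = apcer_bpcer(attack_scores, bonafide_scores, t)
        if abs(a - b) < best_gap:
            best_gap, best_eer = abs(a - b), (a + b) / 2
    return best_eer


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attacks = rng.normal(0.3, 0.1, 500)    # dummy attack scores
    bonafide = rng.normal(0.7, 0.1, 500)   # dummy bona fide scores
    print(apcer_bpcer(attacks, bonafide, 0.5), eer(attacks, bonafide))
```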
2020
Authors
Gonçalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;
Publication
Proceedings
Abstract
2020
Authors
Silva, W; Pollinger, A; Cardoso, JS; Reyes, M;
Publication
Medical Image Computing and Computer Assisted Intervention - MICCAI 2020 - 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part I
Abstract
When encountering a dubious diagnostic case, radiologists typically search in public or internal databases for similar cases that would help them in their decision-making process. This search represents a massive burden to their workflow, as it considerably reduces their time to diagnose new cases. It is, therefore, of utter importance to replace this manual intensive search with an automatic content-based image retrieval system. However, general content-based image retrieval systems are often not helpful in the context of medical imaging since they do not consider the fact that relevant information in medical images is typically spatially constricted. In this work, we explore the use of interpretability methods to localize relevant regions of images, leading to more focused feature representations, and, therefore, to improved medical image retrieval. As a proof-of-concept, experiments were conducted using a publicly available Chest X-ray dataset, with results showing that the proposed interpretability-guided image retrieval translates better the similarity measure of an experienced radiologist than state-of-the-art image retrieval methods. Furthermore, it also improves the class-consistency of top retrieved results, and enhances the interpretability of the whole system, by accompanying the retrieval with visual explanations. © Springer Nature Switzerland AG 2020.
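As a rough sketch of how interpretability can guide retrieval (an assumed pipeline for illustration, not the paper's exact method), a saliency map can be used to weight the spatial locations of CNN feature maps before pooling, and the resulting descriptors ranked by cosine similarity:

```python
# Minimal sketch (assumed pipeline, not the paper's exact method): a saliency
# map from an interpretability method spatially weights CNN feature maps
# before pooling; retrieval ranks the database by cosine similarity.
import torch
import torch.nn.functional as F


def saliency_weighted_descriptor(feature_maps: torch.Tensor,
                                 saliency: torch.Tensor) -> torch.Tensor:
    """feature_maps: (C, H, W); saliency: (H, W) in [0, 1] -> (C,) descriptor."""
    weighted = feature_maps * saliency.unsqueeze(0)   # focus on relevant regions
    descriptor = weighted.flatten(1).mean(dim=1)      # saliency-weighted pooling
    return F.normalize(descriptor, dim=0)


def retrieve(query: torch.Tensor, database: torch.Tensor, k: int = 5):
    """query: (C,), database: (N, C) of normalized descriptors -> top-k indices."""
    similarities = database @ query                   # cosine similarity
    return similarities.topk(k).indices


if __name__ == "__main__":
    feats = torch.randn(256, 16, 16)                  # dummy CNN feature maps
    sal = torch.rand(16, 16)                          # dummy saliency map
    q = saliency_weighted_descriptor(feats, sal)
    db = F.normalize(torch.randn(100, 256), dim=1)    # dummy image database
    print(retrieve(q, db, k=5))
```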
2021
Authors
Montenegro, H; Silva, W; Cardoso, JS;
Publication
IEEE ACCESS
Abstract
Although Deep Learning models have achieved impressive results in medical image classification tasks, their lack of interpretability hinders their deployment in the clinical context. Case-based interpretability provides intuitive explanations, as it is a much more human-like approach than saliency-map-based interpretability. Nonetheless, since one is dealing with sensitive visual data, there is a high risk of exposing personal identity, threatening the individuals' privacy. In this work, we propose a privacy-preserving generative adversarial network for the privatization of case-based explanations. We address the weaknesses of current privacy-preserving methods for visual data from three perspectives: realism, privacy, and explanatory value. We also introduce a counterfactual module in our Generative Adversarial Network that provides counterfactual case-based explanations in addition to standard factual explanations. Experiments were performed on a biometric and a medical dataset, demonstrating the network's potential to preserve the privacy of all subjects and retain its explanatory evidence while also maintaining a decent level of intelligibility.
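Very loosely, the three perspectives mentioned above can be thought of as three competing terms in the generator objective. The following heavily simplified sketch (hypothetical networks, losses and weights, not the paper's actual formulation) balances realism, identity privacy, and preservation of the task-relevant, explanatory content:

```python
# Heavily simplified sketch (hypothetical, not the paper's loss): a generator
# objective balancing realism (fool a discriminator), privacy (confuse an
# identity classifier) and explanatory value (preserve the task prediction).
import torch
import torch.nn.functional as F


def generator_loss(discriminator, identity_net, task_net,
                   original, privatized,
                   w_real=1.0, w_priv=1.0, w_expl=1.0):
    # Realism: the privatized image should look real to the discriminator.
    d_out = discriminator(privatized)
    realism = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    # Privacy: push identity predictions towards a uniform distribution
    # (cross-entropy against a uniform target over identities).
    privacy = -torch.mean(torch.log_softmax(identity_net(privatized), dim=1))

    # Explanatory value: keep the task classifier's prediction unchanged.
    explanation = F.kl_div(torch.log_softmax(task_net(privatized), dim=1),
                           torch.softmax(task_net(original), dim=1),
                           reduction="batchmean")

    return w_real * realism + w_priv * privacy + w_expl * explanation
```

In practice the relative weights control the trade-off between how anonymized the explanation looks and how much diagnostic evidence it keeps; the sketch only illustrates that trade-off, not the counterfactual module.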
2021
Authors
Sequeira, AF; Goncalves, T; Silva, W; Pinto, JR; Cardoso, JS;
Publication
IET BIOMETRICS
Abstract
Biometric recognition and presentation attack detection (PAD) methods strongly rely on deep learning algorithms. Though often more accurate, these models operate as complex black boxes. Interpretability tools are now being used to delve deeper into the operation of these methods, which is why this work advocates their integration in the PAD scenario. Building upon previous work, a face PAD model based on convolutional neural networks was implemented and evaluated both through traditional PAD metrics and with interpretability tools. The stability of the explanations obtained when testing models against attacks that were known or unknown during the learning step is also evaluated. To overcome the limitations of direct comparison, a suitable representation of the explanations is constructed to quantify how much two explanations differ from each other. From the point of view of interpretability, the results obtained in intra- and inter-class comparisons led to the conclusion that the presence of more attacks during training has a positive effect on the generalisation and robustness of the models. This is an exploratory study that confirms the urgency of establishing new approaches in biometrics that incorporate interpretability tools. Moreover, there is a need for methodologies to assess and compare the quality of explanations.
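As one possible illustration of comparing explanations beyond direct pixel-wise comparison (an assumed representation, not necessarily the one used in the paper), saliency maps can be reduced to a normalised fixed-size grid so that their difference becomes a single number:

```python
# Minimal sketch (assumed approach): two saliency-map explanations are reduced
# to a normalised grid representation so their difference can be quantified.
import numpy as np


def explanation_vector(saliency: np.ndarray, grid: int = 8) -> np.ndarray:
    """Downsample a (H, W) saliency map to a grid x grid vector summing to 1."""
    h, w = saliency.shape
    cells = saliency[:h - h % grid, :w - w % grid].reshape(
        grid, (h - h % grid) // grid, grid, (w - w % grid) // grid).mean(axis=(1, 3))
    cells = np.clip(cells, 0, None)
    return (cells / (cells.sum() + 1e-8)).ravel()


def explanation_distance(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Quantify how much two explanations differ (L1 distance of grid vectors)."""
    return float(np.abs(explanation_vector(map_a) - explanation_vector(map_b)).sum())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cam_known = rng.random((224, 224))    # dummy Grad-CAM for a known attack
    cam_unknown = rng.random((224, 224))  # dummy Grad-CAM for an unseen attack
    print(explanation_distance(cam_known, cam_unknown))
```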