2017
Authors
Galdran, Adrian; Alvarez-Gila, Aitor; Meyer, Maria Ines; Saratxaga, Cristina Lopez; Araujo, Teresa; Garrote, Estibaliz; Aresta, Guilherme; Costa, Pedro; Mendonça, Ana Maria; Campilho, Aurelio J.C.;
Publication
CoRR
2017
Authors
Costa, Pedro; Galdran, Adrian; Meyer, Maria Ines; Abràmoff, Michael David; Niemeijer, Meindert; Mendonça, Ana Maria; Campilho, Aurelio;
Publication
CoRR
2023
Authors
Graham, S; Vu, QD; Jahanifar, M; Weigert, M; Schmidt, U; Zhang, W; Zhang, J; Yang, S; Xiang, J; Wang, X; Rumberger, JL; Baumann, E; Hirsch, P; Liu, L; Hong, C; Avilés Rivero, AI; Jain, A; Ahn, H; Hong, Y; Azzuni, H; Xu, M; Yaqub, M; Blache, MC; Piégu, B; Vernay, B; Scherr, T; Böhland, M; Löffler, K; Li, J; Ying, W; Wang, C; Kainmueller, D; Schönlieb, CB; Liu, S; Talsania, D; Meda, Y; Mishra, P; Ridzuan, M; Neumann, O; Schilling, MP; Reischl, M; Mikut, R; Huang, B; Chien, HC; Wang, CP; Lee, CY; Lin, HK; Liu, Z; Pan, X; Han, C; Cheng, J; Dawood, M; Deshpande, S; Saad Bashir, RM; Shephard, A; Costa, P; Nunes, JD; Campilho, A; Cardoso, JS; S, HP; Puthussery, D; G, DR; V, JC; Zhang, Y; Fang, Z; Lin, Z; Zhang, Y; Lin, C; Zhang, L; Mao, L; Wu, M; Vi Vo, TT; Kim, SH; Lee, T; Kondo, S; Kasai, S; Dumbhare, P; Phuse, V; Dubey, Y; Jamthikar, A; Le Vuong, TT; Kwak, JT; Ziaei, D; Jung, H; Miao, T; Snead, DRJ; Ahmed Raza, SE; Minhas, F; Rajpoot, NM;
Publication
CoRR
2022
Authors
Costa, P; Fu, Y; Nunes, J; Campilho, A; Cardoso, JS;
Publication
CoRR
2023
Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonça, AM; Campilho, A;
Publication
2023 IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS)
Abstract
Chest radiography is one of the most ubiquitous medical imaging modalities. Nevertheless, the interpretation of chest radiography images is time-consuming, complex and subject to observer variability. As such, automated diagnosis systems for pathology detection have been proposed, aiming to reduce the burden on radiologists. The advent of deep learning has fostered the development of solutions for abnormality detection with promising results. However, these tools suffer from poor explainability, as the reasons that led to a decision cannot be easily understood, representing a major hurdle for their adoption in clinical practice. In order to overcome this issue, a method for chest radiography abnormality detection is presented which relies on an object detection framework to detect individual findings and thus separate normal and abnormal chest X-rays (CXRs). It is shown that this framework is capable of excellent performance in abnormality detection (AUC: 0.993), outperforming other state-of-the-art classification methodologies (AUC: 0.976 using the same classes). Furthermore, validation on external datasets shows that the proposed framework has a smaller drop in performance when applied to previously unseen data (21.9% vs 23.4% on average). Several approaches for object detection are compared and it is shown that merging pathology classes to minimize radiologist variability improves the localization of abnormal regions (0.529 vs 0.491 APF when using all pathology classes), resulting in a network that is more explainable and thus more suitable for integration in clinical practice.
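To make the classification step concrete, the following is a minimal sketch in Python of how per-finding object detections can be reduced to an image-level normal/abnormal decision, as the abstract describes. This is not the authors' code: the Detection type, the max-score reduction and the 0.5 threshold are illustrative assumptions, and any object detector producing scored boxes could stand behind it.

# Minimal sketch (illustrative assumptions, not the published method) of
# turning per-finding detections into an image-level abnormality decision.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple    # (x1, y1, x2, y2) in pixels
    score: float  # detector confidence for this finding
    label: str    # pathology class (or a single merged "abnormal" class)

def image_abnormality_score(detections: list[Detection]) -> float:
    """Image-level abnormality score: the most confident finding.
    An image with no detected findings scores 0.0 (normal)."""
    return max((d.score for d in detections), default=0.0)

def is_abnormal(detections: list[Detection], threshold: float = 0.5) -> bool:
    # In practice the threshold would be tuned on a validation set.
    return image_abnormality_score(detections) >= threshold

# Example: one low-confidence and one confident finding.
dets = [Detection((10, 10, 50, 50), 0.12, "nodule"),
        Detection((80, 40, 160, 120), 0.91, "opacity")]
print(is_abnormal(dets))  # True

Reducing detections to a single image-level score is one plausible reading of how a detector can "separate normal and abnormal CXRs"; the abstract does not specify the exact reduction used.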
2023
Authors
Rocha, J; Mendonça, AM; Pereira, SC; Campilho, A;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2023, Istanbul, Turkiye, December 5-8, 2023
Abstract
The integration of explanation techniques promotes the comprehension of a model's output and contributes to its interpretation, e.g., by generating heat maps that highlight the most decisive regions for a prediction. However, current heat map-generating methods have several drawbacks. Probability by itself is not indicative of the model's conviction in a prediction, as it is influenced by multiple factors, such as class imbalance. Consequently, it is possible that a model yields two true positive predictions: one with an accurate explanation map, and the other with an inaccurate one. Current state-of-the-art explanations are not able to distinguish between these scenarios and alert the user to dubious explanations. The goal of this work is to represent these maps more intuitively based on how confident the model is regarding the diagnosis, by adding an extra validation step over the state-of-the-art results that indicates whether the user should trust the initial explanation or not. The proposed method, Confident-CAM, facilitates the interpretation of the results by measuring the distance between the output probability and the corresponding class threshold, using a confidence score to generate nearly null maps when the initial explanations are most likely incorrect. This study implements and validates the proposed algorithm on a multi-label chest X-ray classification exercise, targeting 14 radiological findings in the ChestX-Ray14 dataset with significant class imbalance. Results indicate that confidence scores can distinguish likely accurate and inaccurate explanations. Code available via GitHub.
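As a rough illustration of the idea described above, the following Python sketch attenuates an existing explanation map by a confidence score derived from the distance between the output probability and the per-class decision threshold, so that barely-above-threshold predictions yield nearly null maps. This is an assumption-laden reconstruction, not the published Confident-CAM code; in particular, the linear rescaling and the example threshold value are illustrative choices.

# Sketch (assumptions, not the published implementation) of weighting a
# CAM-style map by confidence in the prediction.
import numpy as np

def confidence_score(prob: float, threshold: float) -> float:
    """Map the distance between probability and class threshold to [0, 1]:
    0 at or below the threshold, 1 when maximally above it."""
    if prob <= threshold:
        return 0.0
    return (prob - threshold) / (1.0 - threshold)

def confident_cam(cam: np.ndarray, prob: float, threshold: float) -> np.ndarray:
    # Attenuate the whole map by the confidence score: a prediction barely
    # above its class threshold produces a nearly null explanation.
    return cam * confidence_score(prob, threshold)

# Example with a hypothetical class threshold of 0.2:
cam = np.random.rand(7, 7)                             # any CAM-style map
weak = confident_cam(cam, prob=0.21, threshold=0.2)    # nearly null map
strong = confident_cam(cam, prob=0.95, threshold=0.2)  # mostly preserved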