Publications

Publications by CTM

2020

Soft Rotation Equivariant Convolutional Neural Networks

Authors
Castro, E; Pereira, JC; Cardoso, JS;

Publication
2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)

Abstract
A key to the generalization ability of Convolutional Neural Networks (CNNs) is the idea that patterns appearing in one region of the image have a high probability of appearing in other regions. This notion also holds for other spatial relationships, such as orientation. Motivated by the fact that, in the early layers of CNNs, distinct filters often encode the same feature at different angles, we propose to incorporate a rotation-equivariance prior into these models. In this work, different regularization strategies that capture the notion of approximate equivariance were designed and quantitatively evaluated, both in their ability to generate rotation-equivariant models and in their effect on the model's capacity to generalize to unseen data. Some of these strategies consistently lead to higher test-set accuracies than a baseline model on classification tasks. We conclude that the rotation-equivariance prior should be adopted in the general setting when modeling visual data.
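
As an illustration of the kind of regularization the abstract describes, the following is a minimal sketch, not the authors' exact formulation, of a soft rotation-equivariance penalty in PyTorch: it compares the features of a 90-degree-rotated input with the rotated features of the original input. The module name conv_block and the weight lambda_eq are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def rotation_equivariance_penalty(conv_block: torch.nn.Module,
                                      x: torch.Tensor) -> torch.Tensor:
        """Penalize the gap between 'rotate then convolve' and 'convolve then rotate'.

        conv_block: any early convolutional stage of the model (illustrative name).
        x: batch of square images, shape (N, C, H, W) with H == W, so a
           90-degree rotation preserves the spatial size.
        """
        feats = conv_block(x)                                          # f(x)
        feats_of_rotated = conv_block(torch.rot90(x, 1, dims=(2, 3)))  # f(r(x))
        rotated_feats = torch.rot90(feats, 1, dims=(2, 3))             # r(f(x))
        return F.mse_loss(feats_of_rotated, rotated_feats)

    # Hypothetical usage inside a training step:
    # loss = task_loss + lambda_eq * rotation_equivariance_penalty(model.conv_block, images)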

2020

Explaining ECG Biometrics: Is It All In The QRS?

Authors
Pinto, JR; Cardoso, JS;

Publication
2020 INTERNATIONAL CONFERENCE OF THE BIOMETRICS SPECIAL INTEREST GROUP (BIOSIG)

Abstract
The literature seems to indicate that the QRS complex is the most important component of the electrocardiogram (ECG) for biometrics. To verify this claim, we use interpretability tools to explain how a convolutional neural network uses ECG signals to identify people, using on-the-person (PTB) and off-the-person (UofTDB) signals. While the QRS complex does appear to be a key feature in ECG biometrics, especially with cleaner signals, the results indicate that, for larger populations in off-the-person settings, the QRS shares relevance with other heartbeat components, which it then becomes essential to locate. These insights indicate that avoiding excessive focus on the QRS complex, by using decision explanations during training, could be useful for model regularisation.
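
As a hedged sketch of the interpretability tools mentioned above, the following computes a plain gradient saliency map for a 1-D ECG identification network. The paper's exact explanation method may differ; model, the output layout, and target_id are assumptions.

    import torch

    def ecg_saliency(model: torch.nn.Module, segment: torch.Tensor, target_id: int) -> torch.Tensor:
        """Return |d score(target_id) / d input| for one single-lead ECG segment.

        segment: shape (1, 1, L); model is assumed to output identity logits of shape (1, num_subjects).
        """
        segment = segment.detach().clone().requires_grad_(True)
        score = model(segment)[0, target_id]   # logit of the claimed identity
        score.backward()
        return segment.grad.abs().squeeze()    # per-sample relevance along the signal

    # Relevance concentrated around the QRS complex would support the claim that the
    # network focuses mostly on the QRS; relevance spread over other waves would not.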

2020

Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms

Authors
Schaffter, T; Buist, DSM; Lee, CI; Nikulin, Y; Ribli, D; Guan, Y; Lotter, W; Jie, Z; Du, H; Wang, S; Feng, J; Feng, M; Kim, HE; Albiol, F; Albiol, A; Morrell, S; Wojna, Z; Ahsen, ME; Asif, U; Jimeno Yepes, A; Yohanandan, S; Rabinovici Cohen, S; Yi, D; Hoff, B; Yu, T; Chaibub Neto, E; Rubin, DL; Lindholm, P; Margolies, LR; McBride, RB; Rothstein, JH; Sieh, W; Ben Ari, R; Harrer, S; Trister, A; Friend, S; Norman, T; Sahiner, B; Strand, F; Guinney, J; Stolovitzky, G; Mackey, L; Cahoon, J; Shen, L; Sohn, JH; Trivedi, H; Shen, Y; Buturovic, L; Pereira, JC; Cardoso, JS; Castro, E; Kalleberg, KT; Pelka, O; Nedjar, I; Geras, KJ; Nensa, F; Goan, E; Koitka, S; Caballero, L; Cox, DD; Krishnaswamy, P; Pandey, G; Friedrich, CM; Perrin, D; Fookes, C; Shi, B; Cardoso Negrie, G; Kawczynski, M; Cho, K; Khoo, CS; Lo, JY; Sorensen, AG; Jung, H;

Publication
JAMA NETWORK OPEN

Abstract
Importance: Mammography screening currently relies on subjective human interpretation. Artificial intelligence (AI) advances could be used to increase mammography screening accuracy by reducing missed cancers and false positives.
Objective: To evaluate whether AI can overcome human mammography interpretation limitations with a rigorous, unbiased evaluation of machine learning algorithms.
Design, Setting, and Participants: In this diagnostic accuracy study conducted between September 2016 and November 2017, an international, crowdsourced challenge was hosted to foster AI algorithm development focused on interpreting screening mammography. More than 1100 participants comprising 126 teams from 44 countries participated. Analysis began November 18, 2016.
Main Outcomes and Measurements: Algorithms used images alone (challenge 1) or combined images, previous examinations (if available), and clinical and demographic risk factor data (challenge 2), and output a score that translated to cancer yes/no within 12 months. Algorithm accuracy for breast cancer detection was evaluated using the area under the curve, and algorithm specificity was compared with radiologists' specificity at the radiologists' sensitivity of 85.9% (United States) and 83.9% (Sweden). An ensemble method aggregating top-performing AI algorithms and radiologists' recall assessment was developed and evaluated.
Results: Overall, 144,231 screening mammograms from 85,580 US women (952 cancer positive ≤ 12 months from screening) were used for algorithm training and validation. A second independent validation cohort included 166,578 examinations from 68,008 Swedish women (780 cancer positive). The top-performing algorithm achieved an area under the curve of 0.858 (United States) and 0.903 (Sweden) and specificity of 66.2% (United States) and 81.2% (Sweden) at the radiologists' sensitivity, lower than community-practice radiologists' specificity of 90.5% (United States) and 98.5% (Sweden). Combining top-performing algorithms and US radiologist assessments resulted in a higher area under the curve of 0.942 and achieved a significantly improved specificity (92.0%) at the same sensitivity.
Conclusions and Relevance: While no single AI algorithm outperformed radiologists, an ensemble of AI algorithms combined with radiologist assessment in a single-reader screening environment improved overall accuracy. This study underscores the potential of using machine learning methods to enhance mammography screening interpretation.
Question: How do deep learning algorithms perform compared with radiologists in screening mammography interpretation?
Findings: In this diagnostic accuracy study using 144,231 screening mammograms from 85,580 women from the United States and 166,578 screening mammograms from 68,008 women from Sweden, no single artificial intelligence algorithm outperformed US community radiologist benchmarks; including clinical data and prior mammograms did not improve artificial intelligence performance. However, combining best-performing artificial intelligence algorithms with single-radiologist assessment demonstrated increased specificity.
Meaning: Integrating artificial intelligence into mammography interpretation in single-radiologist settings could yield significant performance improvements, with the potential to reduce health care system expenditures and address the resource scarcity experienced in population-based screening programs.
This diagnostic accuracy study evaluates whether artificial intelligence can overcome human mammography interpretation limits with a rigorous, unbiased evaluation of machine learning algorithms.
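
The specificity-at-matched-sensitivity comparison described above can be sketched as follows. This is illustrative code rather than the challenge's evaluation harness; y_true, scores, and the 85.9% operating point are placeholders.

    import numpy as np
    from sklearn.metrics import roc_curve

    def specificity_at_sensitivity(y_true: np.ndarray, scores: np.ndarray,
                                   target_sensitivity: float = 0.859) -> float:
        """Specificity of a scoring algorithm at the first threshold reaching the target sensitivity."""
        fpr, tpr, _ = roc_curve(y_true, scores)
        idx = np.argmax(tpr >= target_sensitivity)  # first operating point at or above the target TPR
        return 1.0 - fpr[idx]                       # specificity = 1 - false-positive rate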

2020

Video Summarization through Total Variation, Deep Semi-supervised Autoencoder and Clustering Algorithms

Authors
da Silva, EP; Ramos, EM; da Silva, LT; Cardoso, JS; Giraldi, GA;

Publication
VISAPP: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 4: VISAPP

Abstract
Video summarization is an important tool given the amount of video data to analyze. Techniques in this area aim to yield a synthetic and useful visual abstraction of a video's contents. Hence, in this paper we present a new summarization algorithm, based on image features, composed of the following steps: (i) process the query video using a cosine similarity metric and total variation smoothing to identify classes in the query sequence; (ii) with this result, build a labeled training set of frames; (iii) generate an unlabeled training set composed of samples from the video database; (iv) train a deep semi-supervised autoencoder; (v) compute K-means for each video separately, in the encoder space, with the number of clusters set as a percentage of the video size; (vi) select key-frames within the K-means clusters to define the summaries. In this methodology, the query video is used to incorporate prior knowledge into the whole process through the obtained labeled data. Step (iii) aims to include unknown patterns useful for the summarization process. We evaluate the methodology using videos from the OPV video database and compare the performance of our algorithm with VSum. The results indicate that the pipeline succeeds in the summarization, achieving an F-score superior to VSum's.
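
A minimal sketch of steps (v) and (vi) above, under the assumption that frame_features holds the encoder-space features of one video produced by the trained autoencoder; ratio is an illustrative parameter controlling the number of clusters.

    import numpy as np
    from sklearn.cluster import KMeans

    def select_keyframes(frame_features: np.ndarray, ratio: float = 0.05) -> list:
        """frame_features: (n_frames, d) array; returns key-frame indices in temporal order."""
        n_frames = frame_features.shape[0]
        k = max(1, int(round(ratio * n_frames)))   # number of clusters as a percentage of the video size
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frame_features)
        keyframes = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            dists = np.linalg.norm(frame_features[members] - km.cluster_centers_[c], axis=1)
            keyframes.append(int(members[np.argmin(dists)]))  # frame nearest the cluster centroid
        return sorted(keyframes)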

2020

Correction to: Interpretable and Annotation-Efficient Learning for Medical Image Computing

Authors
Cardoso, JS; Nguyen, HV; Heller, N; Abreu, PH; Isgum, I; Silva, W; Cruz, R; Amorim, JP; Patel, V; Roysam, B; Zhou, SK; Jiang, SB; Le, N; Luu, K; Sznitman, R; Cheplygina, V; Mateus, D; Trucco, E; Sureshjani, SA;

Publication
Interpretable and Annotation-Efficient Learning for Medical Image Computing - Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3ID 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4-8, 2020, Proceedings

Abstract

2020

Interpretability-Guided Content-Based Medical Image Retrieval

Authors
Silva, W; Pollinger, A; Cardoso, JS; Reyes, M;

Publication
Medical Image Computing and Computer Assisted Intervention - MICCAI 2020 - 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part I

Abstract
When encountering a dubious diagnostic case, radiologists typically search public or internal databases for similar cases that would help them in their decision-making process. This search represents a massive burden on their workflow, as it considerably reduces their time to diagnose new cases. It is, therefore, of the utmost importance to replace this intensive manual search with an automatic content-based image retrieval system. However, general content-based image retrieval systems are often not helpful in the context of medical imaging, since they do not consider the fact that relevant information in medical images is typically spatially constricted. In this work, we explore the use of interpretability methods to localize relevant regions of images, leading to more focused feature representations and, therefore, to improved medical image retrieval. As a proof of concept, experiments were conducted using a publicly available chest X-ray dataset, with results showing that the proposed interpretability-guided image retrieval translates the similarity measure of an experienced radiologist better than state-of-the-art image retrieval methods do. Furthermore, it also improves the class-consistency of the top retrieved results and enhances the interpretability of the whole system by accompanying the retrieval with visual explanations.
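
The general idea can be sketched as follows, as a simplification rather than the paper's exact pipeline: re-weight a CNN's spatial features with a saliency map produced by any interpretability method before pooling, then retrieve by cosine similarity. The inputs feature_maps, saliency_map, and db_descs are assumed placeholders.

    import torch
    import torch.nn.functional as F

    def guided_descriptor(feature_maps: torch.Tensor, saliency_map: torch.Tensor) -> torch.Tensor:
        """feature_maps: (C, H, W) CNN activations; saliency_map: (H, W) non-negative relevance scores."""
        weights = saliency_map / (saliency_map.sum() + 1e-8)                 # normalized relevance
        descriptor = (feature_maps * weights.unsqueeze(0)).sum(dim=(1, 2))   # relevance-weighted pooling
        return F.normalize(descriptor, dim=0)                                # unit-length descriptor

    def retrieve(query_desc: torch.Tensor, db_descs: torch.Tensor, top_k: int = 5) -> torch.Tensor:
        """db_descs: (N, C) matrix of unit-length descriptors; returns indices of the most similar cases."""
        sims = db_descs @ query_desc    # cosine similarity, since descriptors are unit length
        return torch.topk(sims, k=top_k).indices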
