
Publications by João Manuel Pedrosa

2024

Lightweight 3D CNN for the Segmentation of Coronary Calcifications and Calcium Scoring

Authors
Santos, R; Baeza, R; Filipe, VM; Renna, F; Paredes, H; Pedrosa, J;

Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024

Abstract
Coronary artery calcium is a good indicator of coronary artery disease and can be used for cardiovascular risk stratification. Over the years, different deep learning approaches have been proposed to automatically segment coronary calcifications in computed tomography scans and measure their extent through calcium scores. However, most methodologies have focused on 2D architectures, which neglect much of the information present in those scans. In this work, we use a 3D convolutional neural network capable of leveraging the 3D nature of computed tomography scans and including more context in the segmentation process. In addition, the selected network is lightweight, which means that we can have 3D convolutions while keeping memory requirements low. Our results show that the predictions of the model, trained on the COCA dataset, are close to the ground truth for the majority of the patients in the test set, obtaining a Dice score of 0.90 ± 0.16 and a Cohen's linearly weighted kappa of 0.88 in Agatston score risk categorization. In conclusion, our approach shows promise in the tasks of segmenting coronary artery calcifications and predicting calcium scores, with the objectives of optimizing clinical workflow and performing cardiovascular risk stratification.
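For illustration only, the sketch below shows the two evaluation quantities the abstract reports: a Dice overlap between binary calcification masks and the mapping of an Agatston score onto the conventional risk bands (0, 1-10, 11-100, 101-400, >400). The function names and category labels are hypothetical, and the cut-offs are the standard Agatston bands rather than values taken from the paper.

```python
# Minimal sketch (not the paper's code): Dice overlap between a predicted and a
# reference calcification mask, and mapping an Agatston score to the commonly
# used risk bands. Cut-offs (0, 1-10, 11-100, 101-400, >400) are the conventional
# Agatston categories, assumed here rather than taken from the paper.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def agatston_risk_category(score: float) -> str:
    """Map an Agatston calcium score to a conventional risk band (labels are illustrative)."""
    if score == 0:
        return "zero"
    if score <= 10:
        return "minimal"
    if score <= 100:
        return "mild"
    if score <= 400:
        return "moderate"
    return "severe"

# Toy usage with small synthetic 3D masks
pred = np.zeros((4, 64, 64), dtype=bool)
ref = np.zeros_like(pred)
pred[1, 20:30, 20:30] = True
ref[1, 22:32, 20:30] = True
print(f"Dice: {dice_score(pred, ref):.3f}")       # overlap of the two toy masks
print(agatston_risk_category(250.0))              # -> 'moderate'
```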

2020

LNDb Dataset

Authors
Pedrosa, J; Aresta, G; Ferreira, CA; Rodrigues, M; Leitão, P; Carvalho, AS; Rebelo, J; Negrão, E; Ramos, I; Cunha, A; Campilho, A;

Publication

Abstract

2022

LNDb Dataset

Authors
Pedrosa, J; Aresta, G; Ferreira, CA; Rodrigues, M; Leitão, P; Carvalho, AS; Rebelo, J; Negrão, E; Ramos, I; Cunha, A; Campilho, A;

Publication

Abstract

2024

MedShapeNet - a large-scale dataset of 3D medical shapes for computer vision

Authors
Li, JN; Zhou, ZW; Yang, JC; Pepe, A; Gsaxner, C; Luijten, G; Qu, CY; Zhang, TZ; Chen, XX; Li, WX; Wodzinski, M; Friedrich, P; Xie, KX; Jin, Y; Ambigapathy, N; Nasca, E; Solak, N; Melito, GM; Vu, VD; Memon, AR; Schlachta, C; De Ribaupierre, S; Patel, R; Eagleson, R; Chen, XJ; Mächler, H; Kirschke, JS; de la Rosa, E; Christ, PF; Li, HB; Ellis, DG; Aizenberg, MR; Gatidis, S; Küstner, T; Shusharina, N; Heller, N; Andrearczyk, V; Depeursinge, A; Hatt, M; Sekuboyina, A; Löffler, MT; Liebl, H; Dorent, R; Vercauteren, T; Shapey, J; Kujawa, A; Cornelissen, S; Langenhuizen, P; Ben-Hamadou, A; Rekik, A; Pujades, S; Boyer, E; Bolelli, F; Grana, C; Lumetti, L; Salehi, H; Ma, J; Zhang, Y; Gharleghi, R; Beier, S; Sowmya, A; Garza-Villarreal, EA; Balducci, T; Angeles-Valdez, D; Souza, R; Rittner, L; Frayne, R; Ji, YF; Ferrari, V; Chatterjee, S; Dubost, F; Schreiber, S; Mattern, H; Speck, O; Haehn, D; John, C; Nürnberger, A; Pedrosa, J; Ferreira, C; Aresta, G; Cunha, A; Campilho, A; Suter, Y; Garcia, J; Lalande, A; Vandenbossche, V; Van Oevelen, A; Duquesne, K; Mekhzoum, H; Vandemeulebroucke, J; Audenaert, E; Krebs, C; van Leeuwen, T; Vereecke, E; Heidemeyer, H; Röhrig, R; Hölzle, F; Badeli, V; Krieger, K; Gunzer, M; Chen, JX; van Meegdenburg, T; Dada, A; Balzer, M; Fragemann, J; Jonske, F; Rempe, M; Malorodov, S; Bahnsen, FH; Seibold, C; Jaus, A; Marinov, Z; Jaeger, PF; Stiefelhagen, R; Santos, AS; Lindo, M; Ferreira, A; Alves, V; Kamp, M; Abourayya, A; Nensa, F; Hörst, F; Brehmer, A; Heine, L; Hanusrichter, Y; Wessling, M; Dudda, M; Podleska, LE; Fink, MA; Keyl, J; Tserpes, K; Kim, MS; Elhabian, S; Lamecker, H; Zukic, D; Paniagua, B; Wachinger, C; Urschler, M; Duong, L; Wasserthal, J; Hoyer, PF; Basu, O; Maal, T; Witjes, MJH; Schiele, G; Chang, TC; Ahmadi, SA; Luo, P; Menze, B; Reyes, M; Deserno, TM; Davatzikos, C; Puladi, B; Fua, P; Yuille, AL; Kleesiek, J; Egger, J;

Publication
BIOMEDICAL ENGINEERING-BIOMEDIZINISCHE TECHNIK

Abstract
Objectives: Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen in the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). However, a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. Methods: We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. We present use cases in classifying brain tumors, skull reconstructions, multi-class anatomy completion, education, and 3D printing. Results: To date, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks, as well as various applications in virtual, augmented, or mixed reality and 3D printing. Conclusions: MedShapeNet contains medical shapes from anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/.
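As a rough illustration of the shape representations the abstract mentions (meshes, point clouds, voxel grids), the sketch below converts a synthetic surface mesh into sampled surface points and an occupancy grid using the generic trimesh library. It is not the MedShapeNet Python API, and the sample count and voxel pitch are arbitrary choices.

```python
# Illustrative sketch only: deriving point-cloud and voxel-grid representations
# from a surface mesh with the generic `trimesh` library. This is NOT the
# MedShapeNet Python API; see https://medshapenet.ikim.nrw/ for the actual interface.
import trimesh

# Stand-in shape; in practice this would be a mesh file (e.g. STL) from a shape dataset
mesh = trimesh.creation.icosphere(subdivisions=3, radius=10.0)

# Point-cloud representation: sample points uniformly from the mesh surface
points = mesh.sample(2048)            # (2048, 3) array of surface points

# Voxel-grid representation: rasterise the surface at a fixed pitch (units assumed)
voxels = mesh.voxelized(pitch=1.0)    # VoxelGrid object
occupancy = voxels.matrix             # boolean 3D occupancy array

print(points.shape, occupancy.shape, occupancy.sum())
```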

2024

Automated Visceral and Subcutaneous Fat Segmentation in Computed Tomography

Authors
Castro, R; Sousa, I; Nunes, F; Mancio, J; Fontes-Carvalho, R; Ferreira, C; Pedrosa, J;

Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024

Abstract
Cardiovascular diseases are the leading cause of death worldwide. While there are a number of cardiovascular risk indicators, recent studies have found a connection between cardiovascular risk and the accumulation and characteristics of visceral adipose tissue in the ventral cavity. The quantification of visceral adipose tissue can be easily performed in computed tomography scans, but the manual delineation of these structures is a time-consuming process subject to variability. This has motivated the development of automatic tools to achieve a faster and more precise solution. This paper explores the use of a U-Net architecture to perform ventral cavity segmentation, followed by threshold-based approaches for visceral and subcutaneous adipose tissue segmentation. Experiments with different learning rates, input image sizes and types of loss functions were carried out to assess the hyperparameters most suited to this problem. In an external test set, the best-performing ventral cavity segmentation model achieved a Dice Score Coefficient of 0.967, while the visceral and subcutaneous adipose tissue segmentations achieved Dice Score Coefficients of 0.986 and 0.995, respectively. Not only are these results competitive with the state of the art, but the interobserver variability measured on this external dataset was also similar, confirming the robustness and reliability of the proposed segmentation.
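The pipeline described combines a learned ventral-cavity mask with Hounsfield-unit thresholding. The sketch below is a minimal illustration of that second, threshold-based step, assuming a cavity mask is already available; the adipose-tissue HU window of -190 to -30 is a commonly used convention, not necessarily the thresholds used in the paper, and all names are hypothetical.

```python
# Minimal sketch under assumptions (not the paper's implementation): given a CT
# image in Hounsfield units, a body mask, and a ventral-cavity mask (the latter
# would come from the U-Net described in the abstract), separate visceral from
# subcutaneous adipose tissue by thresholding. The fat HU window of -190 to -30
# is a commonly used convention, assumed here rather than taken from the paper.
import numpy as np

FAT_HU_MIN, FAT_HU_MAX = -190, -30  # assumed adipose-tissue HU window

def segment_fat(ct_hu: np.ndarray, body_mask: np.ndarray, cavity_mask: np.ndarray):
    """Return (visceral, subcutaneous) boolean masks from HU thresholding."""
    fat = (ct_hu >= FAT_HU_MIN) & (ct_hu <= FAT_HU_MAX)
    visceral = fat & cavity_mask                    # fat inside the ventral cavity
    subcutaneous = fat & body_mask & ~cavity_mask   # fat in the body wall, outside the cavity
    return visceral, subcutaneous

# Toy example: a synthetic 2D "slice" whose values all fall within the fat window
ct = np.full((128, 128), -100.0)
body = np.zeros((128, 128), dtype=bool)
body[16:112, 16:112] = True
cavity = np.zeros_like(body)
cavity[40:88, 40:88] = True
vat, sat = segment_fat(ct, body, cavity)
print(vat.sum(), sat.sum())
```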

2024

DeepClean - Contrastive Learning Towards Quality Assessment in Large-Scale CXR Data Sets

Authors
Pereira, SC; Pedrosa, J; Rocha, J; Sousa, P; Campilho, A; Mendonça, AM;

Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024, Lisbon, Portugal, December 3-6, 2024

Abstract
Large-scale datasets are essential for training deep learning models in medical imaging. However, many of these datasets contain poor-quality images that can compromise model performance and clinical reliability. In this study, we propose a framework to detect non-compliant images, such as corrupted scans, incomplete thorax X-rays, and images of non-thoracic body parts, by leveraging contrastive learning for feature extraction and parametric or non-parametric scoring methods for out-of-distribution ranking. Our approach was developed and tested on the CheXpert dataset, achieving an AUC of 0.75 on a manually labeled subset of 1,000 images, and further qualitatively and visually validated on the external PadChest dataset, where it also performed effectively. Our results demonstrate the potential of contrastive learning to detect non-compliant images in large-scale medical datasets, laying the foundation for future work on reducing dataset pollution and improving the robustness of deep learning models in clinical practice.
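As a minimal sketch of the scoring stage described in the abstract, the example below ranks images for out-of-distribution detection from fixed feature vectors, using a parametric Mahalanobis distance and a non-parametric k-nearest-neighbour distance. The random arrays stand in for embeddings from a contrastive encoder, and none of this is taken from the paper's implementation.

```python
# Minimal sketch under assumptions (not the paper's code): rank images for
# out-of-distribution detection from fixed feature embeddings. The random arrays
# below stand in for embeddings produced by a contrastive encoder (not shown).
# Higher scores suggest more likely non-compliant images.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 128))   # embeddings of presumed-compliant images
test_feats = rng.normal(size=(10, 128))     # embeddings of images to rank

# Parametric scoring: Mahalanobis distance to a Gaussian fit on training features
mean = train_feats.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(train_feats, rowvar=False))
diff = test_feats - mean
maha = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Non-parametric scoring: mean distance to the k nearest training embeddings
knn = NearestNeighbors(n_neighbors=5).fit(train_feats)
dists, _ = knn.kneighbors(test_feats)
knn_score = dists.mean(axis=1)

ranking = np.argsort(-knn_score)  # most out-of-distribution first
print(maha.round(2), ranking)
```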
