
Publications by João Manuel Pedrosa

2024

Leveraging Longitudinal Data for Cardiomegaly and Change Detection in Chest Radiography

Authors
Belo, R; Rocha, J; Pedrosa, J;

Publication
PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2023, PT I

Abstract
Chest radiography has been widely used for automatic analysis through deep learning (DL) techniques. In the manual analysis of these scans, however, comparison with images from previous time points is common practice, in order to establish a longitudinal reference. The use of longitudinal information in automatic analysis is not common, but it might provide relevant information for the desired output. In this work, the application of longitudinal information to the detection of cardiomegaly and change in pairs of chest X-ray (CXR) images was studied. Multiple experiments were performed, in which longitudinal information was included at the feature level and at the input level. The impact of aligning the image pairs (through a method developed for this purpose) was also studied. The use of aligned images was shown to improve the final metrics for both the detection of pathology and of change, in comparison to a standard multi-label classifier baseline. The model that uses concatenated image features outperformed the remaining models, with an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.858 for change detection and an AUC of 0.897 for the detection of pathology, showing that pathology features can be used to predict the comparison between images more efficiently. To further improve the developed methods, data augmentation techniques were studied. These showed that increasing the representation of minority classes leads to higher noise in the dataset, and that neglecting the temporal order of the images can be an advantageous augmentation technique in longitudinal change studies.
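The AUC values reported above (0.858 and 0.897) summarize how well each classifier ranks positive cases above negative ones. As a hedged, self-contained illustration of how such a score can be computed, independent of the paper's actual pipeline, the area under the ROC curve equals the Mann-Whitney U probability that a randomly chosen positive scores higher than a randomly chosen negative (the toy labels and scores below are invented):

```python
def auc(labels, scores):
    # Area under the ROC curve via the Mann-Whitney U statistic:
    # the fraction of (positive, negative) pairs where the positive
    # example receives the higher score (ties count as 0.5).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc(labels, scores))  # 0.75
```

In practice a library implementation such as scikit-learn's `roc_auc_score` would be used; the O(n_pos * n_neg) pairwise loop here is only for clarity.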

2016

Cardiac Chamber Volumetric Assessment Using 3D Ultrasound - A Review

Authors
Pedrosa, J; Barbosa, D; Almeida, N; Bernard, O; Bosch, J; D'hooge, J;

Publication
CURRENT PHARMACEUTICAL DESIGN

Abstract
When designing clinical trials to test novel cardiovascular therapies, it is highly relevant to understand what a given technology can provide in terms of information on the physiologic status of the heart and vessels. Ultrasound imaging has traditionally been the modality of choice for studying the cardiovascular system: it has excellent temporal resolution, it operates in real time, it is very widespread and, not unimportantly, it is cheap. Although this modality is mostly known clinically as a two-dimensional technology, it has recently matured into a true three-dimensional imaging technique. In this review paper, an overview is given of the available ultrasound technology for cardiac chamber quantification in terms of volume and function, and evidence is given for why these parameters are of value when testing the effect of new cardiovascular therapies.

2023

DEEPBEAS3D: Deep Learning and B-Spline Explicit Active Surfaces

Authors
Williams, H; Pedrosa, J; Asad, M; Cattani, L; Vercauteren, T; Deprest, J; D'Hooge, J;

Publication
IEEE International Ultrasonics Symposium, IUS

Abstract
Deep learning-based automatic segmentation methods have become state-of-the-art. However, they are often not robust enough for direct clinical application, as domain shifts between training and testing data affect their performance. Failure in automatic segmentation can cause sub-optimal results that require correction. To address these problems, we propose a novel 3D extension of an interactive segmentation framework that represents a segmentation from a convolutional neural network (CNN) as a B-spline explicit active surface (BEAS). BEAS ensures segmentations are smooth in 3D space, increasing anatomical plausibility, while allowing the user to precisely edit the 3D surface. We apply this framework to the task of 3D segmentation of the anal sphincter complex (AS) from transperineal ultrasound (TPUS) images, and compare it to the clinical tool used in the pelvic floor disorder clinic (4D View VOCAL, GE Healthcare; Zipf, Austria). Experimental results show that: 1) the proposed framework gives the user explicit control of the surface contour; 2) the perceived workload, measured via the NASA-TLX index, was reduced by 30% compared to VOCAL; and 3) it required 70% (170 seconds) less user time than VOCAL (p < 0.00001).

2023

MITEA: A dataset for machine learning segmentation of the left ventricle in 3D echocardiography using subject-specific labels from cardiac magnetic resonance imaging

Authors
Zhao, DB; Ferdian, E; Talou, GDM; Quill, GM; Gilbert, K; Wang, VY; Gamage, TPB; Pedrosa, J; D'hooge, J; Sutton, TM; Lowe, BS; Legget, ME; Ruygrok, PN; Doughty, RN; Camara, O; Young, AA; Nash, MP;

Publication
FRONTIERS IN CARDIOVASCULAR MEDICINE

Abstract
Segmentation of the left ventricle (LV) in echocardiography is an important task for the quantification of volume and mass in heart disease. Continuing advances in echocardiography have extended imaging capabilities into the 3D domain, subsequently overcoming the geometric assumptions associated with conventional 2D acquisitions. Nevertheless, the analysis of 3D echocardiography (3DE) poses several challenges associated with limited spatial resolution, poor contrast-to-noise ratio, complex noise characteristics, and image anisotropy. To develop automated methods for 3DE analysis, a sufficiently large, labeled dataset is typically required. However, ground truth segmentations have historically been difficult to obtain due to the high inter-observer variability associated with manual analysis. We address this lack of expert consensus by registering labels derived from higher-resolution subject-specific cardiac magnetic resonance (CMR) images, producing 536 annotated 3DE images from 143 human subjects (10 of which were excluded). This heterogeneous population consists of healthy controls and patients with cardiac disease, across a range of demographics. To demonstrate the utility of such a dataset, a state-of-the-art, self-configuring deep learning network for semantic segmentation was employed for automated 3DE analysis. Using the proposed dataset for training, the network produced measurement biases of -9 ± 16 ml, -1 ± 10 ml, -2 ± 5%, and 5 ± 23 g, for end-diastolic volume, end-systolic volume, ejection fraction, and mass, respectively, outperforming an expert human observer in terms of accuracy as well as scan-rescan reproducibility. As part of the Cardiac Atlas Project, we present here a large, publicly available 3DE dataset with ground truth labels that leverage the higher resolution and contrast of CMR, to provide a new benchmark for automated 3DE analysis. Such an approach not only reduces the effect of observer-specific bias present in manual 3DE annotations, but also enables the development of analysis techniques which exhibit better agreement with CMR compared to conventional methods. This represents an important step for enabling more efficient and accurate diagnostic and prognostic information to be obtained from echocardiography.
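Measurement biases reported as "mean ± spread" (e.g. -9 ± 16 ml for end-diastolic volume above) are conventionally derived from the paired differences between automated and reference measurements. The exact convention used in the paper is not stated here; assuming a Bland-Altman-style analysis with invented toy volumes, a minimal sketch could look like:

```python
import statistics

def bias_and_loa(measured, reference):
    # Bland-Altman-style agreement: mean paired difference (bias)
    # and 95% limits of agreement (bias +/- 1.96 * SD of differences).
    diffs = [m - r for m, r in zip(measured, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical automated vs. reference end-diastolic volumes (ml)
auto = [100.0, 110.0, 120.0]
ref = [105.0, 108.0, 118.0]
bias, (lo, hi) = bias_and_loa(auto, ref)
```

A negative bias, as for the volumes above, would indicate the automated method systematically underestimates relative to the CMR reference.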

2023

Correcting bias in cardiac geometries derived from multimodal images using spatiotemporal mapping

Authors
Zhao, D; Mauger, CA; Gilbert, K; Wang, VY; Quill, GM; Sutton, TM; Lowe, BS; Legget, ME; Ruygrok, PN; Doughty, RN; Pedrosa, J; D'hooge, J; Young, AA; Nash, MP;

Publication
SCIENTIFIC REPORTS

Abstract
Cardiovascular imaging studies provide a multitude of structural and functional data to better understand disease mechanisms. While pooling data across studies enables more powerful and broader applications, performing quantitative comparisons across datasets with varying acquisition or analysis methods is problematic due to inherent measurement biases specific to each protocol. We show how dynamic time warping and partial least squares regression can be applied to effectively map between left ventricular geometries derived from different imaging modalities and analysis protocols to account for such differences. To demonstrate this method, paired real-time 3D echocardiography (3DE) and cardiac magnetic resonance (CMR) sequences from 138 subjects were used to construct a mapping function between the two modalities to correct for biases in left ventricular clinical cardiac indices, as well as regional shape. Leave-one-out cross-validation revealed a significant reduction in mean bias, narrower limits of agreement, and higher intraclass correlation coefficients for all functional indices between CMR and 3DE geometries after spatiotemporal mapping. Meanwhile, average root mean squared errors between surface coordinates of 3DE and CMR geometries across the cardiac cycle decreased from 7 ± 1 to 4 ± 1 mm for the total study population. Our generalised method for mapping between time-varying cardiac geometries obtained using different acquisition and analysis protocols enables the pooling of data between modalities and the potential for smaller studies to leverage large population databases for quantitative comparisons.
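The mapping above pairs dynamic time warping (DTW) with partial least squares regression over full left ventricular geometries; as a toy illustration of the DTW component alone (on 1-D sequences, entirely separate from the paper's implementation), the classic dynamic-programming recurrence aligns two sequences of different lengths by allowing repeats and skips:

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    # Classic O(len(a) * len(b)) dynamic-programming DTW:
    # cost[i][j] holds the cheapest alignment of a[:i] with b[:j].
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            step = dist(a[i - 1], b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # a advances
                                    cost[i][j - 1],      # b advances
                                    cost[i - 1][j - 1])  # both advance
    return cost[len(a)][len(b)]

print(dtw([1, 2, 3], [1, 1, 2, 3]))  # 0.0 (perfect warp despite lengths)
```

In the cardiac setting, each sequence element would be a whole mesh at one cardiac phase and `dist` a shape distance, so that DTW compensates for differing temporal sampling between 3DE and CMR before the regression step.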

2023

Automatic Contrast Generation from Contrastless Computed Tomography

Authors
Domingues, R; Nunes, F; Mancio, J; Fontes Carvalho, R; Coimbra, M; Pedrosa, J; Renna, F;

Publication
2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY, EMBC

Abstract
The use of contrast-enhanced computed tomography (CTCA) for the detection of coronary artery disease (CAD) exposes patients to the risks of iodine contrast agents and excessive radiation, and increases scanning time and healthcare costs. Deep learning generative models have the potential to artificially create a pseudo-enhanced image from non-contrast computed tomography (CT) scans. In this work, two specific generative adversarial network (GAN) models - the Pix2Pix-GAN and the Cycle-GAN - were tested with paired non-contrast CT and CTCA scans from a private and a public dataset. Furthermore, an exploratory analysis of the trade-offs of using 2D and 3D inputs and architectures was performed. Using only the Structural Similarity Index Measure (SSIM) and the Peak Signal-to-Noise Ratio (PSNR), it could be concluded that the Pix2Pix-GAN using 2D data reached better results, with 0.492 SSIM and 16.375 dB PSNR. However, visual analysis of the output shows significant blur in the generated images, which is not the case for the Cycle-GAN models. This behavior can be captured by the Fréchet Inception Distance (FID), a fundamental performance metric that is usually not considered by related works in the literature.
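Of the two reconstruction metrics above, PSNR is the simpler: it is a log-scaled ratio of the maximum pixel value to the mean squared error between the generated and reference images (SSIM additionally requires windowed local statistics, so it is not sketched here). A minimal, library-free illustration on flattened pixel lists, unrelated to the paper's evaluation code:

```python
import math

def psnr(ref, gen, max_val=255.0):
    # Peak signal-to-noise ratio in dB between two equally sized
    # images given as flat pixel sequences; higher means closer.
    mse = sum((a - b) ** 2 for a, b in zip(ref, gen)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

ref = [0, 0, 255, 255]   # toy reference pixels
gen = [0, 10, 245, 255]  # toy generated pixels
print(round(psnr(ref, gen), 2))  # 31.14
```

A caveat the abstract itself raises: pixel-wise scores like PSNR reward blurry averages, which is why distribution-level metrics such as FID can rank the sharper Cycle-GAN outputs higher.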
