Publications by Miguel Coimbra

2023

Cross-Domain Detection of Pulmonary Hypertension in Human and Porcine Heart Sounds

Authors
Gaudio, A; Giordano, N; Coimbra, MT; Kjaergaard, B; Schmidt, SE; Renna, F;

Publication
Computing in Cardiology, CinC 2023, Atlanta, GA, USA, October 1-4, 2023

Abstract

2023

Diagnostic Performance of Deep Learning Models for Gastric Intestinal Metaplasia Detection in Narrow-band Images

Authors
Martins, ML; Pedroso, M; Libânio, D; Dinis Ribeiro, M; Coimbra, M; Renna, F;

Publication
2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC

Abstract
Gastric Intestinal Metaplasia (GIM) is one of the precancerous conditions in the gastric carcinogenesis cascade, and its optical diagnosis during endoscopic screening is challenging even for seasoned endoscopists. Several solutions leveraging pre-trained deep neural networks (DNNs) have recently been proposed to assist human diagnosis. In this paper, we present a comparative study of these architectures on a new dataset containing GIM and non-GIM Narrow-band imaging still frames. We find that the surveyed DNNs perform remarkably well on average, but still exhibit sizeable inter-fold variability during cross-validation. An additional ad-hoc analysis suggests that these baseline architectures may not perform equally well at all scales when diagnosing GIM.
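
As a rough illustration of the comparative setup described in this abstract (not the authors' code), the sketch below fine-tunes an ImageNet-pretrained backbone for binary GIM vs. non-GIM classification and evaluates it with stratified cross-validation; the dataset, labels, and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch, assuming PyTorch/torchvision and scikit-learn; not the paper's pipeline.
import numpy as np
import torch.nn as nn
from torchvision import models
from sklearn.model_selection import StratifiedKFold

def build_gim_classifier(num_classes: int = 2) -> nn.Module:
    # ImageNet-pretrained backbone with its classification head replaced.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Placeholder GIM / non-GIM labels; in practice these come from the NBI frames.
labels = np.random.randint(0, 2, size=100)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros_like(labels), labels)):
    model = build_gim_classifier()
    # ... fine-tune on train_idx frames and evaluate on val_idx frames ...
    # Reporting metrics per fold, rather than only their mean, exposes the
    # inter-fold variability discussed above.
```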

2023

On the Impact of Synchronous Electrocardiogram Signals for Heart Sounds Segmentation

Authors
Silva, A; Teixeira, R; Fontes Carvalho, R; Coimbra, M; Renna, F;

Publication
2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC

Abstract
In this paper, we study the heart sound segmentation problem using deep neural networks. The impact of electrocardiogram (ECG) signals available in addition to phonocardiogram (PCG) signals is evaluated. To incorporate ECG, two different models are considered, both built upon a 1D U-net: an early fusion model that fuses ECG at an early processing stage, and a late fusion model that averages the probabilities obtained by two networks applied independently to PCG and ECG data. Results show that, in contrast with traditional uses of ECG for PCG gating, early fusion of PCG and ECG information can provide more robust heart sound segmentation. As a proof of concept, we use the publicly available PhysioNet dataset. Validation results provide, on average, a sensitivity of 97.2%, 94.5%, and 95.6% and a Positive Predictive Value of 97.5%, 96.2%, and 96.1% for the early-fusion, late-fusion, and unimodal (PCG only) models, respectively, showing the advantage of combining both signals at early stages to segment heart sounds.
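
For clarity, the snippet below sketches the two fusion strategies described in this abstract; it is not the paper's architecture, and `SegmentationNet1D` is a toy stand-in for the 1D U-net.

```python
# Hedged sketch of early vs. late fusion for PCG+ECG segmentation (assumes PyTorch).
import torch
import torch.nn as nn

class SegmentationNet1D(nn.Module):
    # Toy 1D encoder-decoder; the paper uses a deeper 1D U-net.
    def __init__(self, in_channels: int, num_states: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, num_states, kernel_size=9, padding=4),
        )

    def forward(self, x):       # x: (batch, channels, time)
        return self.net(x)      # per-sample logits over heart-sound states

# Early fusion: PCG and ECG enter one network as two input channels.
early_fusion = SegmentationNet1D(in_channels=2)

# Late fusion: two unimodal networks whose per-state probabilities are averaged.
pcg_net, ecg_net = SegmentationNet1D(1), SegmentationNet1D(1)

def late_fusion(pcg, ecg):
    p_pcg = torch.softmax(pcg_net(pcg), dim=1)
    p_ecg = torch.softmax(ecg_net(ecg), dim=1)
    return 0.5 * (p_pcg + p_ecg)
```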

2023

Automatic Contrast Generation from Contrastless Computed Tomography

Authors
Domingues, R; Nunes, F; Mancio, J; Fontes Carvalho, R; Coimbra, M; Pedrosa, J; Renna, F;

Publication
2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC

Abstract
The use of contrast-enhanced computed tomography (CTCA) for the detection of coronary artery disease (CAD) exposes patients to the risks of iodine contrast agents and excessive radiation, and increases scanning time and healthcare costs. Deep learning generative models have the potential to artificially create a pseudo-enhanced image from non-contrast computed tomography (CT) scans. In this work, two specific generative adversarial network (GAN) models, the Pix2Pix-GAN and the Cycle-GAN, were tested with paired non-contrasted CT and CTCA scans from a private and a public dataset. Furthermore, an exploratory analysis of the trade-off between 2D and 3D inputs and architectures was performed. Considering only the Structural Similarity Index Measure (SSIM) and the Peak Signal-to-Noise Ratio (PSNR), the Pix2Pix-GAN using 2D data reached the best results, with 0.492 SSIM and 16.375 dB PSNR. However, visual analysis of the output shows significant blur in the generated images, which is not the case for the Cycle-GAN models. This behavior can be captured by evaluating the Fréchet Inception Distance (FID), a fundamental performance metric that is usually not considered by related works in the literature.
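
The SSIM and PSNR figures quoted above can, in principle, be computed with standard image-quality routines; the snippet below is a minimal sketch using scikit-image (not the authors' evaluation code), and FID is omitted because it additionally requires an Inception feature extractor.

```python
# Hedged sketch of slice-level SSIM/PSNR evaluation; inputs are placeholder arrays.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_slice(generated: np.ndarray, reference: np.ndarray) -> dict:
    # Both inputs are 2D CT slices scaled to a common intensity range.
    data_range = reference.max() - reference.min()
    return {
        "ssim": structural_similarity(reference, generated, data_range=data_range),
        "psnr": peak_signal_noise_ratio(reference, generated, data_range=data_range),
    }

# Toy usage with synthetic data in place of real CTCA / generated slices.
ref = np.random.rand(128, 128)
gen = ref + 0.05 * np.random.randn(128, 128)
print(evaluate_slice(gen, ref))
```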

2024

Separation of the Aortic and Pulmonary Components of the Second Heart Sound via Alternating Optimization

Authors
Renna, F; Gaudio, A; Mattos, S; Plumbley, MD; Coimbra, MT;

Publication
IEEE Access

Abstract
An algorithm for blind source separation (BSS) of the second heart sound (S2) into aortic and pulmonary components is proposed. It recovers the aortic (A2) and pulmonary (P2) waveforms, as well as their relative delays, by solving an alternating optimization problem on the set of S2 sounds, without the use of auxiliary ECG or respiration-phase measurement data. This unsupervised and data-driven approach assumes that the A2 and P2 components maintain the same waveform across heartbeats and that the relative delay between the onsets of the components varies with respiration phase. The proposed approach is applied to synthetic heart sounds and to real-world heart sounds from 43 patients. It improves over two state-of-the-art BSS approaches by 10% in normalized root mean-squared error when reconstructing the aortic and pulmonary components of synthetic heart sounds, demonstrates robustness to noise, and recovers splitting delays. The detection of pulmonary hypertension (PH) in a Brazilian population is demonstrated by training a classifier on three scalar features from the recovered A2 and P2 waveforms, yielding an auROC of 0.76.
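
To convey the alternating-optimization idea only, the toy sketch below models every S2 beat as a shared A2 waveform plus a shared P2 waveform shifted by a beat-dependent splitting delay; the initialization, boundary handling, and convergence checks are simplified, and it should not be read as the published algorithm.

```python
# Toy illustration of alternating optimization for S2 separation (NumPy only).
import numpy as np

def separate_s2(beats: np.ndarray, max_delay: int, n_iter: int = 25):
    """beats: array of shape (n_beats, n_samples), S2 segments aligned at A2 onset."""
    a2 = beats.mean(axis=0)          # crude initial aortic estimate
    p2 = beats[0] - a2               # crude initial pulmonary estimate
    delays = np.zeros(len(beats), dtype=int)
    for _ in range(n_iter):
        # Delay step: for each beat, choose the P2 shift that best explains the residual.
        for i, r in enumerate(beats - a2):
            errs = [np.sum((r - np.roll(p2, d)) ** 2) for d in range(max_delay)]
            delays[i] = int(np.argmin(errs))
        # Waveform step: re-estimate P2 and A2 given the current delays.
        p2 = np.mean([np.roll(beats[i] - a2, -delays[i]) for i in range(len(beats))], axis=0)
        a2 = np.mean([beats[i] - np.roll(p2, delays[i]) for i in range(len(beats))], axis=0)
    return a2, p2, delays
```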
