Details

  • Name

    Francesco Renna
  • Since

    01 June 2020
  • Nationality

    Italy
  • Contacts

    +351222094000
    francesco.renna@inesctec.pt
Publications

2024

Separation of the Aortic and Pulmonary Components of the Second Heart Sound via Alternating Optimization

Authors
Renna, F; Gaudio, A; Mattos, S; Plumbley, MD; Coimbra, MT;

Publication
IEEE ACCESS

Abstract
An algorithm for blind source separation (BSS) of the second heart sound (S2) into aortic and pulmonary components is proposed. It recovers aortic (A2) and pulmonary (P2) waveforms, as well as their relative delays, by solving an alternating optimization problem on the set of S2 sounds, without the use of auxiliary ECG or respiration phase measurement data. This unsupervised and data-driven approach assumes that the A2 and P2 components maintain the same waveform across heartbeats and that the relative delay between the onsets of the components varies with respiration phase. The proposed approach is applied to synthetic heart sounds and to real-world heart sounds from 43 patients. It improves over two state-of-the-art BSS approaches by 10% in normalized root mean-squared error in the reconstruction of aortic and pulmonary components from synthetic heart sounds, demonstrates robustness to noise, and recovers the splitting delays. The detection of pulmonary hypertension (PH) in a Brazilian population is demonstrated by training a classifier on three scalar features from the recovered A2 and P2 waveforms, yielding an auROC of 0.76.
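
As a rough illustration of the alternating-optimization idea described above (fix the waveform estimates to update the per-beat delays, then fix the delays to update the shared waveforms), the Python sketch below uses cross-correlation for delay estimation and averaging for waveform re-estimation. It is a toy simplification, not the authors' implementation; the initialization, circular shifts, and the function name `separate_s2` are assumptions.

```python
import numpy as np

def separate_s2(s2_beats, n_iters=20):
    """Toy alternating optimization for A2/P2 separation.

    s2_beats: (n_beats, n_samples) array of time-aligned S2 segments.
    Assumes fixed A2 and P2 waveforms across beats and a beat-dependent
    relative delay; the circular shifts and crude initialization below
    are simplifications for illustration only.
    """
    n_beats, n_samples = s2_beats.shape
    a2 = s2_beats.mean(axis=0)      # crude initial aortic estimate
    p2 = s2_beats[0] - a2           # crude initial pulmonary estimate
    delays = np.zeros(n_beats, dtype=int)

    for _ in range(n_iters):
        # Step 1: with A2 and P2 fixed, re-estimate each beat's P2 delay
        # by cross-correlating the residual (beat - A2) with P2.
        for i, beat in enumerate(s2_beats):
            corr = np.correlate(beat - a2, p2, mode="full")
            delays[i] = corr.argmax() - (n_samples - 1)

        # Step 2: with the delays fixed, re-estimate the shared waveforms.
        p2 = np.mean([np.roll(beat - a2, -d) for beat, d in zip(s2_beats, delays)], axis=0)
        a2 = np.mean([beat - np.roll(p2, d) for beat, d in zip(s2_beats, delays)], axis=0)

    return a2, p2, delays
```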

2024

Diffusion Model for Generating Synthetic Contrast Enhanced CT from Non-Enhanced Heart Axial CT Images

Authors
Ferreira V.R.S.; de Paiva A.C.; Silva A.C.; de Almeida J.D.S.; Junior G.B.; Renna F.;

Publication
International Conference on Enterprise Information Systems, ICEIS - Proceedings

Abstract
This work proposes a deep learning-based adversarial diffusion model to translate non-contrast-enhanced computed tomography (CT) images of the heart into contrast-enhanced images. The study overcomes challenges in medical image translation by combining concepts from generative adversarial networks (GANs) and diffusion models. Results were evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) to demonstrate the model's effectiveness in generating contrast images while preserving quality and visual similarity. Despite these successes, root mean square error (RMSE) analysis indicates persistent challenges, highlighting the need for continuous improvement. The intersection of GANs and diffusion models promises future advancements, significantly contributing to clinical practice. A comparison of the CyTran, CycleGAN, and Pix2Pix networks with the proposed model indicates directions for improvement.
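
PSNR and SSIM, the two quality metrics reported above, are standard image-comparison measures. The sketch below shows how they could be computed with scikit-image; the function `evaluate_translation` and the placeholder images are illustrative assumptions, not material from the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_translation(real_ct: np.ndarray, synthetic_ct: np.ndarray) -> dict:
    """Compare a real contrast-enhanced slice with a synthetic one.

    Both inputs are 2-D arrays scaled to [0, 1]; this mirrors the kind of
    PSNR/SSIM evaluation reported in the abstract, not the model itself.
    """
    psnr = peak_signal_noise_ratio(real_ct, synthetic_ct, data_range=1.0)
    ssim = structural_similarity(real_ct, synthetic_ct, data_range=1.0)
    return {"PSNR": psnr, "SSIM": ssim}

# Usage with random placeholder images (stand-ins for CT slices).
real = np.random.rand(256, 256)
fake = np.clip(real + 0.05 * np.random.randn(256, 256), 0.0, 1.0)
print(evaluate_translation(real, fake))
```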

2023

Beyond Heart Murmur Detection: Automatic Murmur Grading From Phonocardiogram

Authors
Elola, A; Aramendi, E; Oliveira, J; Renna, F; Coimbra, MT; Reyna, MA; Sameni, R; Clifford, GD; Rad, AB;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Objective: Murmurs are abnormal heart sounds, identified by experts through cardiac auscultation. The murmur grade, a quantitative measure of the murmur intensity, is strongly correlated with the patient's clinical condition. This work aims to estimate each patient's murmur grade (i.e., absent, soft, loud) from phonocardiograms (PCGs) recorded at multiple auscultation locations in a large population of pediatric patients from a low-resource rural area. Methods: The Mel spectrogram representation of each PCG recording is given to an ensemble of 15 convolutional residual neural networks with channel-wise attention mechanisms to classify each PCG recording. The final murmur grade for each patient is derived based on the proposed decision rule and considering all estimated labels for available recordings. The proposed method is cross-validated on a dataset consisting of 3456 PCG recordings from 1007 patients using a stratified ten-fold cross-validation. Additionally, the method was tested on a hidden test set comprising 1538 PCG recordings from 442 patients. Results: The overall cross-validation performances for patient-level murmur grading are 86.3% and 81.6% in terms of the unweighted average of sensitivities and F1-scores, respectively. The sensitivities (and F1-scores) for absent, soft, and loud murmurs are 90.7% (93.6%), 75.8% (66.8%), and 92.3% (84.2%), respectively. On the test set, the algorithm achieves an unweighted average of sensitivities of 80.4% and of F1-scores of 75.8%. Conclusions: This study provides a potential approach for algorithmic pre-screening in low-resource settings with relatively high expert screening costs. Significance: The proposed method represents a significant step beyond detection of murmurs, providing characterization of intensity, which may provide an enhanced classification of clinical outcomes.
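
The patient-level grade is obtained by aggregating the per-recording labels through a decision rule. The exact rule is specific to the paper; the Python sketch below is only a hedged illustration of one plausible severity-dominant aggregation, with the function name `patient_grade` assumed for the example.

```python
def patient_grade(recording_grades: list) -> str:
    """Aggregate per-recording murmur grades into one patient-level grade.

    Hypothetical rule (not necessarily the paper's): a loud murmur at any
    auscultation location dominates, then soft, otherwise absent.
    """
    if "loud" in recording_grades:
        return "loud"
    if "soft" in recording_grades:
        return "soft"
    return "absent"

# Usage: four recordings from different auscultation locations.
print(patient_grade(["absent", "soft", "absent", "loud"]))  # -> "loud"
```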

2023

Markov-Based Neural Networks for Heart Sound Segmentation: Using Domain Knowledge in a Principled Way

Authors
Martins, ML; Coimbra, MT; Renna, F;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
This work considers the problem of segmenting heart sounds into their fundamental components. We unify statistical and data-driven solutions by introducing Markov-based Neural Networks (MNNs), a hybrid end-to-end framework that exploits Markov models as statistical inductive biases for an Artificial Neural Network (ANN) discriminator. We show that an MNN leveraging a simple one-dimensional convolutional ANN significantly outperforms two recent purely data-driven solutions for this task on two publicly available datasets: PhysioNet 2016 (Sensitivity: 0.947 +/- 0.02; Positive Predictive Value: 0.937 +/- 0.025) and the CirCor DigiScope 2022 (Sensitivity: 0.950 +/- 0.008; Positive Predictive Value: 0.943 +/- 0.012). We also propose a novel gradient-based unsupervised learning algorithm that effectively makes the MNN adaptive to unseen data sampled from unknown distributions. We perform a cross-dataset analysis and show that, using this method, an MNN pre-trained on the CirCor DigiScope 2022 benefits from an average improvement of 3.90% in Positive Predictive Value on unseen observations from the PhysioNet 2016 dataset.
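
A common way to combine a Markov prior with a neural segmenter is to decode per-frame network posteriors under a state-transition matrix, for instance with the Viterbi algorithm over the four heart-sound states (S1, systole, S2, diastole). The sketch below illustrates that generic idea only; it is an assumption-laden simplification and not the MNN architecture or training procedure from the paper.

```python
import numpy as np

STATES = ["S1", "systole", "S2", "diastole"]

def viterbi_decode(emission_probs: np.ndarray, transition: np.ndarray) -> np.ndarray:
    """Most likely state sequence given per-frame classifier posteriors.

    emission_probs: (n_frames, 4) softmax outputs of a frame-wise classifier.
    transition: (4, 4) row-stochastic matrix encoding the cyclic
    S1 -> systole -> S2 -> diastole prior (hand-crafted assumption here).
    """
    n_frames, n_states = emission_probs.shape
    log_e = np.log(emission_probs + 1e-12)
    log_t = np.log(transition + 1e-12)

    score = np.full((n_frames, n_states), -np.inf)
    back = np.zeros((n_frames, n_states), dtype=int)
    score[0] = log_e[0]
    for t in range(1, n_frames):
        cand = score[t - 1][:, None] + log_t      # (previous state, next state)
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_e[t]

    # Backtrack the best path from the final frame.
    path = np.zeros(n_frames, dtype=int)
    path[-1] = score[-1].argmax()
    for t in range(n_frames - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```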

2023

Detecting wildlife trafficking in images from online platforms: A test case using deep learning with pangolin images

Authors
Cardoso, AS; Bryukhova, S; Renna, F; Reino, L; Xu, C; Xiao, ZX; Correia, R; Di Minin, E; Ribeiro, J; Vaz, AS;

Publication
BIOLOGICAL CONSERVATION

Abstract
E-commerce has become a booming market for wildlife trafficking, as online platforms are increasingly accessible and easy to navigate for sellers, while still lacking adequate supervision. Artificial intelligence models, and specifically deep learning, have been emerging as promising tools for the automated analysis and monitoring of digital online content pertaining to wildlife trade. Here, we used and fine-tuned freely available artificial intelligence models (i.e., convolutional neural networks) to assess their potential to identify instances of wildlife trade. We specifically focused on pangolin species, which are among the most trafficked mammals globally and have received increasing trade attention since the COVID-19 pandemic. Our convolutional neural networks were trained using online images (available from iNaturalist, Flickr and Google) displaying both traded and non-traded pangolin settings. The trained models showed strong performance, identifying over 90% of potential instances of pangolin trade in the considered imagery dataset. These instances included the showcasing of pangolins in popular marketplaces (e.g., wet markets and cages) and the online display of commonly traded pangolin parts and derivatives (e.g., scales). Nevertheless, not all instances of pangolin trade could be identified by our models (e.g., in images with dark colours and shaded areas), leaving space for further research developments. The methodological developments and results from this exploratory study represent an advancement in the monitoring of online wildlife trade. Complementing our approach with other forms of online data, such as text, would be a way forward to deliver more robust monitoring tools for online trafficking.
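
As a rough illustration of fine-tuning a freely available convolutional network for a binary traded/non-traded image classification task, the Python sketch below adapts an ImageNet-pretrained ResNet-50 from torchvision. The backbone choice, frozen layers, and hyperparameters are assumptions for illustration and do not reflect the specific models or training setup used in the study.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace the classifier head
# with a single logit for the binary traded / non-traded decision.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 1)   # new trainable head

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of (N, 3, 224, 224) images."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```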

Supervised theses

2023

Multimodal deep learning for heart sound and electrocardiogram classification

Author
Hélder Miguel Carvalho Vieira

Institution
UP-FCUP

2023

Novel deep learning methods for characterization of precancerous tissue in endoscopic narrow band images

Author
Maria Pedroso da Silva

Institution
UP-FCUP

2023

Listening for Wolf Conservation: Deep Learning for Automated Howl Recognition and Classification

Author
Rafael de Faria Campos

Institution
UP-FCUP

2023

Deep Learning Algorithms for Anatomical Landmark Detection

Author
Miguel Lopes Martins

Institution
UP-FCUP

2022

Automatic contrast generation from contrastless CTs

Author
Rúben André Dias Domingues

Institution
UP-FCUP