Publications

Publications by Miguel Coimbra

2021

Standalone performance of artificial intelligence for upper GI neoplasia: a meta-analysis

Authors
Arribas, J; Antonelli, G; Frazzoni, L; Fuccio, L; Ebigbo, A; van der Sommen, F; Ghatwary, N; Palm, C; Coimbra, M; Renna, F; Bergman, JJGHM; Sharma, P; Messmann, H; Hassan, C; Dinis Ribeiro, MJ;

Publication
GUT

Abstract
Objective Artificial intelligence (AI) may reduce underdiagnosed or overlooked upper GI (UGI) neoplastic and preneoplastic conditions, due to subtle appearance and low disease prevalence. Only disease-specific AI performances have been reported, generating uncertainty about its clinical value. Design We searched PubMed, Embase and Scopus until July 2020, for studies on the diagnostic performance of AI in detection and characterisation of UGI lesions. Primary outcomes were pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), designed summary receiver operating characteristic curves with respective areas under the curve (AUCs) and performed metaregression and sensitivity analysis. Results Overall, 19 studies on detection of oesophageal squamous cell neoplasia (ESCN) or Barrett's esophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included with 218, 445, 453 patients and 7976, 2340, 13 562 images, respectively. AI-sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, AUC being 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98), 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with high risk of selection bias. No significant publication bias was found. Conclusion We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice.
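
As a quick check on the pooled figures above, the positive and negative likelihood ratios follow directly from sensitivity and specificity (LR+ = sensitivity / (1 - specificity), LR- = (1 - sensitivity) / specificity). A minimal Python sketch using the point estimates quoted in the abstract; the small difference from the reported 0.111 is due to rounding of the pooled estimates.

```python
# Reproduce the reported likelihood ratios from the pooled point estimates
# quoted in the abstract (confidence intervals omitted).
sensitivity = 0.90   # pooled AI sensitivity for UGI neoplasia detection
specificity = 0.89   # pooled AI specificity

lr_positive = sensitivity / (1.0 - specificity)   # positive likelihood ratio
lr_negative = (1.0 - sensitivity) / specificity   # negative likelihood ratio

print(f"LR+ = {lr_positive:.1f}")   # 8.2, matching the reported value
print(f"LR- = {lr_negative:.3f}")   # 0.112, vs. the reported 0.111 (rounding)
```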

2021

Joint Training of Hidden Markov Model and Neural Network for Heart Sound Segmentation

Authors
Renna, F; Martins, ML; Coimbra, M;

Publication
2021 COMPUTING IN CARDIOLOGY (CINC)

Abstract
In this work, we propose a novel algorithm for heart sound segmentation. The proposed approach is based on the combination of two families of state-of-the-art solutions for this problem, hidden Markov models and deep neural networks, within a single training framework. The proposed approach is tested with heart sounds from the PhysioNet dataset and is shown to achieve an average sensitivity of 93.9% and an average positive predictive value of 94.2% in detecting the boundaries of fundamental heart sounds.
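
For intuition, the sketch below shows the inference side of a generic hybrid NN/HMM heart sound segmenter: a neural network produces per-frame posteriors over the four states (S1, systole, S2, diastole), and a cyclic hidden Markov model enforces their ordering via Viterbi decoding. This is an illustrative reconstruction under assumptions (placeholder transition probabilities, random stand-in posteriors, posteriors used directly as emission scores); it does not show the paper's joint training procedure.

```python
import numpy as np

N_STATES = 4  # 0: S1, 1: systole, 2: S2, 3: diastole

# Cyclic left-to-right transitions: stay in a state or move to the next one.
stay = 0.9  # placeholder self-transition probability
A = np.zeros((N_STATES, N_STATES))
for s in range(N_STATES):
    A[s, s] = stay
    A[s, (s + 1) % N_STATES] = 1.0 - stay

def viterbi(frame_posteriors, A, prior=None):
    """Most likely state path given per-frame network posteriors (T x S)."""
    T, S = frame_posteriors.shape
    prior = np.full(S, 1.0 / S) if prior is None else prior
    log_emit = np.log(frame_posteriors + 1e-12)
    log_A = np.log(A + 1e-12)
    delta = np.log(prior + 1e-12) + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A   # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(delta.argmax())
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Random stand-in for the network's per-frame posteriors over the 4 states.
rng = np.random.default_rng(0)
posteriors = rng.dirichlet(np.ones(N_STATES), size=200)
print(viterbi(posteriors, A)[:20])
```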

2021

Source Separation of the Second Heart Sound via Alternating Optimization

Authors
Renna, F; Plumbley, MD; Coimbra, M;

Publication
2021 COMPUTING IN CARDIOLOGY (CINC)

Abstract
A novel algorithm to separate S2 heart sounds into their aortic and pulmonary components is proposed. This approach is based on the assumption that, in different heartbeats of a given recording, aortic and pulmonary components maintain the same waveform but with different relative delays, which are induced by the variation of the thoracic pressure at different respiration phases. The proposed algorithm then retrieves the aortic and pulmonary components as the solution of an optimization problem which is approximated via alternating optimization. The proposed approach is shown to provide reconstructions of aortic and pulmonary components with normalized root mean-squared error consistently below 10% in various operational regimes.
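
For illustration, the sketch below implements the kind of alternating scheme the abstract describes, under its stated assumption that each S2 beat is a fixed aortic waveform plus a pulmonary waveform with a beat-dependent delay: the delays and the two waveforms are re-estimated in turn. Details such as the circular shifts, the correlation-based delay search, the initialization, and the NRMSE normalization are assumptions of this sketch, not the authors' exact algorithm.

```python
import numpy as np

def separate_s2(beats, max_delay=20, n_iters=30):
    """beats: (n_beats, n_samples) array of roughly aligned S2 segments."""
    n_beats = beats.shape[0]
    a = beats.mean(axis=0)            # initial aortic (A2) estimate
    p = beats[0] - a                  # crude initial pulmonary (P2) estimate
    delays = np.zeros(n_beats, dtype=int)

    for _ in range(n_iters):
        # Step 1: re-estimate each beat's P2 delay by (circular) correlation.
        residual = beats - a
        for i in range(n_beats):
            scores = [np.dot(residual[i], np.roll(p, d))
                      for d in range(-max_delay, max_delay + 1)]
            delays[i] = int(np.argmax(scores)) - max_delay
        # Step 2: re-estimate both waveforms given the current delays.
        a = (beats - np.stack([np.roll(p, d) for d in delays])).mean(axis=0)
        p = np.stack([np.roll(beats[i] - a, -delays[i])
                      for i in range(n_beats)]).mean(axis=0)
    return a, p, delays

def nrmse(x, x_hat):
    """Normalized RMSE (normalized here by peak-to-peak range; an assumption)."""
    return np.sqrt(np.mean((x - x_hat) ** 2)) / (np.max(x) - np.min(x))
```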

2021

Crackle and wheeze detection in lung sound signals using convolutional neural networks

Authors
Faustino, P; Oliveira, J; Coimbra, M;

Publication
2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC)

Abstract
Respiratory diseases are among the leading causes of death worldwide. Preventive measures are essential to avoid these diseases and to increase the odds of a successful recovery. An important screening tool is pulmonary auscultation, an inexpensive, noninvasive and safe method to assess the mechanics and dynamics of the lungs. On the other hand, it is a difficult task for a human listener, since some lung sound events have a spectrum of frequencies outside the range of human hearing. Thus, computer-assisted decision systems might play an important role in the detection of abnormal sounds, such as crackle or wheeze sounds. In this paper, we propose a novel system which is not only able to detect abnormal lung sound events but is also able to classify them. Furthermore, our system was trained and tested using the publicly available ICBHI 2017 challenge dataset and the metrics proposed by the challenge, thus making our framework and results easily comparable. Using a Mel spectrogram as an input feature for our convolutional neural network, our system achieved results in line with the current state of the art: an accuracy of 43% and a sensitivity of 51%.
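
A minimal sketch of the pipeline the abstract describes: a log-Mel spectrogram computed per respiratory cycle and fed to a small convolutional neural network with the four ICBHI classes (normal, crackle, wheeze, both). Layer sizes, the number of Mel bands, and the sampling rate are illustrative assumptions, not the paper's exact architecture.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def mel_input(audio, sr, n_mels=64):
    """Log-Mel spectrogram of one respiratory cycle as a 1-channel image tensor."""
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return torch.from_numpy(log_mel).float().unsqueeze(0).unsqueeze(0)  # (1, 1, mels, frames)

class LungSoundCNN(nn.Module):
    """Small CNN over log-Mel inputs; 4 classes: normal, crackle, wheeze, both."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),   # fixed-size features regardless of cycle length
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

# Hypothetical usage: logits = LungSoundCNN()(mel_input(cycle_audio, sr=4000))
```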

2021

Do we really need a segmentation step in heart sound classification algorithms?

Authors
Oliveira, J; Nogueira, D; Renna, F; Ferreira, C; Jorge, AM; Coimbra, M;

Publication
2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC)

Abstract
Cardiac auscultation is the key screening procedure to detect and identify cardiovascular diseases (CVDs). One of the many steps towards automatically detecting CVDs from auscultation is the detection and delimitation of heart sound boundaries, a process known as segmentation. Whether or not to include a segmentation step in the signal classification pipeline is currently a topic of discussion. To the best of our knowledge, the outcome of a segmentation algorithm has been used almost exclusively to align the different signal segments according to the heartbeat. In this paper, the need for a heartbeat alignment step is tested and evaluated over different machine learning algorithms, including deep learning solutions. Of the different classifiers tested, Gated Recurrent Unit (GRU) networks and Convolutional Neural Network (CNN) algorithms are shown to be the most robust; namely, these algorithms can detect the presence of heart murmurs even without a heartbeat alignment step. In contrast, Support Vector Machine (SVM) and Random Forest (RF) algorithms require an explicit segmentation step to effectively detect heart sounds and murmurs; otherwise, the overall performance is expected to drop by approximately 5% in both cases.
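
For illustration, a hedged sketch of the two input-preparation strategies the paper compares: fixed-length windows cut blindly from the phonocardiogram (no segmentation) versus windows aligned to S1 onsets produced by a segmentation algorithm. Function names and window lengths are assumptions; per the abstract, GRU and CNN classifiers remain robust on the former, while SVM and RF depend on the latter.

```python
import numpy as np

def unaligned_windows(pcg, window=4000, hop=2000):
    """Fixed-length windows cut with no knowledge of the heartbeat (no segmentation)."""
    starts = range(0, len(pcg) - window + 1, hop)
    return np.stack([pcg[s:s + window] for s in starts])

def aligned_windows(pcg, s1_onsets, window=4000):
    """Windows starting at S1 onsets output by a segmentation algorithm (heartbeat-aligned)."""
    return np.stack([pcg[s:s + window] for s in s1_onsets if s + window <= len(pcg)])
```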

2022

The CirCor DigiScope Dataset: From Murmur Detection to Murmur Classification

Authors
Oliveira, J; Renna, F; Costa, PD; Nogueira, M; Oliveira, C; Ferreira, C; Jorge, A; Mattos, S; Hatem, T; Tavares, T; Elola, A; Rad, AB; Sameni, R; Clifford, GD; Coimbra, MT;

Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS

Abstract
Cardiac auscultation is one of the most cost-effective techniques used to detect and identify many heart conditions. Computer-assisted decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal, since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., only a binary ground truth variable (normal vs abnormal) is provided. This is mainly due to the lack of large publicly available datasets with a more detailed description of such abnormal waves (e.g., cardiac murmurs). To pave the way to more effective research on healthcare recommendation systems based on auscultation, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings were collected from the four main auscultation locations of 1568 patients; in the process, 215780 heart sounds were manually annotated. Furthermore, and for the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the auscultation location where the murmur is heard most intensely. Such a detailed description for a relatively large number of heart sounds may pave the way for new machine learning algorithms with real-world application to the detection and analysis of murmur waves for diagnostic purposes.
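
For illustration only, a hedged sketch of the per-murmur annotation the abstract describes (timing, shape, pitch, grading, quality, and auscultation locations) as a Python data structure; the class and field names and the example values are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MurmurAnnotation:
    timing: str                     # e.g. early-, mid- or late-systolic
    shape: str                      # e.g. crescendo, decrescendo, plateau
    pitch: str                      # e.g. low, medium, high
    grading: str                    # e.g. I/VI to VI/VI
    quality: str                    # e.g. blowing, harsh, musical
    locations: List[str] = field(default_factory=list)  # auscultation points where present
    most_audible_location: Optional[str] = None          # where it is heard most intensely

@dataclass
class PatientRecord:
    patient_id: str
    recordings: List[str]                       # one file per auscultated location
    murmur: Optional[MurmurAnnotation] = None   # None when no murmur was annotated
```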
