2009
Authors
Lima, S; Cunha, JP; Coimbra, M; Soares, JM;
Publication
HEALTHINF 2009: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON HEALTH INFORMATICS
Abstract
Every R&D project includes at least one phase of model verification and accuracy assessment, and when working with visual information (such as pictures and video) this phase deserves particular emphasis. When dealing with medical information and clinical trials, the ground truth used to validate automatic results must be accurate. This work addresses the need for a large, well-annotated dataset of images retrieved from endoscopic capsules. Such a dataset can be used to train computer vision algorithms focused on endoscopic capsule video processing and event detection.
2009
Authors
Hedayioglu, FD; Coimbra, MT; Mattos, SD;
Publication
HEALTHINF 2009: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON HEALTH INFORMATICS
Abstract
Digital stethoscopes have been drawing the attention of the biomedical engineering community for some time now, as seen from patent applications and scientific publications. In the future, we expect 'intelligent stethoscopes' to assist the clinician in cardiac exam analysis and diagnosis, enabling functionalities such as the teaching of auscultation, telemedicine, and personalized healthcare. In this paper we review the most recent heart sound processing publications, discussing their adequacy for implementation in digital stethoscopes. Our results show a body of interesting and promising work, although we identify three important limitations of this research field: the lack of a set of universally accepted heart-sound features, poorly described experimental methodologies, and the absence of a clinical validation step. Correcting these flaws is vital for creating convincing next-generation 'intelligent' digital stethoscopes that the medical community can use and trust.
2020
Authors
Oliveira, J; Carvalho, M; Nogueira, DM; Coimbra, MT;
Publication
CoRR
Abstract
2023
Authors
Ferraz, S; Coimbra, M; Pedrosa, J;
Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG
Abstract
Two-dimensional echocardiography is the most widely used non-invasive imaging modality due to its fast acquisition time, low cost, and high temporal resolution. Accurate segmentation of the left ventricle in echocardiography is vital for ensuring the accuracy of subsequent diagnosis. Numerous efforts have been made to automate this task, and several public datasets have been released in recent years to support further research. However, medical datasets acquired at different institutions carry inherent bias caused by various confounding factors, such as operation policies, machine protocols, and treatment preferences. As a result, models trained on one dataset, regardless of its volume, cannot be confidently applied to others. In this study, we investigated model robustness to dataset bias using two publicly available echocardiographic datasets. This work validates the efficacy of a supervised deep learning model for left ventricle segmentation and ejection fraction prediction outside the dataset on which it was trained. When exposed to unseen but related samples without additional training, the model maintained good performance. However, a performance decrease relative to the original results was observed, and the impact of image quality is also noteworthy, with lower-quality data leading to reduced performance.
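The cross-dataset evaluation described in this abstract can be reproduced in outline with a short script. The sketch below is illustrative only: it assumes a trained left-ventricle segmentation model exposing a predict(image) method that returns a binary mask, plus ground-truth masks from the external dataset; all names are hypothetical and not taken from the paper.

import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice overlap between a predicted and a ground-truth binary mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

def evaluate_on_external_dataset(model, images, gt_masks) -> float:
    """Apply a model trained on one dataset to samples from another dataset,
    with no additional training, and report the mean Dice score."""
    scores = [dice_coefficient(model.predict(img), gt)
              for img, gt in zip(images, gt_masks)]
    return float(np.mean(scores))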
2023
Authors
Lima, ACD; de Paiva, LF; Braz, G; de Almeida, JDS; Silva, AC; Coimbra, MT; de Paiva, AC;
Publication
IEEE ACCESS
Abstract
The gastrointestinal tract is responsible for the entire digestive process. Several diseases, including colorectal cancer, can affect this pathway. Among the deadliest cancers, colorectal cancer is the second most common. It arises from benign tumors in the colon, rectum, and anus. These benign tumors, known as colorectal polyps, can be diagnosed and removed during colonoscopy. Early detection is essential to reduce the risk of cancer. However, approximately 28% of polyps are missed during this examination, mainly because of limitations in diagnostic techniques and image analysis methods. In recent years, computer-aided detection techniques for these lesions have been developed to improve detection quality during periodic examinations. This study presents an automatic two-stage polyp detection method for colonoscopy images using transformers. In the first stage, a saliency map extraction model, supported by extracted depth maps, identifies possible polyp areas. In the second stage, polyps are detected in the images resulting from the first stage, combined with the green and blue channels. Several experiments were performed using four public colonoscopy datasets. The results obtained for the polyp detection task were satisfactory, reaching 91% Average Precision on the CVC-ClinicDB dataset, 92% Average Precision on the Kvasir-SEG dataset, and 84% Average Precision on the CVC-ColonDB dataset. This study demonstrates that polyp detection in colonoscopy images can be performed efficiently using a combination of depth maps, salient-object maps, and transformers.
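As a rough illustration of the second-stage input composition described in this abstract, the following sketch stacks a first-stage saliency map with the green and blue channels of a colonoscopy frame. The exact channel layout, scaling, and detector interface are assumptions made for illustration, not the authors' implementation.

import numpy as np

def build_second_stage_input(bgr_frame: np.ndarray, saliency_map: np.ndarray) -> np.ndarray:
    """Stack the first-stage saliency map with the green and blue channels of the
    frame to form a 3-channel input for the second-stage transformer detector
    (channel layout is an assumption)."""
    blue = bgr_frame[..., 0]
    green = bgr_frame[..., 1]
    # Rescale the saliency map to 8-bit if it is given in [0, 1].
    if saliency_map.max() <= 1.0:
        saliency = (saliency_map * 255).astype(np.uint8)
    else:
        saliency = saliency_map.astype(np.uint8)
    return np.dstack([saliency, green, blue])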
2023
Authors
Nobrega, S; Neto, A; Coimbra, M; Cunha, A;
Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG
Abstract
Gastric Cancer (GC) and Colorectal Cancer (CRC) are among the most common cancers in the world. The most common diagnostic methods are upper endoscopy and biopsy. Possible expert distractions can lead to late diagnosis. GC is a less studied malignancy than CRC, and the resulting scarcity of public data hinders the use of AI detection methods, unlike CRC, for which public data are available. Considering that CRC endoscopic images present some similarities with GC images, a CRC Transfer Learning approach could be used to improve AI GC detectors. This paper evaluates a novel Transfer Learning approach for real-time GC detection, using a YOLOv4 model pre-trained on CRC detection. The results achieved are promising, since GC detection improved relative to the traditional Transfer Learning strategy.
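The transfer learning strategy evaluated in this paper can be summarized as: initialize a detector with weights learned on CRC images, then fine-tune it on GC annotations. The sketch below expresses that idea with a generic PyTorch training loop; the model interface, loss handling, and hyperparameters are assumptions for illustration, not the authors' YOLOv4 setup.

import torch

def finetune_crc_to_gc(model: torch.nn.Module, crc_weights_path: str,
                       gc_loader, epochs: int = 10, lr: float = 1e-4) -> torch.nn.Module:
    """Load detector weights pre-trained on CRC images and fine-tune on GC data."""
    state = torch.load(crc_weights_path, map_location="cpu")
    model.load_state_dict(state, strict=False)  # allow task-specific head layers to differ
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, targets in gc_loader:
            optimizer.zero_grad()
            loss = model(images, targets)  # assumed: model returns its detection loss in train mode
            loss.backward()
            optimizer.step()
    return model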