
Publications by João Manuel Pedrosa

2023

Deep Feature-Based Automated Chest Radiography Compliance Assessment

Authors
Costa, M; Pereira, SC; Pedrosa, J; Mendonça, AM; Campilho, A;

Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG

Abstract
Chest radiography is one of the most common imaging exams, but its interpretation is often challenging and time-consuming, which has motivated the development of automated tools for pathology/abnormality detection. Deep learning models trained on large-scale chest X-ray datasets have shown promising results but are highly dependent on the quality of the data. However, these datasets often contain incorrect metadata and non-compliant or corrupted images. These inconsistencies are ultimately incorporated into the training process, impairing the validity of the results. In this study, a novel approach to detect non-compliant images based on deep features extracted from a patient position classification model and a pre-trained VGG16 model is proposed. This method is applied to CheXpert, a widely used public dataset. From a pool of 100 images, it is shown that the deep feature-based methods built on a patient position classification model retrieve a larger number of non-compliant images (up to 81%) than the same methods based on a pre-trained VGG16 (up to 73%) and the state-of-the-art uncertainty-based method (50%).
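
As an illustration of the screening idea described above (not the paper's actual code), the following sketch ranks images by the distance of their pre-trained VGG16 deep features to the centroid of a set of known-compliant reference images; the PyTorch/torchvision setup, the pooling choice and the distance measure are assumptions made for the example.

```python
# Hypothetical sketch: deep-feature outlier ranking with a pre-trained VGG16.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained VGG16 used purely as a fixed feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).to(device).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(path):
    """Pooled convolutional features of one image (grayscale replicated to RGB)."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    f = vgg.features(x)
    return torch.flatten(vgg.avgpool(f), 1)

def rank_by_outlierness(candidate_paths, compliant_reference_paths):
    """Sort candidates by distance to the centroid of known-compliant images."""
    ref = torch.cat([deep_features(p) for p in compliant_reference_paths])
    centroid = ref.mean(dim=0, keepdim=True)
    scores = {p: torch.norm(deep_features(p) - centroid).item()
              for p in candidate_paths}
    # Larger distance -> more likely non-compliant; review those images first.
    return sorted(scores, key=scores.get, reverse=True)
```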

2023

Semi-supervised Multi-structure Segmentation in Chest X-Ray Imaging

Authors
Brioso, RC; Pedrosa, J; Mendonça, AM; Campilho, A;

Publication
2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS

Abstract
X-ray imaging analysis is of paramount importance for healthcare institutions, since it is the main imaging modality for patient diagnosis, and deep learning can be used to aid clinicians in image diagnosis or structure segmentation. In recent years, several articles have demonstrated the capability of deep learning models to classify and segment chest X-ray images when trained on an annotated dataset. Unfortunately, for segmentation tasks, only a few relatively small datasets have annotations, which poses a problem for the training of robust deep learning strategies. In this work, a semi-supervised approach is developed which consists of using available information regarding other anatomical structures to guide the segmentation when the ground-truth segmentation for a given structure is not available. This semi-supervised approach is compared with a fully supervised approach for the tasks of lung segmentation and multi-structure segmentation (lungs, heart and clavicles) in chest X-ray images. The semi-supervised lung predictions are evaluated visually and show relevant improvements; this approach could therefore be used to improve performance on external datasets with missing ground truth. The multi-structure predictions show an improvement in mean absolute and Hausdorff distances when compared to a fully supervised approach, and visual analysis of the segmentations shows that false-positive predictions are removed. In conclusion, the developed method results in a new strategy that can help solve the problem of missing annotations and increase the quality of predictions in new datasets.
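
The abstract does not give the exact formulation, but the general idea of guiding an unannotated structure with the other structures' available masks can be sketched as a loss of the following kind (a hypothetical PyTorch illustration, not the paper's method; the overlap penalty and its weight are assumptions).

```python
# Hypothetical sketch: supervise annotated structures directly and, where a
# structure's mask is missing, penalise overlap with the structures that are annotated.
import torch
import torch.nn.functional as F

def semi_supervised_loss(logits, masks, available, overlap_weight=0.1):
    """
    logits:    (B, C, H, W) raw per-structure predictions
    masks:     (B, C, H, W) float ground-truth masks (zeros where unavailable)
    available: (B, C) boolean tensor, True where the structure is annotated
    """
    probs = torch.sigmoid(logits)
    loss = logits.new_zeros(())
    n_structures = logits.shape[1]
    for c in range(n_structures):
        ann = available[:, c]
        if ann.any():
            # Standard supervised term for samples where this structure is annotated.
            loss = loss + F.binary_cross_entropy_with_logits(logits[ann, c], masks[ann, c])
        if (~ann).any():
            # No mask for this structure: discourage predictions that overlap the
            # other structures whose masks are available for those samples.
            others = [masks[~ann, o] for o in range(n_structures)
                      if o != c and bool(available[~ann, o].all())]
            if others:
                union = torch.clamp(torch.stack(others).sum(dim=0), 0, 1)
                loss = loss + overlap_weight * (probs[~ann, c] * union).mean()
    return loss
```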

2023

Automatic Eye-Tracking-Assisted Chest Radiography Pathology Screening

Authors
Santos, R; Pedrosa, J; Mendonça, AM; Campilho, A;

Publication
Pattern Recognition and Image Analysis - 11th Iberian Conference, IbPRIA 2023, Alicante, Spain, June 27-30, 2023, Proceedings

Abstract

2020

Extracting neuronal activity signals from microscopy recordings of contractile tissue: a cell tracking approach using B-spline Explicit Active Surfaces (BEAS)

Authors
Kazwiny, Y; Pedrosa, JM; Zhang, Z; Boesmans, W; D'Hooge, J; Vanden Berghe, P;

Publication

Abstract
Ca2+ imaging is a widely used microscopy technique to simultaneously study cellular activity in multiple cells. The desired information consists of cell-specific time series of pixel intensity values, in which the fluorescence intensity represents cellular activity. For static scenes, cellular signal extraction is straightforward; however, multiple analysis challenges are present in recordings of contractile tissues, like those of the enteric nervous system (ENS). This layer of critical neurons, embedded within the muscle layers of the gut wall, shows optical overlap between neighboring neurons, intensity changes due to cell activity, and constant movement. These challenges reduce the applicability of classical segmentation techniques and traditional stack alignment and region-of-interest (ROI) selection workflows. Therefore, a signal extraction method that can deal with moving cells and is insensitive to large intensity changes in consecutive frames is needed. Here we propose a B-spline active contour method to delineate and track neuronal cell bodies based on local and global energy terms. We develop both a single- and a double-contour approach. The latter takes advantage of the appearance of GCaMP-expressing cells and tracks the nucleus' boundaries together with the cytoplasmic contour, providing a stable delineation of neighboring, overlapping cells despite movement and intensity changes. The tracked contours can also serve as landmarks to relocate additional, manually selected ROIs. This improves the total yield of efficacious cell tracking and allows signal extraction from other cell compartments like neuronal processes. Compared to manual delineation and other segmentation methods, the proposed method can track cells during large tissue deformations and high intensity changes such as those during neuronal firing events, while preserving the shape of the extracted Ca2+ signal. The analysis package represents a significant improvement to available Ca2+ imaging analysis workflows for ENS recordings and other systems where movement challenges traditional Ca2+ signal extraction workflows.
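
Once the contours have been tracked frame by frame, signal extraction itself reduces to averaging fluorescence inside each tracked contour; a minimal sketch of that final step is shown below, assuming NumPy/scikit-image and contour vertices in image coordinates (this is an illustration of the workflow, not the published BEAS implementation).

```python
# Hypothetical sketch: build a per-frame mask from the tracked contour and
# average the fluorescence inside it to obtain the cell's Ca2+ trace.
import numpy as np
from skimage.draw import polygon2mask

def extract_trace(frames, contours):
    """
    frames:   (T, H, W) array of fluorescence frames
    contours: list of (N_t, 2) arrays of (row, col) contour vertices, one per frame
    """
    trace = np.empty(len(frames))
    for t, (frame, contour) in enumerate(zip(frames, contours)):
        mask = polygon2mask(frame.shape, contour)   # fill the tracked contour
        trace[t] = frame[mask].mean()               # cell-averaged fluorescence
    return trace
```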

2017

Real-time anatomical imaging of the heart on an experimental ultrasound system

Authors
Pedrosa, J; Komini, V; Duchenne, J; D'Hooge, J;

Publication
IEEE International Ultrasonics Symposium, IUS

Abstract
Fast cardiac imaging requires a reduction of the number of transmit events. This is typically achieved through multi-line-transmission (MLT) and/or multi-line-acquisition (MLA) techniques. However, restricting the field-of-view (FOV) to the anatomically relevant domain, e.g. the myocardium, can increase frame rate (FR) further. Using computer simulations, we previously proposed an anatomical scan sequence by performing automatic myocardial segmentation on conventional B-mode images and feeding this information back to the scanner in order to define a fast myocardial scan sequence. The aim of this study was to implement and test this approach experimentally. © 2017 IEEE.

2017

Real-time anatomical imaging of the heart on an experimental ultrasound system

Authors
Pedrosa, J; Komini, V; Duchenne, J; D'Hooge, J;

Publication
IEEE International Ultrasonics Symposium, IUS

Abstract
Fast cardiac imaging requires a reduction of the number of transmit events. This is typically achieved through multi-line-transmission and/or multi-line-acquisition techniques, but restricting the field-of-view to the anatomically relevant domain, e.g. the myocardium, can increase frame rate further. In the present work, an anatomical scan sequence was implemented and tested experimentally by performing real-time segmentation of the myocardium on conventional B-mode images and feeding this information back to the scanner in order to define a fast myocardial scan sequence. Ultrasound imaging was performed using HD-PULSE, an experimental fully programmable 256-channel ultrasound system equipped with a 3.5 MHz phased array. A univentricular polyvinyl alcohol phantom was connected to a pump simulating the cardiac cycle for in vitro validation of this approach. Three volunteers were also imaged from an apical 4-chamber view to analyse the feasibility of this method in vivo. It is shown that this method can be applied in real time and in vivo, giving a minimum frame rate gain of 1.5. Although the anatomical image preferentially excludes the apical cap of the ventricle, this region is often unanalyzable due to near-field clutter anyway. The advantage of this method is that, in contrast to other fast imaging approaches, spatial resolution is maintained when compared to conventional ultrasound. © 2017 IEEE.
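
The frame rate gain reported above follows directly from transmitting only the lines that intersect the segmented myocardium; a back-of-the-envelope sketch of that selection and the resulting gain is given below (illustrative only, assuming a polar segmentation mask sampled on the conventional scan grid, not the HD-PULSE implementation).

```python
# Hypothetical sketch: keep only the steering angles whose scan line crosses
# the segmented myocardium, and estimate the corresponding frame rate gain.
import numpy as np

def anatomical_scan_lines(myocardium_mask_polar, line_angles):
    """
    myocardium_mask_polar: (n_angles, n_depths) boolean segmentation mask
                           sampled on the conventional scan grid
    line_angles:           (n_angles,) steering angles of the conventional sequence
    """
    keep = myocardium_mask_polar.any(axis=1)   # angles whose line hits the myocardium
    return line_angles[keep]

def frame_rate_gain(n_conventional_lines, n_anatomical_lines):
    # Frame rate scales inversely with the number of transmit events,
    # so transmitting fewer lines raises the frame rate proportionally.
    return n_conventional_lines / n_anatomical_lines
```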
