About

Joana Rocha started her Integrated Master's in Bioengineering at the University of Porto in 2014, focusing on computer vision and artificial intelligence for biomedical applications. She joined Swansea University to study human motion patterns, developing an automated measurement technique for physical activity assessment. In 2018, she joined INESC-TEC, where she has worked on a computer-aided diagnosis system for lung cancer and on biometrics for presentation attack detection, and is now working on explainable AI for automated thoracic pathology screening.

Details

  • Name

    Joana Maria Rocha
  • Role

    Research Assistant
  • Since

    18th June 2019
  • Nationality

    Portuguese
  • Contacts

    +351 222 094 000
    joana.m.rocha@inesctec.pt
Publications

2024

STERN: Attention-driven Spatial Transformer Network for abnormality detection in chest X-ray images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;

Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE

Abstract
Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often generates a harmful bias in the classifiers and an increase in false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that STERN achieves results similar to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset with a mean AUC of 85.67%, a 2.55% improvement over a standard baseline classifier, versus the 0.98% improvement achieved by the YOLO-based counterpart. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
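For readers interested in the mechanics, the following is a minimal PyTorch sketch of the attention-style spatial transformer described in the abstract: a small localization head predicts only translation and non-isotropic scaling parameters (no rotation or shear), the input is resampled accordingly, and the selected thoracic region is passed to a downstream classifier. The layer sizes, parameter ranges, and module names are illustrative assumptions rather than the published implementation, and the domain-specific loss is omitted.

```python
# Minimal sketch: spatial transformer used for attention (translation + anisotropic scaling only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionSTN(nn.Module):
    def __init__(self, classifier: nn.Module):
        super().__init__()
        # localization head: predicts [sx, sy, tx, ty] from the grayscale radiograph
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 4),
        )
        self.classifier = classifier

    def forward(self, x):
        sx, sy, tx, ty = self.localization(x).unbind(dim=1)
        sx, sy = torch.sigmoid(sx), torch.sigmoid(sy)    # scales in (0, 1): zoom in only
        tx, ty = torch.tanh(tx), torch.tanh(ty)          # translations in (-1, 1)
        zeros = torch.zeros_like(sx)
        theta = torch.stack([
            torch.stack([sx, zeros, tx], dim=1),
            torch.stack([zeros, sy, ty], dim=1),
        ], dim=1)                                        # (N, 2, 3) affine matrices, no rotation/shear
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)  # resampled thoracic region
        return self.classifier(x)
```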

2024

Leveraging Longitudinal Data for Cardiomegaly and Change Detection in Chest Radiography

Authors
Belo, R; Rocha, J; Pedrosa, J;

Publication
PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS, CIARP 2023, PT I

Abstract
Chest radiography has been widely analysed automatically through deep learning (DL) techniques. However, in the manual analysis of these scans, comparison with images from previous time points is common practice, in order to establish a longitudinal reference. The use of longitudinal information in automatic analysis is not common, but it might provide relevant information for the desired output. In this work, the use of longitudinal information for the detection of cardiomegaly and change in pairs of chest X-ray (CXR) images was studied. Multiple experiments were performed, in which longitudinal information was included at the feature level and at the input level. The impact of aligning the image pairs (through a developed method) was also studied. The use of aligned images was shown to improve the final metrics for both the detection of pathology and of change, in comparison to a standard multi-label classifier baseline. The model that uses concatenated image features outperformed the remaining ones, with an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.858 for change detection and an AUC of 0.897 for the detection of pathology, showing that pathology features can be used to predict the comparison between images more efficiently. To further improve the developed methods, data augmentation techniques were studied. These showed that increasing the representation of minority classes leads to higher noise in the dataset, and that neglecting the temporal order of the images can be an advantageous augmentation technique in longitudinal change studies.
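As an illustration of the feature-level fusion variant described above, the sketch below encodes the current and prior radiograph with a shared backbone, concatenates the two feature vectors, and attaches separate heads for pathology and change detection. The ResNet-18 backbone and head sizes are assumptions for illustration, not the architecture reported in the paper.

```python
# Minimal sketch: feature-level fusion of a longitudinal CXR pair with two output heads.
import torch
import torch.nn as nn
import torchvision.models as models

class LongitudinalCXRNet(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                        # keep the 512-d pooled features
        self.backbone = backbone                           # shared between both time points
        self.pathology_head = nn.Linear(2 * feat_dim, 1)   # e.g. cardiomegaly present?
        self.change_head = nn.Linear(2 * feat_dim, 1)      # changed since the prior exam?

    def forward(self, current, prior):
        # both inputs are 3-channel (replicated grayscale) radiographs
        f = torch.cat([self.backbone(current), self.backbone(prior)], dim=1)
        return self.pathology_head(f), self.change_head(f)
```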

2024

Distribution-based detection of radiographic changes in pneumonia patterns: A COVID-19 case study

Authors
Pereira, SC; Rocha, J; Campilho, A; Mendonça, AM;

Publication
HELIYON

Abstract
Although the classification of chest radiographs has long been an extensively researched topic, interest increased significantly with the onset of the COVID-19 pandemic. Existing results are promising; however, the radiological similarities between COVID-19 and other types of respiratory diseases limit the success of conventional image classification approaches that focus on single instances. This study proposes a novel perspective that conceptualizes COVID-19 pneumonia as a deviation from a normative distribution of typical pneumonia patterns. Our population-based approach relies on distributional anomaly detection, diverging from traditional instance-wise approaches by focusing on sets of scans instead of individual images. Using an autoencoder to extract feature representations, we present instance-based and distribution-based assessments of the separability between COVID-positive and COVID-negative pneumonia radiographs. The results demonstrate that the proposed distribution-based methodology outperforms conventional instance-based techniques in identifying radiographic changes associated with COVID-positive cases. This underscores its potential as an early warning system capable of detecting significant distributional shifts in radiographic data. By continuously monitoring these changes, this approach offers a mechanism for the early identification of emerging health trends, potentially signaling the onset of new pandemics and enabling prompt public health responses.
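The distribution-level comparison can be illustrated with a short sketch: latent codes are extracted for a reference set and for an incoming set of pneumonia radiographs, and a two-sample statistic over the two sets flags a distributional shift. Maximum Mean Discrepancy (MMD) with an RBF kernel is used here purely as an illustrative statistic; the encoder, the threshold, and the variable names are assumptions, not the exact method of the paper.

```python
# Minimal sketch: distribution-based comparison of two sets of autoencoder latent codes.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    # pairwise RBF kernel between rows of a (n x d) and b (m x d)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Squared Maximum Mean Discrepancy between two sets of latent feature vectors."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

# latents_ref: codes from typical (COVID-negative) pneumonia scans
# latents_new: codes from a newly acquired batch of pneumonia scans
# if mmd2(latents_ref, latents_new) > threshold: raise an early-warning flag
```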

2023

Lightweight multi-scale classification of chest radiographs via size-specific batch normalization

Authors
Pereira, SC; Rocha, J; Campilho, A; Sousa, P; Mendonca, AM;

Publication
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE

Abstract
Background and Objective: Convolutional neural networks are widely used to detect radiological findings in chest radiographs. Standard architectures are optimized for images of relatively small size (for example, 224 × 224 pixels), which suffices for most application domains. However, in medical imaging, larger inputs are often necessary to analyze disease patterns. A single scan can display multiple types of radiological findings varying greatly in size, and most models do not explicitly account for this. For a given network, whose layers have fixed-size receptive fields, smaller input images result in coarser features, which better characterize larger objects in an image. In contrast, larger inputs result in finer-grained features, beneficial for the analysis of smaller objects. By compromising on a single resolution, existing frameworks fail to acknowledge that the ideal input size will not necessarily be the same for classifying every pathology of a scan. The goal of our work is to address this shortcoming by proposing a lightweight framework for multi-scale classification of chest radiographs, where finer and coarser features are combined in a parameter-efficient fashion. Methods: We experiment on CheXpert, a large chest X-ray database. A lightweight multi-resolution (224 × 224, 448 × 448 and 896 × 896 pixels) network is developed based on a DenseNet-121 model where batch normalization layers are replaced with the proposed size-specific batch normalization. Each input size undergoes batch normalization with dedicated scale and shift parameters, while the remaining parameters are shared across sizes. Additional external validation of the proposed approach is performed on the VinDr-CXR data set. Results: The proposed approach (AUC 83.27 ± 0.17, 7.1M parameters) outperforms standard single-scale models (AUC 81.76 ± 0.18, 82.62 ± 0.11 and 82.39 ± 0.13 for input sizes 224 × 224, 448 × 448 and 896 × 896, respectively; 6.9M parameters). It also achieves a performance similar to an ensemble of one individual model per scale (AUC 83.27 ± 0.11, 20.9M parameters), while relying on significantly fewer parameters. The model leverages features of different granularities, resulting in a more accurate classification of all findings, regardless of their size, highlighting the advantages of this approach. Conclusions: Different chest X-ray findings are better classified at different scales. Our study shows that multi-scale features can be obtained with nearly no additional parameters, boosting performance.
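A minimal PyTorch sketch of the size-specific batch normalization idea is given below: convolutional weights are shared across resolutions, while each input size (224, 448, 896) keeps its own batch-norm statistics and affine parameters. Wiring this into a full DenseNet-121 is omitted, and the scale-selection mechanism shown here is an assumption for illustration.

```python
# Minimal sketch: one BatchNorm2d branch per input resolution, shared convolutions.
import torch.nn as nn

class SizeSpecificBatchNorm2d(nn.Module):
    def __init__(self, num_features: int, sizes=(224, 448, 896)):
        super().__init__()
        # dedicated statistics and scale/shift parameters per input size
        self.norms = nn.ModuleDict(
            {str(s): nn.BatchNorm2d(num_features) for s in sizes}
        )

    def forward(self, x, input_size: int):
        # pick the normalization branch matching the current input resolution
        return self.norms[str(input_size)](x)

class SharedConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn = SizeSpecificBatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, input_size: int):
        # convolution weights are shared; only the batch-norm branch depends on the size
        return self.act(self.bn(self.conv(x), input_size))
```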

2023

An Active Learning Approach for Support Device Detection in Chest Radiography Images

Authors
Belo, RM; Rocha, J; Mendonca, AM; Campilho, A;

Publication
FIFTEENTH INTERNATIONAL CONFERENCE ON MACHINE VISION, ICMV 2022

Abstract
Deep Learning (DL) algorithms allow fast results with high accuracy in medical imaging analysis solutions. However, to achieve the desired performance, they require large amounts of high-quality data. Active Learning (AL) is a subfield of DL that aims for more efficient models, ideally requiring less data, by selecting the most relevant information for training. CheXpert is a Chest X-Ray (CXR) dataset containing labels for different pathologic findings, alongside a Support Devices (SD) label. The latter contains several misannotations, which may impact the performance of a pathology detection model. The aim of this work is the detection of SDs in CheXpert CXR images and the comparison of the resulting predictions with the original CheXpert SD annotations, using AL approaches. A subset of 10,220 images was selected, manually annotated for SDs and used in the experiments. In the first experiment, an initial model was trained on the seed dataset (6,200 images from this subset). The second and third approaches consisted of AL random sampling and least-confidence techniques; in both, the seed dataset was used initially and more images were iteratively added. Finally, in the fourth experiment, a model was trained on the full annotated set. The AL least-confidence experiment outperformed the remaining approaches, presenting an AUC of 71.10% and showing that training a model with representative information is preferable to training with all labeled data. This model was used to obtain predictions, which can be useful for limiting the use of SD-mislabeled images in future models.
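The least-confidence acquisition step can be sketched as follows: the current model scores the unlabeled pool, and the images whose most confident prediction is lowest are queried for annotation. The model interface, data loader output, and query size are illustrative assumptions.

```python
# Minimal sketch: least-confidence sampling for a binary support-device detector.
import torch

@torch.no_grad()
def least_confidence_query(model, unlabeled_loader, n_query: int = 500):
    model.eval()
    confidences, image_ids = [], []
    for images, ids in unlabeled_loader:          # loader assumed to yield (images, ids)
        probs = torch.sigmoid(model(images))      # (N, 1) support-device probability
        conf = torch.max(probs, 1 - probs)        # confidence of the predicted class
        confidences.append(conf.squeeze(1))
        image_ids.extend(ids)
    confidences = torch.cat(confidences)
    order = torch.argsort(confidences)            # least confident samples first
    return [image_ids[i] for i in order[:n_query].tolist()]
```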

Supervised Theses

2024

Longitudinal Explainability in Chest Radiography Pathology Detection

Author
Raquel Morais Belo

Institution
UP-FEUP

2023

Leveraging Longitudinal Data in Chest Radiography Pathology Detection

Author
Raquel Morais Belo

Institution
UP-FEUP