About

Aurélio Campilho is Emeritus Professor of the University of Porto and Jubilee Full Professor in the Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto, Portugal. He is a Fellow of EAMBES (European Alliance for Medical and Biological Engineering and Science) and a Senior Member of the IEEE – the Institute of Electrical and Electronics Engineers. He coordinates the Center for Biomedical Engineering Research (C-BER) and conducts research at the Biomedical Imaging Lab of C-BER, at INESC TEC – Institute for Systems and Computer Engineering, Technology and Science. His current research interests include biomedical engineering, medical image analysis, image processing and computer vision, particularly computer-aided diagnosis applied to several imaging modalities, including ophthalmic images, carotid ultrasound imaging and computed tomography of the lung.

He is the author of one book (with two editions), has co-edited 20 books and has published more than 250 articles in international journals and conferences. He has also organized several journal special issues and conferences. He was an Associate Editor of the journals IEEE Transactions on Biomedical Engineering and Machine Vision and Applications. From 2004 to 2020, he was General Chair of the International Conference on Image Analysis and Recognition (ICIAR) series.


Details

  • Name

    Aurélio Campilho
  • Role

    Affiliated Researcher
  • Since

    1st January 2014
  • Nationality

    Portuguese
  • Contacts

    +351222094106
    aurelio.campilho@inesctec.pt
Publications

2024

STERN: Attention-driven Spatial Transformer Network for abnormality detection in chest X-ray images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;

Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE

Abstract
Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often generates a harmful bias in the classifiers and an increase of false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations, and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that STERN achieves results similar to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset with a mean AUC of 85.67%, a 2.55% improvement over a standard baseline classifier, versus the 0.98% improvement achieved by the YOLO-based counterpart. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
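
The sketch below is a minimal PyTorch illustration of the attention-style spatial transformer idea summarised in the abstract, not the authors' released code: a small localisation network predicts only translation and non-isotropic scaling parameters, which are placed in an affine matrix used to resample the input before a classifier. The layer sizes, the toy classifier and the identity initialisation are illustrative assumptions.

```python
# Minimal sketch of a spatial-transformer attention front-end (assumptions only,
# not the STERN implementation): the localisation network outputs (sx, sy, tx, ty),
# i.e. scaling and translation but no rotation or shear.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionTransformer(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Localisation network: predicts (sx, sy, tx, ty) from the input image.
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 4),
        )
        # Start from the identity transform (sx = sy = 1, tx = ty = 0).
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1.0, 1.0, 0.0, 0.0]))
        # Placeholder classifier; the paper uses a stronger CNN backbone.
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, num_classes),
        )

    def forward(self, x):
        sx, sy, tx, ty = self.loc(x).unbind(dim=1)
        zeros = torch.zeros_like(sx)
        # Affine matrix restricted to non-isotropic scaling and translation.
        theta = torch.stack([
            torch.stack([sx, zeros, tx], dim=1),
            torch.stack([zeros, sy, ty], dim=1),
        ], dim=1)                                      # shape (B, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x_roi = F.grid_sample(x, grid, align_corners=False)   # resampled thorax ROI
        return self.classifier(x_roi)

logits = SpatialAttentionTransformer()(torch.randn(2, 1, 224, 224))
```

Because the transform parameters are produced by a differentiable network, the whole pipeline can be trained end to end from image-level labels only, which is what removes the need for localization annotations.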

2024

Automated image label extraction from radiology reports - A review

Authors
Pereira, SC; Mendonça, AM; Campilho, A; Sousa, P; Lopes, CT;

Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE

Abstract
Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires smaller annotation efforts and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, neural NLP, and those describing systems combining or comparing two or more of the latter. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.
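
As a toy illustration of the simplest family covered by the review, symbolic (rule-based) NLP, the sketch below uses regular expressions and a crude negation check to turn report sentences into image labels. The finding list, patterns and label convention (1 affirmed, 0 negated, -1 not mentioned) are illustrative assumptions, not taken from any surveyed system.

```python
# Rule-based label extraction from free-text radiology reports (illustrative only).
import re

FINDINGS = {
    "pneumothorax": r"pneumothorax",
    "pleural_effusion": r"pleural effusion",
    "cardiomegaly": r"cardiomegaly|enlarged (cardiac silhouette|heart)",
}
# A negation cue followed by anything up to the end of the sentence.
NEGATIONS = r"\b(no|without|negative for|absence of)\b[^.]*"

def extract_labels(report: str) -> dict:
    """Return {finding: 1 (affirmed), 0 (negated), -1 (not mentioned)}."""
    text = report.lower()
    labels = {}
    for name, pattern in FINDINGS.items():
        if re.search(f"{NEGATIONS}(?:{pattern})", text):
            labels[name] = 0          # finding explicitly negated
        elif re.search(pattern, text):
            labels[name] = 1          # finding affirmed
        else:
            labels[name] = -1         # finding not mentioned
    return labels

print(extract_labels("No pneumothorax. Small left pleural effusion is seen."))
# {'pneumothorax': 0, 'pleural_effusion': 1, 'cardiomegaly': -1}
```

Statistical and neural systems replace the hand-written patterns with learned models, but the output format (one weak label per finding, per report) is the same.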

2024

Towards automatic forecasting of lung nodule diameter with tabular data and CT imaging

Authors
Ferreira, CA; Venkadesh, KV; Jacobs, C; Coimbra, M; Campilho, A;

Publication
BIOMEDICAL SIGNAL PROCESSING AND CONTROL

Abstract
Objective: This study aims to forecast the progression of lung cancer by estimating the future diameter of lung nodules. Methods: This approach uses as input the tabular data, axial images from tomography scans, and both data types, employing a ResNet50 model for image feature extraction and direct analysis of patient information for tabular data. The data are processed through a neural network before prediction. In the training phase, class weights are assigned based on the rarity of different types of nodules within the dataset, in alignment with nodule management guidelines. Results: Tabular data alone yielded the most accurate results, with a mean absolute deviation of 0.99 mm. For malignant nodules, the best performance, marked by a deviation of 2.82 mm, was achieved using tabular data applying Lung-RADS class weights during training. The tabular data results highlight the influence of using the initial nodule size as an input feature. These results surpass the literature reference of 348-day volume doubling time for malignant nodules. Conclusion: The developed predictive model is optimized for integration into a clinical workflow after detecting, segmenting, and classifying nodules. It provides accurate growth forecasts, establishing a more objective basis for determining follow-up intervals. Significance: With lung cancer's low survival rates, the capacity for precise nodule growth prediction represents a significant breakthrough. This methodology promises to revolutionize patient care and management, enhancing the chances for early risk assessment and effective intervention. © 2024 The Author(s)
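
A hedged sketch of the fusion idea summarised in the abstract, not the paper's implementation: ResNet50 features extracted from an axial CT slice are concatenated with tabular patient and nodule variables and passed through a small MLP that regresses the future diameter in millimetres. The tabular dimensionality, head sizes and the L1 training loss are assumptions, and the Lung-RADS-based class weighting mentioned above is omitted for brevity.

```python
# Image + tabular fusion regressor for nodule diameter forecasting (sketch only).
import torch
import torch.nn as nn
from torchvision.models import resnet50

class NoduleGrowthRegressor(nn.Module):
    def __init__(self, n_tabular: int = 8):
        super().__init__()
        backbone = resnet50(weights=None)      # pretrained weights optional
        backbone.fc = nn.Identity()            # expose 2048-d image features
        self.backbone = backbone
        self.head = nn.Sequential(             # small MLP over fused features
            nn.Linear(2048 + n_tabular, 256), nn.ReLU(),
            nn.Linear(256, 1),                 # predicted future diameter (mm)
        )

    def forward(self, image, tabular):
        feats = self.backbone(image)                   # (B, 2048) image features
        fused = torch.cat([feats, tabular], dim=1)     # concatenate with tabular data
        return self.head(fused).squeeze(1)

model = NoduleGrowthRegressor(n_tabular=8)
pred_mm = model(torch.randn(2, 3, 224, 224), torch.randn(2, 8))
loss = nn.L1Loss()(pred_mm, torch.tensor([6.2, 9.8]))  # MAE in mm, matching the reported metric
```

Including the current nodule diameter among the tabular inputs is what gives the tabular-only branch its strong baseline, as the abstract notes.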

2024

A Comparative Study of Feature-Based and End-to-End Approaches for Lung Nodule Classification in CT Volumes to Lung-RADS Follow-up Recommendation

Authors
Ferreira, A; Ramos, I; Coimbra, M; Campilho, A;

Publication
2024 IEEE 22nd Mediterranean Electrotechnical Conference, MELECON 2024

Abstract
Lung cancer represents a significant health concern necessitating diligent monitoring of individuals at risk. While the detection of pulmonary nodules warrants clinical attention, not all cases require immediate surgical intervention, often calling for a strategic approach to follow-up decisions. The LungRADS guideline serves as a cornerstone in clinical practice, furnishing structured recommendations based on various nodule characteristics, including size, calcification, and texture, outlined within established reference tables. However, the reliance on labor-intensive manual measurements underscores the potential advantages of integrating decision support systems into this process. Herein, we propose a feature-based methodology aimed at enhancing clinical decision-making by automating the assessment of nodules in computed tomography scans. Leveraging algorithms tailored for nodule calcification, texture analysis, and segmentation, our approach facilitates the automated classification of follow-up recommendations aligned with Lung-RADS criteria. Comparison with a previously reported end-to-end image-based classification method revealed competitive performance, with the feature-based approach achieving an accuracy of 0.701 ± 0.026, while the end-to-end method attained 0.727 ± 0.020. The inherent explainability of the feature-based approach offers distinct advantages, allowing clinicians to scrutinize and modify individual features to address disagreements or rectify inaccuracies, thereby tailoring follow-up recommendations to patient profiles. © 2024 IEEE.
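
For intuition only, the sketch below shows the kind of rule table a feature-based pipeline can apply once calcification and size have been estimated automatically from the CT volume. The thresholds follow commonly cited baseline cut-offs for solid nodules but are simplified, and the feature set, data class and category comments are illustrative assumptions rather than the paper's implementation or the full Lung-RADS guideline, which has many more branches (texture, growth, part-solid and ground-glass nodules).

```python
# Simplified Lung-RADS-style rule mapping from automatically estimated nodule
# features to a follow-up category (illustrative thresholds only).
from dataclasses import dataclass

@dataclass
class NoduleFeatures:
    diameter_mm: float          # mean axial diameter of a solid nodule
    benign_calcification: bool  # e.g. complete/central/popcorn calcification pattern

def follow_up_category(n: NoduleFeatures) -> str:
    if n.benign_calcification:
        return "1"     # benign appearance: continue annual screening
    if n.diameter_mm < 6:
        return "2"     # benign behaviour: continue annual screening
    if n.diameter_mm < 8:
        return "3"     # probably benign: short-interval follow-up CT
    if n.diameter_mm < 15:
        return "4A"    # suspicious: closer follow-up or PET/CT
    return "4B"        # very suspicious: further diagnostic work-up

print(follow_up_category(NoduleFeatures(diameter_mm=9.3, benign_calcification=False)))  # 4A
```

Because each input feature is explicit, a clinician can inspect or override a mis-estimated value (for example, a wrong calcification flag) and immediately see how the recommendation changes, which is the explainability advantage the abstract highlights.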

2024

Distribution-based detection of radiographic changes in pneumonia patterns: A COVID-19 case study

Authors
Pereira, SC; Rocha, J; Campilho, A; Mendonça, AM;

Publication
HELIYON

Abstract
Although the classification of chest radiographs has long been an extensively researched topic, interest increased significantly with the onset of the COVID-19 pandemic. Existing results are promising; however, the radiological similarities between COVID-19 and other types of respiratory diseases limit the success of conventional image classification approaches that focus on single instances. This study proposes a novel perspective that conceptualizes COVID-19 pneumonia as a deviation from a normative distribution of typical pneumonia patterns. Using a population-based approach, our method utilizes distributional anomaly detection. This method diverges from traditional instance-wise approaches by focusing on sets of scans instead of individual images. Using an autoencoder to extract feature representations, we present instance-based and distribution-based assessments of the separability between COVID-positive and COVID-negative pneumonia radiographs. The results demonstrate that the proposed distribution-based methodology outperforms conventional instance-based techniques in identifying radiographic changes associated with COVID-positive cases. This underscores its potential as an early warning system capable of detecting significant distributional shifts in radiographic data. By continuously monitoring these changes, this approach offers a mechanism for early identification of emerging health trends, potentially signaling the onset of new pandemics and enabling prompt public health responses.
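
A minimal sketch of the distribution-based idea, under the assumption that an autoencoder encoder has already mapped each radiograph to a feature vector (the encoder itself is omitted): sets of scans are compared with a squared maximum mean discrepancy (MMD) statistic rather than classified one by one. The RBF kernel, its bandwidth and the synthetic features are illustrative, not the paper's configuration.

```python
# Distribution-level comparison of radiograph feature sets via squared MMD (sketch).
import numpy as np

def rbf_mmd(X: np.ndarray, Y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased estimate of the squared MMD between two feature sets, RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# reference_feats: encoder outputs for a reference batch of pneumonia scans
# incoming_feats : encoder outputs for the most recent batch of scans
rng = np.random.default_rng(0)
reference_feats = rng.normal(0.0, 1.0, size=(200, 64))
incoming_feats = rng.normal(0.3, 1.0, size=(200, 64))   # simulated distribution shift

score = rbf_mmd(reference_feats, incoming_feats)
print(f"MMD^2 = {score:.4f}")   # large values flag a distributional change
```

Monitoring this statistic over successive batches of scans is what turns the classifier-free comparison into the early-warning mechanism described above.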

Supervised Theses

2022

Collaborative Tools for Lung Cancer Diagnosis in Computed Tomography

Author
Carlos Alexandre Nunes Ferreira

Institution
UP-FEUP

2022

Explainable Artificial Medical Intelligence for Automated Thoracic Pathology Screening

Author
Joana Maria Neves da Rocha

Institution
UP-FEUP

2022

Content-Based Image Retrieval as a Computer-Aided Diagnosis Tool for Radiologists

Author
José Ricardo Ferreira de Castro Ramos

Institution
UP-FEUP

2022

Computer-aided diagnosis and follow-up of prevalent eye diseases using OCT/OCTA images

Author
Tânia Filipa Fernandes de Melo

Institution
UP-FEUP

2022

Artificial Intelligence-based Decision Support Models for COVID-19 Detection

Author
Sofia Perestrelo de Vasconcelos Cardoso Pereira

Institution
UP-FEUP