Publications

Publications by Jaime Cardoso

2023

A CAD system for automatic dysplasia grading on H&E cervical whole-slide images

Authors
Oliveira, SP; Montezuma, D; Moreira, A; Oliveira, D; Neto, PC; Monteiro, A; Monteiro, J; Ribeiro, L; Goncalves, S; Pinto, IM; Cardoso, JS;

Publication
SCIENTIFIC REPORTS

Abstract
Cervical cancer is the fourth most common female cancer worldwide and the fourth leading cause of cancer-related death in women. Nonetheless, it is also among the most successfully preventable and treatable types of cancer, provided it is identified early and properly managed. As such, the detection of pre-cancerous lesions is crucial. These lesions are detected in the squamous epithelium of the uterine cervix and are graded as low- or high-grade squamous intraepithelial lesions, known as LSIL and HSIL, respectively. Due to their complex nature, this classification can become very subjective. Therefore, the development of machine learning models, particularly those working directly on whole-slide images (WSI), can assist pathologists in this task. In this work, we propose a weakly-supervised methodology for grading cervical dysplasia, using different levels of training supervision, in an effort to gather a larger dataset without the need to have all samples fully annotated. The framework comprises an epithelium segmentation step followed by a dysplasia classifier (non-neoplastic, LSIL, HSIL), making the slide assessment completely automatic, without manual identification of epithelial areas. The proposed classification approach achieved a balanced accuracy of 71.07% and a sensitivity of 72.18% in slide-level testing on 600 independent samples, which are publicly available upon reasonable request.
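
The abstract describes a two-stage pipeline: epithelium segmentation to locate the relevant tissue, followed by a three-class dysplasia classifier whose tile-level predictions are aggregated into a slide-level grade. The sketch below is an illustrative assumption of such a setup in PyTorch, not the authors' implementation; the tile size, the epithelium-fraction threshold and the worst-grade aggregation rule are all hypothetical.

# Minimal sketch (not the authors' code) of a two-stage slide-assessment pipeline:
# epithelium masks select the tiles to grade, a small CNN grades each tile
# (non-neoplastic / LSIL / HSIL), and the worst tile grade is reported per slide.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    """Small CNN that grades a 256x256 RGB tile into 3 classes."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def assess_slide(tiles, epithelium_masks, classifier, mask_threshold=0.5):
    """Keep tiles whose epithelium fraction exceeds the threshold,
    grade them, and return the worst (highest) grade found on the slide."""
    keep = [t for t, m in zip(tiles, epithelium_masks) if m.mean() > mask_threshold]
    if not keep:
        return 0  # no epithelium found: report non-neoplastic by default
    with torch.no_grad():
        logits = classifier(torch.stack(keep))
    return int(logits.argmax(dim=1).max())  # 0=non-neoplastic, 1=LSIL, 2=HSIL

# Toy usage with random tiles and masks
tiles = [torch.rand(3, 256, 256) for _ in range(8)]
masks = [torch.rand(256, 256) for _ in range(8)]
print(assess_slide(tiles, masks, TileClassifier()))

In a real system the epithelium masks would come from the trained segmentation model rather than being supplied directly, and aggregation rules other than the worst grade could be used.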

2023

A simple machine learning-based framework for faster multi-scale simulations of path-independent materials at large strains

Authors
Carneiro, AMC; Alves, AFC; Coelho, RPC; Cardoso, JS; Pires, FMA;

Publication
FINITE ELEMENTS IN ANALYSIS AND DESIGN

Abstract
Coupled multi-scale finite element analyses have gained traction over recent years due to the increase in available computational resources. Nevertheless, in the pursuit of accurate results within a reasonable time frame, replacing these high-fidelity micromechanical simulations with reduced-order data-driven models has recently been explored by the modelling community. In this work, two classes of machine learning models are trained for a porous hyperelastic microstructure to predict (i) whether the microscopic equilibrium problem is likely to fail and (ii) the stress-strain response. The former may be used to identify critical macroscopic points where one may fall back to the high-fidelity analysis and possibly apply convergence bowl-widening techniques. For the latter, both a linear regression with polynomial features and artificial neural networks have been used, and the stress-strain derivatives required for solving the equilibrium problem have been derived analytically. A weight regularisation is introduced to stabilise the tangent operator, and several strategies are discussed for imposing null stresses in undeformed configurations for both regression models. The regression techniques, analysed here exclusively in the context of porous hyperelastic materials, show very promising prospects for accelerating multi-scale analyses of solids under large deformation.
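
As a concrete illustration of the first regression class mentioned above (a linear model on polynomial features of the strain, with an analytically derived tangent and zero stress in the undeformed configuration), here is a minimal one-dimensional sketch. It is an assumption for illustration, not the paper's implementation, and uses scalar strain/stress in place of the tensor quantities of the actual multi-scale problem.

# Minimal sketch (assumption, not the paper's code): polynomial-feature linear
# regression of stress on strain, plus the analytic tangent d(stress)/d(strain).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
strain = rng.uniform(-0.3, 0.3, size=(200, 1))
stress = 2.0 * strain + 5.0 * strain**3 + 0.01 * rng.normal(size=(200, 1))  # toy data

poly = PolynomialFeatures(degree=3, include_bias=False)
X = poly.fit_transform(strain)                  # columns: e, e^2, e^3
model = Ridge(alpha=1e-6, fit_intercept=False)  # no intercept -> zero stress at zero strain
model.fit(X, stress.ravel())

def tangent(e: float) -> float:
    """Analytic d(stress)/d(strain): differentiate each monomial term."""
    c1, c2, c3 = model.coef_
    return c1 + 2.0 * c2 * e + 3.0 * c3 * e**2

print(model.predict(poly.transform([[0.1]])), tangent(0.1))

Fitting without an intercept is one simple way of enforcing zero stress at zero strain for a polynomial model; the paper discusses several such strategies for both regression classes.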

2024

Classification of Pulmonary Nodules in 2-[¹⁸F]FDG PET/CT Images with a 3D Convolutional Neural Network

Authors
Alves, VM; Cardoso, JD; Gama, J;

Publication
NUCLEAR MEDICINE AND MOLECULAR IMAGING

Abstract
Purpose: 2-[¹⁸F]FDG PET/CT plays an important role in the management of pulmonary nodules. Convolutional neural networks (CNNs) automatically learn features from images and have the potential to improve the discrimination between malignant and benign pulmonary nodules. The purpose of this study was to develop and validate a CNN model for the classification of pulmonary nodules from 2-[¹⁸F]FDG PET images. Methods: One hundred thirteen participants were retrospectively selected, with one nodule per participant. The 2-[¹⁸F]FDG PET images were preprocessed and annotated with the reference standard. The deep learning experiment entailed random data splitting into five sets. A test set was held out for evaluation of the final model. Four-fold cross-validation was performed on the remaining sets for training and evaluating a set of candidate models and for selecting the final model. Models of three types of 3D CNN architectures were trained from random weight initialization (Stacked 3D CNN, VGG-like and Inception-v2-like models), both on the original and on augmented datasets. Transfer learning from ImageNet with ResNet-50 was also used. Results: The final model (Stacked 3D CNN model) obtained an area under the ROC curve of 0.8385 (95% CI: 0.6455-1.0000) on the test set. The model had a sensitivity of 80.00%, a specificity of 69.23% and an accuracy of 73.91% on the test set, for an optimised decision threshold that assigns a higher cost to false negatives. Conclusion: A 3D CNN model was effective at distinguishing benign from malignant pulmonary nodules in 2-[¹⁸F]FDG PET images.
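
The cost-sensitive decision threshold mentioned in the results can be illustrated with a short sketch: given validation scores from any nodule classifier, scan candidate thresholds and keep the one minimising a total cost in which false negatives (missed malignancies) weigh more than false positives. The 3:1 cost ratio and the score distribution below are illustrative assumptions, not values from the study.

# Minimal sketch (assumption, not the study's code) of cost-sensitive
# threshold selection on held-out validation scores.
import numpy as np

def pick_threshold(y_true, y_score, fn_cost=3.0, fp_cost=1.0):
    """Scan candidate thresholds and return the one with minimal total cost."""
    best_t, best_cost = 0.5, np.inf
    for t in np.linspace(0.05, 0.95, 91):
        pred = (y_score >= t).astype(int)
        fn = np.sum((pred == 0) & (y_true == 1))
        fp = np.sum((pred == 1) & (y_true == 0))
        cost = fn_cost * fn + fp_cost * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Toy validation scores: malignant nodules tend to score higher
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 100)
y_score = np.clip(0.4 * y_true + rng.normal(0.3, 0.2, 100), 0, 1)
print(pick_threshold(y_true, y_score))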

2023

OCT Image Synthesis through Deep Generative Models

Authors
Melo, T; Cardoso, J; Carneiro, A; Campilho, A; Mendonça, AM;

Publication
2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS

Abstract
The development of accurate methods for OCT image analysis is highly dependent on the availability of large annotated datasets. As such datasets are usually expensive and hard to obtain, novel approaches based on deep generative models have been proposed for data augmentation. In this work, a flow-based network (SRFlow) and a generative adversarial network (ESRGAN) are used for synthesizing high-resolution OCT B-scans from low-resolution versions of real OCT images. The quality of the images generated by the two models is assessed using two standard fidelity-oriented metrics and a learned perceptual quality metric. The performance of two classification models trained on real and synthetic images is also evaluated. The obtained results show that the images generated by SRFlow preserve higher fidelity to the ground truth, while the outputs of ESRGAN present, on average, better perceptual quality. Independently of the architecture of the network chosen to classify the OCT B-scans, the model's performance always improves when images generated by SRFlow are included in the training set.
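
The evaluation described above combines fidelity-oriented metrics with a learned perceptual quality metric. The sketch below is an assumed setup, not the paper's code: it scores a synthetic B-scan against its real counterpart with PSNR and SSIM (scikit-image) and LPIPS (the lpips package), replicating the grayscale scan across three channels because LPIPS expects RGB-like input.

# Minimal sketch (assumption about the evaluation setup, not the paper's code):
# PSNR, SSIM and LPIPS between a real and a synthesised OCT B-scan.
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_bscan(real: np.ndarray, fake: np.ndarray) -> dict:
    """real/fake: grayscale B-scans as float arrays in [0, 1], same shape."""
    psnr = peak_signal_noise_ratio(real, fake, data_range=1.0)
    ssim = structural_similarity(real, fake, data_range=1.0)
    # LPIPS expects 3-channel tensors scaled to [-1, 1]
    to_t = lambda x: torch.from_numpy(x).float()[None, None].repeat(1, 3, 1, 1) * 2 - 1
    perceptual = lpips.LPIPS(net="alex")(to_t(real), to_t(fake)).item()
    return {"psnr": psnr, "ssim": ssim, "lpips": perceptual}

# Toy example: a real scan versus a slightly noisy synthetic one
rng = np.random.default_rng(2)
real = rng.random((256, 256))
fake = np.clip(real + 0.05 * rng.normal(size=real.shape), 0, 1)
print(score_bscan(real, fake))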

2023

Shining Light on Dark Skin: Pulse Oximetry Correction Models

Authors
Matos, J; Struja, T; Gallifant, J; Charpignon, ML; Cardoso, JS; Celi, LA;

Publication
2023 IEEE 7TH PORTUGUESE MEETING ON BIOENGINEERING, ENBENG

Abstract
Pulse oximeters are medical devices used to assess peripheral arterial oxygen saturation (SpO₂) noninvasively. In contrast, the gold standard requires arterial blood to be drawn to measure the arterial oxygen saturation (SaO₂). Devices currently on the market measure SpO₂ with lower accuracy in populations with darker skin tones. Pulse oximetry inaccuracies can yield episodes of hidden hypoxemia (HH), with SpO₂ ≥ 88% but SaO₂ < 88%. HH can result in less treatment and increased mortality. Despite being flawed, pulse oximeters remain ubiquitously used; debiasing models could alleviate the downstream repercussions of HH. To our knowledge, this is the first study to propose such models. Experiments were conducted using the MIMIC-IV dataset. The cohort includes patients admitted to the Intensive Care Unit with paired (SaO₂, SpO₂) measurements captured within 10 min of each other. We built an XGBoost regression model predicting SaO₂ from SpO₂, patient demographics, physiological data, and treatment information. We used an asymmetric mean squared error as the loss function to minimize falsely elevated predicted values. The model achieved R² = 67.6% among Black patients; the frequency of HH episodes was partially mitigated. Respiratory function was most predictive of SaO₂; race-ethnicity was not a top predictor. This single-center study shows that SpO₂ corrections can be achieved with machine learning. In the future, model validation will be performed on additional patient cohorts featuring diverse settings.
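
An asymmetric mean squared error can be supplied to XGBoost as a custom objective returning the gradient and Hessian of a weighted squared error, with a larger weight whenever the model over-predicts SaO₂ (the falsely elevated case that can hide hypoxemia). The sketch below is illustrative, not the study's code; the 3x over-prediction weight, the toy features and the hyperparameters are assumptions.

# Minimal sketch (assumption, not the study's code) of an asymmetric
# squared-error objective for XGBoost: over-predictions are penalised more.
import numpy as np
import xgboost as xgb

def asymmetric_mse(preds, dtrain, over_weight=3.0):
    """Gradient/Hessian of w * (pred - y)^2, with w > 1 when pred > y."""
    y = dtrain.get_label()
    residual = preds - y
    w = np.where(residual > 0, over_weight, 1.0)
    grad = 2.0 * w * residual
    hess = 2.0 * w
    return grad, hess

# Toy data standing in for (SpO2, demographics, vitals, ...) -> SaO2
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))
y = 90 + 3 * X[:, 0] + rng.normal(scale=1.5, size=500)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=100, obj=asymmetric_mse)
print(booster.predict(dtrain)[:5])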

2023

Evaluating the ability of an artificial-intelligence cloud-based platform designed to provide information prior to locoregional therapy for breast cancer in improving patient's satisfaction with therapy: The CINDERELLA trial

Authors
Kaidar Person, O; Antunes, M; Cardoso, S; Ciani, O; Cruz, H; Di Micco, R; Gentilini, D; Gonçalves, T; Gouveia, P; Heil, J; Kabata, P; Lopes, D; Martinho, M; Martins, H; Mavioso, C; Mika, M; Montenegro, H; Oliveira, P; Pfob, A; Rotmensz, N; Schinköthe, T; Silva, G; Tarricone, R; Cardoso, M;

Publication
PLOS ONE

Abstract
Background: Breast cancer therapy has improved significantly, allowing for different surgical approaches at the same disease stage and therefore offering patients different aesthetic outcomes with similar locoregional control. The purpose of the CINDERELLA trial is to evaluate an artificial-intelligence (AI) cloud-based platform (CINDERELLA platform) versus the standard approach for patient education prior to therapy. Methods: A prospective randomized international multicentre trial comparing two methods for patient education prior to therapy. After institutional ethics approval and written informed consent, patients planned for locoregional treatment will be randomized to the intervention (CINDERELLA platform) or to the control arm. The patients in the intervention arm will use the newly designed web application (CINDERELLA platform, CINDERELLA APProach) to access information related to surgery and/or radiotherapy. Using an AI system, the platform will provide the patient with a picture of her own aesthetic outcome resulting from the surgical procedure she chooses, together with an objective evaluation of this aesthetic outcome (e.g., good/fair). The control group will have access to the standard approach. The primary objectives of the trial are (i) to examine the differences between the treatment arms with regard to patients' pre-treatment expectations and the final aesthetic outcomes and (ii), in the experimental arm only, the agreement between the pre-treatment AI evaluation (output) and the patient's post-therapy self-evaluation. Discussion: The project aims to develop an easy-to-use, cost-effective AI-powered tool that improves shared decision-making processes. We assume that the CINDERELLA APProach will lead to higher satisfaction, better psychosocial status and wellbeing of breast cancer patients, and a reduced need for additional surgeries to improve the aesthetic outcome.
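
Objective (ii) above is an agreement analysis between two ordinal ratings. As an illustrative sketch only (not part of the trial's analysis plan), agreement between the AI's pre-treatment aesthetic rating and the patient's post-therapy self-evaluation could be quantified with a weighted Cohen's kappa, for example:

# Illustrative sketch (not the trial's analysis plan): weighted Cohen's kappa
# between AI aesthetic ratings and patient self-evaluations on an ordinal scale.
from sklearn.metrics import cohen_kappa_score

# Ordinal coding (hypothetical): 0 = poor, 1 = fair, 2 = good, 3 = excellent
ai_rating   = [2, 1, 3, 2, 0, 1, 2]
self_rating = [2, 2, 3, 1, 0, 1, 2]
kappa = cohen_kappa_score(ai_rating, self_rating, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")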
