Publications

Publications by Jaime Cardoso

2020

A novel approach to keypoint detection for the aesthetic evaluation of breast cancer surgery outcomes

Authors
Goncalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;

Publication
HEALTH AND TECHNOLOGY

Abstract
The implementation of routine breast cancer screening and better treatment strategies made it possible to offer the majority of women the option of breast conservation instead of a mastectomy. The most important aim of breast cancer conservative treatment (BCCT) is to optimize the aesthetic outcome and, implicitly, quality of life (QoL), without jeopardizing local cancer control and overall survival. As a consequence of the impact the aesthetic outcome has on QoL, there has been an effort to define an optimal tool capable of performing this type of evaluation. Moving from the classical subjective aesthetic evaluation of BCCT (performed either by the patient herself or by a group of clinicians through questionnaires) to an objective aesthetic evaluation (where machine learning and computer vision methods are employed) leads to less variability and increased reproducibility of results. Currently, there are some offline software applications available, such as BAT© and BCCT.core, which perform the assessment based on asymmetry measurements computed from semi-automatically annotated keypoints. In the literature, one can find algorithms that attempt fully automatic keypoint annotation with reasonable success. However, these algorithms are very time-consuming. As research moves increasingly towards web-based software applications, such time-consuming tasks are undesirable. In this work, we propose a novel approach to the keypoint detection task, treating the problem in part as image segmentation. This novel approach can improve both execution time and results.
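As an illustration of the idea sketched in this abstract, the snippet below casts keypoint detection as a dense, segmentation-like prediction problem: each keypoint is encoded as a Gaussian heatmap and a small fully convolutional network regresses one heatmap per keypoint. The network layout (KeypointSegNet), image size and Gaussian sigma are illustrative assumptions and do not reproduce the architecture used in the paper.

```python
# Hypothetical sketch: keypoint detection treated as dense (segmentation-like) prediction.
# Each keypoint is encoded as a Gaussian heatmap; a toy fully convolutional network
# predicts one heatmap per keypoint, and the peak of each map gives the location.
import torch
import torch.nn as nn


def gaussian_heatmap(h, w, cx, cy, sigma=4.0):
    """Build an (h, w) heatmap with a Gaussian centred on column cx, row cy."""
    ys = torch.arange(h).view(-1, 1).float()
    xs = torch.arange(w).view(1, -1).float()
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))


class KeypointSegNet(nn.Module):
    """Toy fully convolutional network predicting one heatmap per keypoint."""

    def __init__(self, n_keypoints=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_keypoints, 1),
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    net = KeypointSegNet(n_keypoints=2)
    img = torch.rand(1, 3, 64, 64)                          # dummy image
    target = torch.stack([gaussian_heatmap(64, 64, 20, 30),
                          gaussian_heatmap(64, 64, 45, 10)]).unsqueeze(0)
    loss = nn.functional.mse_loss(net(img), target)          # heatmap regression loss
    loss.backward()
    # At inference time, each keypoint is the argmax of its predicted heatmap.
    pred = net(img).detach()
    flat_idx = pred.view(2, -1).argmax(dim=1)
    print([(int(i // 64), int(i % 64)) for i in flat_idx])   # (row, col) per keypoint
```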

2020

Secure Triplet Loss for End-to-End Deep Biometrics

Authors
Pinto, JR; Cardoso, JS; Correia, MV;

Publication
2020 8TH INTERNATIONAL WORKSHOP ON BIOMETRICS AND FORENSICS (IWBF 2020)

Abstract
Although deep learning is being widely adopted for every topic in pattern recognition, its use for secure and cancelable biometrics is currently reserved for feature extraction and biometric data preprocessing, limiting achievable performance. In this paper, we propose a novel formulation of the triplet loss methodology, designated as secure triplet loss, that enables biometric template cancelability with end-to-end convolutional neural networks, using easily changeable keys. Trained and evaluated for electrocardiogram-based biometrics, the network proved easy to optimize with the modified triplet loss and achieved superior performance when compared with the state-of-the-art (10.63% equal error rate with data from 918 subjects of the UofTDB database). Additionally, it ensured biometric template security and effective template cancelability. Although further efforts are needed to avoid template linkability, the proposed secure triplet loss shows promise in template cancelability and non-invertibility for biometric recognition while taking advantage of the full power of convolutional neural networks.
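To make the general mechanism behind this formulation more concrete, the following sketch combines a standard margin-based triplet loss with a key-conditioned embedding network, so that the produced template depends on an easily changeable key. The encoder (KeyedEncoder), the signal and key sizes, and the margin are illustrative assumptions; the exact secure triplet loss formulation in the paper may differ.

```python
# Illustrative sketch only: a standard triplet loss on top of a key-conditioned
# embedding, conveying the idea of binding a changeable key to the template.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KeyedEncoder(nn.Module):
    """Maps a 1-D biometric signal plus a binary key to a unit-norm embedding (toy model)."""

    def __init__(self, signal_len=1000, key_len=128, emb_dim=64):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(signal_len + key_len, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, signal, key):
        return F.normalize(self.fc(torch.cat([signal, key], dim=1)), dim=1)


def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard margin-based triplet loss on squared embedding distances."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()


if __name__ == "__main__":
    enc = KeyedEncoder()
    key = torch.randint(0, 2, (8, 128)).float()            # changeable user key
    a, p, n = (torch.randn(8, 1000) for _ in range(3))     # dummy ECG segments
    loss = triplet_loss(enc(a, key), enc(p, key), enc(n, key))
    loss.backward()
    print(float(loss))
```

Cancelability in this toy setup comes from re-issuing the key: a compromised template can be revoked by enrolling the same user with a new key, which yields a different embedding.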

2020

Offline computer-aided diagnosis for Glaucoma detection using fundus images targeted at mobile devices

Authors
Martins, J; Cardoso, JS; Soares, F;

Publication
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE

Abstract
Background and Objective: Glaucoma, an eye condition that leads to permanent blindness, is typically asymptomatic and therefore difficult to diagnose in time. However, if diagnosed in time, Glaucoma can be effectively slowed down with adequate treatment; hence, an early diagnosis is of utmost importance. Nonetheless, the conventional approaches to diagnose Glaucoma rely on expensive and bulky equipment that requires qualified experts, making it difficult, costly and time-consuming to screen large numbers of people. Consequently, new alternatives to diagnose Glaucoma that overcome these issues should be explored. Methods: This work proposes an interpretable computer-aided diagnosis (CAD) pipeline that is capable of diagnosing Glaucoma from fundus images and of running offline on mobile devices. Several public datasets of fundus images were merged and used to build Convolutional Neural Networks (CNNs) that perform segmentation and classification tasks. These networks are then used to build a pipeline for Glaucoma assessment that outputs a Glaucoma confidence level and also provides several morphological features and segmentations of relevant structures, resulting in an interpretable Glaucoma diagnosis. To assess the performance of this method in a restricted environment, the pipeline was integrated into a mobile application and its time and space complexities were assessed. Results: On the test set, the developed pipeline achieved 0.91 and 0.75 Intersection over Union (IoU) in the optic disc and optic cup segmentation, respectively. Regarding classification, an accuracy of 0.87, a sensitivity of 0.85 and an AUC of 0.93 were attained. Moreover, the pipeline runs on an average Android smartphone in under two seconds. Conclusions: The results demonstrate the potential of this method to contribute to an early Glaucoma diagnosis. The proposed approach achieved similar or slightly better metrics than current CAD systems for Glaucoma assessment while running on more restricted devices. This pipeline can, therefore, be used to construct accurate and affordable CAD systems that could enable large Glaucoma screenings, contributing to an earlier diagnosis of this condition.
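As a concrete illustration of the kind of quantities involved in such a pipeline, the sketch below computes the Intersection over Union between two binary masks (the segmentation metric reported above) and a vertical cup-to-disc ratio from optic cup and disc masks, a morphological feature commonly used in Glaucoma assessment. The specific features computed by the paper's pipeline are not reproduced here and may differ.

```python
# Sketch of two quantities related to the abstract: IoU between predicted and
# reference masks, and a vertical cup-to-disc ratio from binary disc/cup masks.
import numpy as np


def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0


def vertical_cdr(cup: np.ndarray, disc: np.ndarray) -> float:
    """Vertical cup-to-disc ratio: ratio of the vertical extents of the two masks."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return (rows.max() - rows.min() + 1) if rows.size else 0
    disc_h = vertical_extent(disc)
    return vertical_extent(cup) / disc_h if disc_h else 0.0


if __name__ == "__main__":
    disc = np.zeros((64, 64), dtype=bool); disc[10:50, 10:50] = True  # dummy disc mask
    cup = np.zeros((64, 64), dtype=bool);  cup[20:40, 20:40] = True   # dummy cup mask
    print(iou(cup, disc), vertical_cdr(cup, disc))
```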

2020

Weakly-Supervised Classification of HER2 Expression in Breast Cancer Haematoxylin and Eosin Stained Slides

Authors
Oliveira, SP; Pinto, JR; Goncalves, T; Canas Marques, R; Cardoso, MJ; Oliveira, HP; Cardoso, JS;

Publication
APPLIED SCIENCES-BASEL

Abstract
Human epidermal growth factor receptor 2 (HER2) evaluation commonly requires immunohistochemistry (IHC) tests on breast cancer tissue, in addition to the standard haematoxylin and eosin (H&E) staining tests. Additional costs and time spent on further testing might be avoided if HER2 overexpression could be effectively inferred from H&E stained slides, as a preliminary indication of the IHC result. In this paper, we propose the first method that aims to achieve this goal. The proposed method is based on multiple instance learning (MIL), using a convolutional neural network (CNN) that separately processes H&E stained slide tiles and outputs an IHC label. This CNN is pretrained on IHC stained slide tiles but does not use these data during inference/testing. H&E tiles are extracted from invasive tumour areas segmented with the HASHI algorithm. The individual tile labels are then combined to obtain a single label for the whole slide. The network was trained on slides from the HER2 Scoring Contest dataset (HER2SC) and tested on two disjoint subsets of slides from the HER2SC database and the TCGA-TCIA-BRCA (BRCA) collection. The proposed method attained 83.3% classification accuracy on the HER2SC test set and 53.8% on the BRCA test set. Although further efforts should be devoted to achieving improved performance, the obtained results are promising, suggesting that it is possible to perform HER2 overexpression classification on H&E stained tissue slides.
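To illustrate the multiple instance learning scheme described above, the sketch below scores individual H&E tiles with a toy CNN and pools the per-tile probabilities into a single slide-level decision. The model (TileClassifier), tile size, and the mean-probability aggregation rule are illustrative assumptions; the paper's exact combination strategy may differ.

```python
# Hedged MIL sketch: a tile-level CNN scores each tile from a slide, and the tile
# predictions are pooled into one slide-level label.
import torch
import torch.nn as nn


class TileClassifier(nn.Module):
    """Toy CNN producing a HER2-positive probability for each tile."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 1)

    def forward(self, tiles):                     # tiles: (n_tiles, 3, H, W)
        feats = self.features(tiles).flatten(1)   # (n_tiles, 8)
        return torch.sigmoid(self.head(feats)).squeeze(1)


def slide_label(tile_probs: torch.Tensor, threshold: float = 0.5) -> int:
    """Aggregate per-tile probabilities into a single slide-level decision."""
    return int(tile_probs.mean() >= threshold)


if __name__ == "__main__":
    model = TileClassifier()
    tiles = torch.rand(32, 3, 128, 128)           # dummy tiles from one slide
    with torch.no_grad():
        probs = model(tiles)
    print(slide_label(probs))
```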

2020

Interpretable Biometrics: Should We Rethink How Presentation Attack Detection is Evaluated?

Authors
Sequeira, AF; Silva, W; Pinto, JR; Goncalves, T; Cardoso, JS;

Publication
2020 8TH INTERNATIONAL WORKSHOP ON BIOMETRICS AND FORENSICS (IWBF 2020)

Abstract
Presentation attack detection (PAD) methods are commonly evaluated using metrics based on the predicted labels. This is a limitation, especially for more elusive methods based on deep learning, which can freely learn the most suitable features. Though often more accurate, these models operate as complex black boxes, making the inner processes that sustain their predictions hard to understand. Interpretability tools are now being used to delve deeper into the operation of machine learning methods, especially artificial neural networks, to better understand how they reach their decisions. In this paper, we make a case for the integration of interpretability tools in the evaluation of PAD. A simple model for face PAD, based on convolutional neural networks, was implemented and evaluated using both traditional metrics (APCER, BPCER and EER) and interpretability tools (Grad-CAM), using data from the ROSE Youtu video collection. The results show that interpretability tools can capture the intricate behavior of the implemented model more completely, and enable the identification of certain properties that should be verified by a PAD method that is robust, coherent, meaningful, and able to generalize adequately to unseen data and attacks. One can conclude that, with further efforts devoted towards higher objectivity in interpretability, this can be the key to obtaining deeper and more thorough PAD performance evaluation setups.
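For readers unfamiliar with the interpretability tool mentioned above, the snippet below shows a minimal Grad-CAM computation on a toy PAD-style classifier: the last convolutional feature maps are weighted by the gradients of the class score and summed into a heatmap. The model (TinyPADNet) and input are placeholders and do not reproduce the network or evaluation protocol used in the paper.

```python
# Minimal Grad-CAM sketch on a toy bona fide vs. attack classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyPADNet(nn.Module):
    """Toy CNN standing in for a face PAD classifier."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(16, 2)

    def forward(self, x):
        self.fmap = self.conv(x)                           # keep feature maps for Grad-CAM
        pooled = F.adaptive_avg_pool2d(self.fmap, 1).flatten(1)
        return self.fc(pooled)


def grad_cam(model, image, target_class):
    """Grad-CAM heatmap: gradient-weighted sum of the last convolutional feature maps."""
    logits = model(image)
    score = logits[0, target_class]
    grads, = torch.autograd.grad(score, model.fmap)
    weights = grads.mean(dim=(2, 3), keepdim=True)         # global-average-pooled gradients
    cam = F.relu((weights * model.fmap).sum(dim=1))        # (1, H, W)
    return (cam / (cam.max() + 1e-8)).detach()


if __name__ == "__main__":
    net = TinyPADNet()
    face = torch.rand(1, 3, 64, 64)                        # dummy face crop
    heatmap = grad_cam(net, face, target_class=1)
    print(heatmap.shape)                                   # torch.Size([1, 64, 64])
```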

2020

Deep Image Segmentation for Breast Keypoint Detection

Authors
Gonçalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;

Publication
Proceedings

Abstract
The main aim of breast cancer conservative treatment is the optimisation of the aesthetic outcome and, implicitly, women’s quality of life, without jeopardising local cancer control and overall survival. Moreover, there has been an effort to define an optimal tool capable of performing the aesthetic evaluation of breast cancer conservative treatment outcomes. Recently, a deep learning algorithm was proposed that integrates the learning of keypoint probability maps in the loss function as a regularisation term for the robust learning of keypoint localisation. However, it achieves its best results when used in cooperation with a shortest-path algorithm that models images as graphs. In this work, we analysed a novel algorithm, based on the interaction of deep image segmentation and deep keypoint detection models, capable of improving both state-of-the-art performance and execution time on the breast keypoint detection task.
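As background for the "images as graphs" idea mentioned above, the sketch below runs a plain Dijkstra shortest path over a 2-D pixel cost map, the generic mechanism behind graph-based contour tracing. The cost map and endpoints are illustrative assumptions; this is not the specific shortest-path formulation used in the earlier work referenced in the abstract.

```python
# Generic sketch: pixels are graph nodes, edges connect 4-neighbours, and edge costs
# come from a per-pixel cost map; Dijkstra finds the cheapest path between two points.
import heapq
import numpy as np


def shortest_path(cost: np.ndarray, start, goal):
    """Dijkstra over a 2-D cost map; returns the list of (row, col) pixels on the path."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cost = rng.random((32, 32)) + 0.1          # stand-in for a gradient-based cost map
    print(len(shortest_path(cost, (0, 0), (31, 31))))
```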
