2020
Authors
Goncalves, T; Silva, W; Cardoso, J;
Publication
XV MEDITERRANEAN CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING AND COMPUTING - MEDICON 2019
Abstract
Breast cancer is a highly mutable and rapidly evolving disease with a large worldwide incidence. Even so, it is estimated that approximately 90% of cases are treatable and curable if detected at an early stage and given the best treatment. Nowadays, with routine breast cancer screening, better clinical treatment plans and proper management of the disease, it is possible to treat most cancers with conservative approaches, also known as breast cancer conservative treatment (BCCT). With such a treatment methodology, it is possible to focus on the aesthetic result of the surgery and the patient's quality of life, which may influence BCCT outcomes. In the past, this assessment was done through subjective methods, requiring a panel of experts; however, with the development of computer vision techniques, objective methods such as BAT(c) and BCCT.core, which perform the assessment based on asymmetry measurements, have come into use. On the other hand, these still require information given by the user, and none of them has been accepted as the gold standard for this task. Recently, with the advent of deep learning, algorithms capable of improving on traditional methods for the detection of breast fiducial points (required for asymmetry measurements) have been proposed and have shown promising results. There is still, however, a large margin for investigation into how to integrate such algorithms into a complete application capable of performing an end-to-end classification of BCCT outcomes. Taking this into account, this thesis presents a comparative study between deep convolutional networks for image segmentation and two quality-driven keypoint detection architectures for the detection of the breast contour. The first uses a deep learning model that has learned to predict the quality (given by the mean squared error) of an array of keypoints and, based on this predicted quality, applies the backpropagation algorithm with gradient descent to improve them; the second uses a deep learning model trained with the quality as a regularization method, applying iterative refinement at each training step to improve the quality of the keypoints fed into the network. Although neither method surpasses the current state of the art, both show promising results for the creation of alternative methodologies to address other regression problems in which the quality metric may be easier to learn. Following the current trend in web development, and with the objective of transferring BCCT.core to an online format, a prototype web application for automatic keypoint detection was developed and is presented in this document. Currently, the user may upload an image and automatically detect and/or manipulate its keypoints. This prototype is fully scalable and can be extended with new functionalities according to users' needs.
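Below is a minimal sketch (in PyTorch) of the first quality-driven refinement idea described in the abstract: a frozen network predicts the quality (mean squared error) of a keypoint array, and gradient descent on that predicted quality improves the keypoints. All names here (QualityNet, NUM_KEYPOINTS, the optimizer settings) are illustrative assumptions, not the thesis implementation.

import torch
import torch.nn as nn

NUM_KEYPOINTS = 37  # hypothetical number of breast-contour keypoints

class QualityNet(nn.Module):
    """Maps a flattened keypoint array to a scalar predicted MSE."""
    def __init__(self, n_keypoints: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_keypoints, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, kpts: torch.Tensor) -> torch.Tensor:
        return self.net(kpts)

def refine_keypoints(model, kpts_init, steps=50, lr=1e-2):
    """Improve keypoints by descending the predicted-quality surface."""
    model.eval()
    for p in model.parameters():    # the quality model stays frozen;
        p.requires_grad_(False)     # only the keypoints are optimized
    kpts = kpts_init.clone().requires_grad_(True)
    optimizer = torch.optim.SGD([kpts], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        predicted_mse = model(kpts).mean()
        predicted_mse.backward()    # gradients flow back to the keypoints
        optimizer.step()
    return kpts.detach()

# usage: refine a random initial guess (stand-in for a detector's output)
model = QualityNet(NUM_KEYPOINTS)
initial = torch.rand(1, 2 * NUM_KEYPOINTS)
refined = refine_keypoints(model, initial)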
2020
Authors
Goncalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;
Publication
HEALTH AND TECHNOLOGY
Abstract
The implementation of routine breast cancer screening and better treatment strategies has made it possible to offer the majority of women the option of breast conservation instead of a mastectomy. The most important aim of breast cancer conservative treatment (BCCT) is to optimize the aesthetic outcome and, implicitly, quality of life (QoL), without jeopardizing local cancer control and overall survival. As a consequence of the impact the aesthetic outcome has on QoL, there has been an effort to define an optimal tool capable of performing this type of evaluation. Moving from the classical subjective aesthetic evaluation of BCCT (either by the patient herself or by a group of clinicians through questionnaires) to an objective aesthetic evaluation (where machine learning and computer vision methods are employed) reduces variability and increases the reproducibility of results. Currently, some offline software applications are available, such as BAT(c) and BCCT.core, which perform the assessment based on asymmetry measurements computed from semi-automatically annotated keypoints. In the literature, one can find algorithms that attempt completely automatic keypoint annotation with reasonable success; however, these algorithms are very time-consuming. As research moves increasingly towards web software applications, such time-consuming tasks are undesirable. In this work, we propose a novel approach to the keypoint detection task, treating the problem in part as image segmentation. This novel approach improves both execution time and results.
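As a rough illustration of the segmentation-based formulation, the sketch below (in PyTorch) predicts a breast-region mask with a small fully convolutional network and then samples contour keypoints from the mask boundary. The tiny network and the uniform boundary sampling are illustrative assumptions, not the paper's exact model.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """A deliberately small fully convolutional net producing a 1-channel mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # logits for the segmentation mask
        )

    def forward(self, x):
        return self.net(x)

def mask_to_keypoints(mask: torch.Tensor, n_points: int = 30):
    """Sample n_points (y, x) coordinates from the mask boundary.

    The boundary is approximated as mask pixels with at least one
    background neighbour, found via a max-pooling erosion.
    """
    m = (mask > 0.5).float()[None, None]            # 1 x 1 x H x W
    eroded = -nn.functional.max_pool2d(-m, 3, stride=1, padding=1)
    boundary = (m - eroded).squeeze().nonzero()     # boundary pixel coords
    if len(boundary) == 0:
        return boundary
    idx = torch.linspace(0, len(boundary) - 1, n_points).long()
    return boundary[idx]

# usage on a random image (stand-in for a patient photograph)
net = TinySegNet()
image = torch.rand(1, 3, 128, 128)
mask = torch.sigmoid(net(image))[0, 0]
keypoints = mask_to_keypoints(mask)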
2020
Authors
Oliveira, SP; Pinto, JR; Goncalves, T; Canas Marques, R; Cardoso, MJ; Oliveira, HP; Cardoso, JS;
Publication
APPLIED SCIENCES-BASEL
Abstract
Human epidermal growth factor receptor 2 (HER2) evaluation commonly requires immunohistochemistry (IHC) tests on breast cancer tissue, in addition to the standard haematoxylin and eosin (H&E) staining tests. Additional costs and time spent on further testing might be avoided if HER2 overexpression could be effectively inferred from H&E stained slides, as a preliminary indication of the IHC result. In this paper, we propose the first method that aims to achieve this goal. The proposed method is based on multiple instance learning (MIL), using a convolutional neural network (CNN) that separately processes H&E stained slide tiles and outputs an IHC label. This CNN is pretrained on IHC stained slide tiles but does not use these data during inference/testing. H&E tiles are extracted from invasive tumour areas segmented with the HASHI algorithm. The individual tile labels are then combined to obtain a single label for the whole slide. The network was trained on slides from the HER2 Scoring Contest dataset (HER2SC) and tested on two disjoint subsets of slides from the HER2SC database and the TCGA-TCIA-BRCA (BRCA) collection. The proposed method attained 83.3% classification accuracy on the HER2SC test set and 53.8% on the BRCA test set. Although further efforts should be devoted to achieving improved performance, the obtained results are promising, suggesting that it is possible to perform HER2 overexpression classification on H&E stained tissue slides.
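The tile-to-slide aggregation at the heart of the MIL pipeline can be sketched as follows (in PyTorch): a CNN scores individual H&E tiles, and the per-tile probabilities are pooled into a single slide-level HER2 label. The tiny CNN and the mean pooling are illustrative assumptions; the paper's exact aggregation may differ.

import torch
import torch.nn as nn

class TileCNN(nn.Module):
    """Scores one tile; a small stand-in for the pretrained network."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, tiles):                       # N x 3 x H x W
        z = self.features(tiles).flatten(1)
        return self.classifier(z)                   # N x n_classes logits

def slide_label(model, tiles):
    """Pool per-tile class probabilities into a single slide label."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(tiles), dim=1)  # per-tile probabilities
    return probs.mean(dim=0).argmax().item()        # mean-pool, then argmax

# usage: 32 random 64x64 tiles stand in for tiles from tumour regions
tiles = torch.rand(32, 3, 64, 64)
print(slide_label(TileCNN(), tiles))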
2020
Authors
Sequeira, AF; Silva, W; Pinto, JR; Goncalves, T; Cardoso, JS;
Publication
2020 8TH INTERNATIONAL WORKSHOP ON BIOMETRICS AND FORENSICS (IWBF 2020)
Abstract
Presentation attack detection (PAD) methods are commonly evaluated using metrics based on the predicted labels. This is a limitation, especially for the more elusive methods based on deep learning, which can freely learn the most suitable features. Though often more accurate, these models operate as complex black boxes, so the inner processes that sustain their predictions remain baffling. Interpretability tools are now being used to delve deeper into the operation of machine learning methods, especially artificial neural networks, to better understand how they reach their decisions. In this paper, we make a case for the integration of interpretability tools in the evaluation of PAD. A simple model for face PAD, based on convolutional neural networks, was implemented and evaluated using both traditional metrics (APCER, BPCER and EER) and interpretability tools (Grad-CAM), using data from the ROSE Youtu video collection. The results show that interpretability tools can capture more completely the intricate behavior of the implemented model, and enable the identification of certain properties that should be verified by a PAD method that is robust, coherent, meaningful, and able to generalize adequately to unseen data and attacks. One can conclude that, with further efforts devoted towards higher objectivity in interpretability, this can be the key to obtaining deeper and more thorough PAD performance evaluation setups.
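For reference, a minimal Grad-CAM sketch (in PyTorch) of the kind used to inspect the face-PAD CNN: gradients of the class score with respect to the last convolutional feature maps are averaged into channel weights, producing a coarse map of the regions that drove the decision. The tiny network is an illustrative stand-in for the PAD model, not the paper's architecture.

import torch
import torch.nn as nn

class PADNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(8, 2))  # bona fide vs. attack

    def forward(self, x):
        self.fmaps = self.conv(x)      # keep feature maps for Grad-CAM
        self.fmaps.retain_grad()
        return self.head(self.fmaps)

def grad_cam(model, image, target_class):
    """Return an (H, W) relevance map for the chosen class."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    weights = model.fmaps.grad.mean(dim=(2, 3), keepdim=True)  # GAP of grads
    cam = torch.relu((weights * model.fmaps).sum(dim=1))[0]    # weighted sum
    return (cam / (cam.max() + 1e-8)).detach()                 # normalize

# usage on a random frame (stand-in for a ROSE Youtu video frame)
cam = grad_cam(PADNet(), torch.rand(1, 3, 64, 64), target_class=1)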
2020
Authors
Gonçalves, T; Silva, W; Cardoso, MJ; Cardoso, JS;
Publication
Proceedings
Abstract
2020
Authors
Pinto, JR; Gonçalves, T; Pinto, C; Sanhudo, L; Fonseca, J; Gonçalves, F; Carvalho, P; Cardoso, JS;
Publication
4th IEEE International Conference on Image Processing, Applications and Systems, IPAS 2020, Virtual Event, Italy, December 9-11, 2020
Abstract
Despite recent efforts, accuracy in group emotion recognition is still generally low. One of the reasons for these underwhelming performance levels is the scarcity of available labeled data, which, like the approaches in the literature, is mainly focused on still images. In this work, we address this problem by adapting an inflated ResNet-50 pretrained for a similar task, activity recognition, for which large labeled video datasets are available. Audio information is processed by a Bidirectional Long Short-Term Memory (Bi-LSTM) network receiving extracted features. A multimodal approach fuses audio and video information at the score level using a support vector machine classifier. Evaluation with data from the EmotiW 2020 AV Group-Level Emotion sub-challenge shows a final test accuracy of 65.74% for the multimodal approach, approximately 18% higher than the official baseline. The results show that activity recognition pretraining offers performance advantages for group emotion recognition, and that audio is essential to improve the accuracy and robustness of video-based recognition. © 2020 IEEE.
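The score-level fusion step can be sketched as follows (Python with scikit-learn): per-class scores from the video and audio models are concatenated, and a support vector machine makes the final group-emotion decision. The random scores below stand in for real model outputs, and the three-class label set (negative / neutral / positive) is an assumption mirroring the EmotiW setup.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_classes = 200, 3

# stand-ins for softmax scores from the inflated ResNet-50 (video)
# and the Bi-LSTM (audio) on the training clips
video_scores = rng.dirichlet(np.ones(n_classes), size=n_train)
audio_scores = rng.dirichlet(np.ones(n_classes), size=n_train)
labels = rng.integers(0, n_classes, size=n_train)

# concatenate the two modalities' scores into one 6-D feature per clip
fused_features = np.hstack([video_scores, audio_scores])
fusion_svm = SVC(kernel="rbf").fit(fused_features, labels)

# usage: fuse the two modalities' scores for one unseen clip
clip = np.hstack([rng.dirichlet(np.ones(n_classes)),
                  rng.dirichlet(np.ones(n_classes))])[None]
print(fusion_svm.predict(clip))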