2023
Authors
Cruz, R; Silva, DTE; Goncalves, T; Carneiro, D; Cardoso, JS;
Publication
SENSORS
Abstract
Semantic segmentation consists of classifying each pixel according to a set of classes. Conventional models spend as much effort classifying easy-to-segment pixels as they do classifying hard-to-segment pixels. This is inefficient, especially when deploying to situations with computational constraints. In this work, we propose a framework wherein the model first produces a rough segmentation of the image, and then the patches of the image estimated as hard to segment are refined. The framework is evaluated on four datasets (autonomous driving and biomedical), across four state-of-the-art architectures. Our method accelerates inference by a factor of four, with additional gains in training time, at the cost of some output quality.
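A minimal sketch of the coarse-then-refine idea described in the abstract, assuming a PyTorch setting: `coarse_net`, `refine_net`, the entropy-based hardness score, and the fixed patch grid are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: cheap coarse pass on a downscaled image, then refine only the
# patches whose predictions look hard (high mean entropy).
import torch
import torch.nn.functional as F

def segment_coarse_then_refine(image, coarse_net, refine_net, patch=64, top_k=8):
    """image: (1, 3, H, W) tensor; returns (1, C, H, W) per-pixel class logits.
    Assumes refine_net(crop) returns logits with the same spatial size as crop."""
    _, _, H, W = image.shape

    # 1) Rough segmentation from a low-resolution copy (the cheap pass).
    small = F.interpolate(image, scale_factor=0.25, mode="bilinear", align_corners=False)
    logits = F.interpolate(coarse_net(small), size=(H, W), mode="bilinear", align_corners=False)

    # 2) Score each patch by mean prediction entropy, a proxy for "hard to segment".
    prob = logits.softmax(dim=1)
    entropy = -(prob * prob.clamp_min(1e-8).log()).sum(dim=1, keepdim=True)
    patch_scores = F.avg_pool2d(entropy, kernel_size=patch, stride=patch)

    # 3) Re-run only the top-k hardest patches at full resolution.
    flat = patch_scores.flatten()
    for idx in flat.topk(min(top_k, flat.numel())).indices.tolist():
        py, px = divmod(idx, patch_scores.shape[-1])
        y0, x0 = py * patch, px * patch
        crop = image[:, :, y0:y0 + patch, x0:x0 + patch]
        logits[:, :, y0:y0 + patch, x0:x0 + patch] = refine_net(crop)

    return logits
```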
2023
Authors
Gouveia, M; Castro, E; Rebelo, A; Cardoso, JS; Patrão, B;
Publication
Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2023, Volume 4: BIOSIGNALS, Lisbon, Portugal, February 16-18, 2023.
Abstract
2023
Authors
Montezuma, D; Oliveira, SP; Neto, PC; Oliveira, D; Monteiro, A; Cardoso, JS; Macedo-Pinto, I;
Publication
MODERN PATHOLOGY
Abstract
Training machine learning models for artificial intelligence (AI) applications in pathology often requires extensive annotation by human experts, but there is little guidance on the subject. In this work, we aimed to describe our experience and provide a simple, useful, and practical guide addressing annotation strategies for AI development in computational pathology. Annotation methodology will vary significantly depending on the specific study's objectives, but common difficulties will be present across different settings. We summarize key aspects and issue guiding principles regarding team interaction, ground-truth quality assessment, different annotation types, and available software and hardware options and address common difficulties while annotating. This guide was specifically designed for pathology annotation, intending to help pathologists, other researchers, and AI developers with this process. (c) 2022 THE AUTHORS. Published by Elsevier Inc. on behalf of the United States & Canadian Academy of Pathology. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
2023
Authors
Silva, W; Gonçalves, T; Härmä, K; Schröder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;
Publication
Scientific Reports
Abstract
The original version of this Article contained an error in the Acknowledgements section. “This work was partially funded by the Project TAMI—Transparent Artificial Medical Intelligence (NORTE-01-0247-FEDER-045905) financed by ERDF—European Regional Fund through the North Portugal Regional Operational Program—NORTE 2020 and by the Portuguese Foundation for Science and Technology—FCT under the CMU—Portugal International Partnership, and also by the Portuguese Foundation for Science and Technology—FCT within PhD grants SFRH/BD/139468/2018 and 2020.06434.BD. The authors thank the Swiss National Science Foundation grant number 198388, as well as the Lindenhof foundation for their grant support.” now reads: “This work was supported by National Funds through the Portuguese Funding Agency, FCT–Foundation for Science and Technology Portugal, under Project LA/P/0063/2020, and also by the Portuguese Foundation for Science and Technology - FCT within PhD grants SFRH/BD/139468/2018 and 2020.06434.BD. The authors thank the Swiss National Science Foundation grant number 198388, as well as the Lindenhof foundation for their grant support.” The original Article has been corrected. © The Author(s) 2023.
2023
Authors
Freitas, N; Silva, D; Mavioso, C; Cardoso, MJ; Cardoso, JS;
Publication
BIOENGINEERING-BASEL
Abstract
Breast cancer conservative treatment (BCCT) is a form of treatment commonly used for patients with early breast cancer. This procedure consists of removing the cancer and a small margin of surrounding tissue, while leaving the healthy tissue intact. In recent years, this procedure has become increasingly common due to identical survival rates and better cosmetic outcomes than other alternatives. Although significant research has been conducted on BCCT, there is no gold standard for evaluating the aesthetic results of the treatment. Recent works have proposed the automatic classification of cosmetic results based on breast features extracted from digital photographs. The computation of most of these features requires the representation of the breast contour, which becomes key to the aesthetic evaluation of BCCT. State-of-the-art methods use conventional image processing tools that automatically detect breast contours based on the shortest path applied to the Sobel filter result in a 2D digital photograph of the patient. However, because the Sobel filter is a general edge detector, it treats edges indistinguishably, i.e., it detects too many edges that are not relevant to breast contour detection and too few weak breast contours. In this paper, we propose an improvement to this method that replaces the Sobel filter with a novel neural network solution to improve breast contour detection based on the shortest path. The proposed solution learns effective representations for the edges between the breasts and the torso wall. We obtain state-of-the-art results on a dataset that was used for developing previous models. Furthermore, we tested these models on a new dataset containing more variable photographs and show that the new approach generalizes better, since the previously developed deep models do not perform as well when faced with a different dataset for testing. The main contribution of this paper is to further improve the capabilities of models that automatically perform the objective classification of BCCT aesthetic results, by improving upon the current standard technique for detecting breast contours in digital photographs. To that end, the models introduced are simple to train and test on new datasets, which makes this approach easily reproducible.
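A minimal sketch of the shortest-path contour tracing the abstract refers to, assuming the edge map comes from either a Sobel filter or a learned edge detector; the cost definition and the left-to-right, three-neighbour moves are illustrative assumptions.

```python
# Sketch: trace a left-to-right contour of minimal cost over an edge map
# with dynamic programming (strong edges become cheap to traverse).
import numpy as np

def trace_contour(edge_map):
    """edge_map: 2D float array, higher = stronger edge. Returns one row index per column."""
    h, w = edge_map.shape
    cost = 1.0 / (edge_map + 1e-6)                  # invert: follow strong edges
    acc = np.full((h, w), np.inf)
    back = np.zeros((h, w), dtype=int)
    acc[:, 0] = cost[:, 0]

    for x in range(1, w):                           # sweep column by column
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)   # up / straight / down neighbours
            prev = acc[lo:hi, x - 1]
            k = int(prev.argmin())
            acc[y, x] = prev[k] + cost[y, x]
            back[y, x] = lo + k

    # Backtrack from the cheapest endpoint in the last column.
    path = [int(acc[:, -1].argmin())]
    for x in range(w - 1, 0, -1):
        path.append(int(back[path[-1], x]))
    return np.array(path[::-1])                     # row coordinate for every column
```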
2022
Authors
Silva, W; Gonçalves, T; Härmä, K; Schröder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;
Publication
SCIENTIFIC REPORTS
Abstract
Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnostic mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be of extreme utility by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most similar image, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large and publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth for the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a thorough and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and yield qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
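A minimal sketch of the nDCG evaluation mentioned above, assuming relevance grades derived from the radiologist's ordering of the catalogue images; the grading scale, the cut-off k, and the image identifiers are illustrative.

```python
# Sketch: score a retrieval system's ranking against graded relevance with nDCG.
import numpy as np

def dcg(relevances):
    relevances = np.asarray(relevances, dtype=float)
    ranks = np.arange(1, len(relevances) + 1)
    return float(np.sum((2.0 ** relevances - 1.0) / np.log2(ranks + 1)))

def ndcg(system_ranking, relevance_by_image, k=10):
    """system_ranking: image ids ordered by the retrieval system (best first)."""
    gains = [relevance_by_image[i] for i in system_ranking[:k]]
    ideal = sorted(relevance_by_image.values(), reverse=True)[:k]
    return dcg(gains) / dcg(ideal) if dcg(ideal) > 0 else 0.0

# Toy example: three catalogue images graded 0-3 by agreement with the query's severity.
relevance = {"img_a": 3, "img_b": 1, "img_c": 0}
print(ndcg(["img_b", "img_a", "img_c"], relevance, k=3))   # ~0.71
```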