2023
Authors
Albuquerque, T; Fang, ML; Wiestler, B; Delbridge, C; Vasconcelos, MJM; Cardoso, JS; Schüffler, P;
Publication
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2023 WORKSHOPS
Abstract
The most malignant tumors of the central nervous system are adult-type diffuse gliomas. Historically, glioma subtype classification has been based on morphological features. Since 2016, however, the WHO has recognized that molecular evaluation is critical for subtyping. Among molecular markers, the mutation status of IDH1 and the codeletion of 1p/19q are crucial for the precise diagnosis of these malignancies. In pathology laboratories, however, manual screening for those markers is time-consuming and susceptible to error. To overcome these limitations, we propose a novel multimodal biomarker classification method that integrates image features derived from brain magnetic resonance imaging and histopathological exams. The proposed model consists of two branches: the first takes as input a multi-scale Hematoxylin and Eosin whole slide image, and the second uses the pre-segmented region of interest from the magnetic resonance imaging. Both branches are based on convolutional neural networks. After the exams pass through the two embedding branches, the output feature vectors are concatenated, and a multi-layer perceptron classifies the glioma biomarkers as a multi-class problem. In this work, several fusion strategies were studied, including a cascade model with mid-fusion, a mid-fusion model, a late fusion model, and a mid-context fusion model. The models were tested using a publicly available data set from The Cancer Genome Atlas. Our cross-validated classification models achieved an area under the curve of 0.874, 0.863, and 0.815 for the proposed multimodal, magnetic resonance imaging, and Hematoxylin and Eosin stained slide images, respectively, indicating that our multimodal model outperforms its unimodal counterparts and the state-of-the-art glioma biomarker classification methods.
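The two-branch mid-fusion scheme described in this abstract (embed each modality, concatenate the feature vectors, classify with an MLP) can be sketched in a few lines of numpy. All dimensions, the random projections standing in for the CNN branches, and the MLP sizes below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_embed(x, w):
    """Stand-in for a CNN branch: linear projection + ReLU,
    mean-pooled over spatial positions into one feature vector."""
    h = np.maximum(x @ w, 0.0)   # (positions, d_hidden)
    return h.mean(axis=0)        # (d_hidden,)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical per-modality inputs: WSI patch features and MRI ROI features.
wsi_feats = rng.normal(size=(64, 128))   # 64 patches  x 128 raw features
mri_feats = rng.normal(size=(32, 96))    # 32 regions  x 96 raw features
w_wsi = rng.normal(size=(128, 32))
w_mri = rng.normal(size=(96, 32))

# Mid-fusion: concatenate the two embeddings, then an MLP classifier head.
fused = np.concatenate([branch_embed(wsi_feats, w_wsi),
                        branch_embed(mri_feats, w_mri)])   # (64,)
w1 = rng.normal(size=(64, 16))
w2 = rng.normal(size=(16, 3))
probs = softmax(np.maximum(fused @ w1, 0.0) @ w2)          # 3 biomarker classes
```

The late-fusion variant mentioned in the abstract would instead run a classifier per branch and combine the class scores, rather than the embeddings.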
2024
Authors
DeAndres-Tame, I; Tolosana, R; Melzi, P; Vera-Rodriguez, R; Kim, M; Rathgeb, C; Liu, XM; Morales, A; Fierrez, J; Ortega-Garcia, J; Zhong, ZZ; Huang, YG; Mi, YX; Ding, SH; Zhou, SG; He, S; Fu, LZ; Cong, H; Zhang, RY; Xiao, ZH; Smirnov, E; Pimenov, A; Grigorev, A; Timoshenko, D; Asfaw, KM; Low, CY; Liu, H; Wang, CY; Zuo, Q; He, ZX; Shahreza, HO; George, A; Unnervik, A; Rahimi, P; Marcel, E; Neto, PC; Huber, M; Kolf, JN; Damer, N; Boutros, F; Cardoso, JS; Sequeira, AF; Atzori, A; Fenu, G; Marras, M; Struc, V; Yu, J; Li, ZJ; Li, JC; Zhao, WS; Lei, Z; Zhu, XY; Zhang, XY; Biesseck, B; Vidal, P; Coelho, L; Granada, R; Menotti, D;
Publication
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW
Abstract
Synthetic data is gaining increasing relevance for training machine learning models. This is mainly motivated by several factors, such as the lack of real data and of intra-class variability, the time and errors involved in manual labeling, and, in some cases, privacy concerns. This paper presents an overview of the 2nd edition of the Face Recognition Challenge in the Era of Synthetic Data (FRCSyn), organized at CVPR 2024. FRCSyn aims to investigate the use of synthetic data in face recognition to address current technological limitations, including data privacy concerns, demographic biases, generalization to novel scenarios, and performance constraints in challenging situations such as aging, pose variations, and occlusions. Unlike the 1st edition, in which only synthetic data from the DCFace and GANDiffFace methods was allowed for training face recognition systems, in this 2nd edition we propose new subtasks that allow participants to explore novel face generative methods. The outcomes of the 2nd FRCSyn Challenge, along with the proposed experimental protocol and benchmarking, contribute significantly to the application of synthetic data to face recognition.
2024
Authors
Dumont, M; Correia, CM; Sauvage, JF; Schwartz, N; Gray, M; Cardoso, J;
Publication
JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION
Abstract
Capturing high-resolution imagery of the Earth's surface often calls for a telescope of considerable size, even from low Earth orbits (LEOs). A large aperture often requires large and expensive platforms. For instance, achieving a resolution of 1 m at visible wavelengths from LEO typically requires an aperture diameter of at least 30 cm. Additionally, ensuring high revisit times often prompts the use of multiple satellites. In light of these challenges, a small, segmented, deployable CubeSat telescope was recently proposed, creating the additional need to phase the telescope's mirrors. Phasing methods on compact platforms are constrained by the limited volume and power available, excluding solutions that rely on dedicated hardware or demand substantial computational resources. Neural networks (NNs) are known for their computationally efficient inference and reduced onboard requirements. Therefore, we developed an NN-based method to measure co-phasing errors inherent to a deployable telescope. The proposed technique demonstrates its ability to detect phasing errors at the targeted performance level [typically a wavefront error (WFE) below 15 nm RMS for a visible imager operating at the diffraction limit] using a point source. The robustness of the NN method is verified in the presence of high-order aberrations or noise, and the results are compared against existing state-of-the-art techniques. The developed NN model demonstrates its feasibility and provides a realistic pathway towards achieving diffraction-limited images. (c) 2024 Optica Publishing Group
2024
Authors
Ribeiro, FSF; Garcia, PJV; Silva, M; Cardoso, JS;
Publication
IEEE ACCESS
Abstract
Point source detection algorithms play a pivotal role across diverse applications, influencing fields such as astronomy, biomedical imaging, environmental monitoring, and beyond. This article reviews the algorithms used for space imaging applications from ground and space telescopes. The main difficulties in detection arise from the incomplete knowledge of the impulse function of the imaging system, which depends on the aperture, atmospheric turbulence (for ground-based telescopes), and other factors, some of which are time-dependent. Incomplete knowledge of the impulse function decreases the effectiveness of the algorithms. In recent years, deep learning techniques have been employed to mitigate this problem and have the potential to outperform more traditional approaches. The success of deep learning techniques in object detection has been observed in many fields, and recent developments can further improve the accuracy. However, deep learning methods are still in the early stages of adoption and are used less frequently than traditional approaches. In this review, we discuss the main challenges of point source detection, as well as the latest developments, covering both traditional and current deep learning methods. In addition, we present a comparison between the two approaches to better demonstrate the advantages of each methodology.
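To illustrate the classical, PSF-template side of the methods this review covers, the sketch below correlates a frame with an assumed Gaussian impulse response and thresholds the response map; the difficulty the abstract highlights is exactly that the assumed PSF may not match the true one. The PSF shape, image size, and threshold here are illustrative assumptions, not values from the review:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Assumed (possibly mismatched) impulse response of the imaging system."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def matched_filter_detect(image, psf, thresh):
    """Correlate the image with the PSF template; pixels whose response
    exceeds `thresh` are reported as candidate point sources."""
    k = psf.shape[0]
    pad = k // 2
    padded = np.pad(image, pad, mode="constant")
    resp = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            resp[i, j] = np.sum(padded[i:i + k, j:j + k] * psf)
    return np.argwhere(resp > thresh), resp

# Synthetic frame: one point source at (20, 30) blurred by the "true" PSF
# (a symmetric kernel, so correlation doubles as convolution), plus noise.
rng = np.random.default_rng(1)
scene = np.zeros((64, 64))
scene[20, 30] = 100.0
_, blurred = matched_filter_detect(scene, gaussian_psf(9, 1.5), np.inf)
frame = blurred + rng.normal(scale=0.01, size=scene.shape)

detections, response = matched_filter_detect(frame, gaussian_psf(9, 1.5), thresh=1.0)
```

A deep learning detector, as discussed in the review, would replace the fixed template with a learned mapping, trading this explicit PSF dependence for training data.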
2024
Authors
Freitas, N; Montenegro, H; Cardoso, MJ; Cardoso, JS;
Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024
Abstract
Breast cancer locoregional treatment causes alterations to the physical aspect of the breast, often negatively impacting the self-esteem of patients unaware of the possible aesthetic outcomes of those treatments. To improve patients' self-esteem and enable a more informed choice of treatment when multiple options are available, the ability to predict how the patient might look after surgery would be invaluable. However, no prior work has addressed predicting the aesthetic outcomes of breast cancer treatment. As a first step, we compare traditional computer vision and deep learning approaches to reproducing the asymmetries of post-operative patients on pre-operative breast images. The results suggest that the traditional approach is better at altering the contour of the breast. In contrast, the deep learning approach succeeds in realistically altering the position and direction of the nipple.
2024
Authors
Rio-Torto, I; Gonçalves, T; Cardoso, JS; Teixeira, LF;
Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024
Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by the clinical communities. In the medical image analysis domain, gradient-based class activation maps are the most widely used explanation methods, and the field lacks a more in-depth investigation into inherently interpretable models that focus on integrating knowledge that ensures the model is learning the correct rules. B-cos networks, a new approach that increases the interpretability of deep neural networks by inducing weight-input alignment during training, showed promising results on natural image classification. In this work, we study the suitability of B-cos networks for the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation with several explanation quality assessment metrics. We find that, just as in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they are better than other methods' explanations when more explanation properties are considered.
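As a rough illustration of the weight-input alignment idea behind B-cos networks, the sketch below implements a single B-cos linear unit in the form (w_hat . x) * |cos(x, w_hat)|^(B-1): the output is large only when the (unit-norm) weight vector points in the same direction as the input. This is a simplified reading of the published formulation, and all names and dimensions are illustrative:

```python
import numpy as np

def bcos_linear(x, W, B=2.0, eps=1e-9):
    """B-cos unit (sketch): a linear map whose output is scaled by
    |cos(x, w)|^(B-1), so large activations require weight-input
    alignment. With B = 1 it reduces to an ordinary linear layer."""
    W_hat = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)  # unit-norm rows
    lin = W_hat @ x                                               # w_hat . x
    cos = lin / (np.linalg.norm(x) + eps)                         # cosine to each row
    return lin * np.abs(cos) ** (B - 1.0)

x = np.array([1.0, 0.0])
W = np.array([[1.0, 0.0],    # perfectly aligned with x
              [0.0, 1.0]])   # orthogonal to x
out = bcos_linear(x, W, B=2.0)
# the aligned weight passes the signal through; the orthogonal one is suppressed
```

Because the effective weight at each input is this alignment-scaled vector, the model's own linear contributions can be rendered as the explanation maps evaluated in the paper.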