2022
Authors
Dumont, M; Correia, C; Sauvage, JF; Schwartz, N; Gray, M; Beltramo-Martin, O; Cardoso, J;
Publication
SPACE TELESCOPES AND INSTRUMENTATION 2022: OPTICAL, INFRARED, AND MILLIMETER WAVE
Abstract
For space-based Earth observations and solar system observations, obtaining both high revisit rates (using a constellation of small platforms) and high angular resolution (using large optics and therefore a large platform) is an asset for many applications. Unfortunately, each of these goals typically precludes the other. A deployable satellite concept has been suggested that could provide both, combining high revisit rates with an angular resolution of roughly 1 meter on the ground. This concept relies, however, on the capacity to maintain the phasing of the segments with sufficient precision (a few tens of nanometers at visible wavelengths) while undergoing strong and dynamic thermal gradients. In the constrained volume environment of a CubeSat, the system must reuse the scientific images to measure the phasing errors. In this paper we address the key issue of focal-plane wavefront sensing for a segmented pupil using a single image and deep learning. We show a first demonstration of the measurement on a point source: the neural network is able to correctly identify the piston, tip, and tilt phase coefficients below the limit of 15 nm per petal.
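A minimal sketch of the kind of single-image, deep-learning measurement described above, assuming a PyTorch implementation: a small CNN regresses piston, tip, and tilt coefficients for each petal from one focal-plane image. The petal count, 64x64 image size, and layer widths are illustrative assumptions, not the authors' architecture.

    import torch.nn as nn

    N_PETALS = 6  # hypothetical number of mirror segments

    class PhasingRegressor(nn.Module):
        def __init__(self, n_petals=N_PETALS):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
            )
            # three coefficients (piston, tip, tilt) per petal
            self.head = nn.Linear(32 * 16 * 16, 3 * n_petals)

        def forward(self, image):  # image: (batch, 1, 64, 64)
            return self.head(self.features(image))

    # Training would minimise the mean squared error between the predicted and the
    # known piston-tip-tilt coefficients of simulated point-source images, e.g.:
    # loss = nn.functional.mse_loss(model(psf_batch), ptt_batch)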
2022
Authors
Costa, P; Gaudio, A; Campilho, A; Cardoso, JS;
Publication
International Conference on Medical Imaging with Deep Learning, MIDL 2022, 6-8 July 2022, Zurich, Switzerland.
Abstract
Microscopy images have been increasingly analyzed quantitatively in biomedical research. Segmenting individual cell nuclei is an important step, as many research studies involve counting cell nuclei and analyzing their shape. We propose a novel weakly supervised instance segmentation method trained with image segmentation masks only. Our system comprises two models: 1) an implicit shape Multi-Layer Perceptron (MLP) that learns the shape of the nuclei in canonical coordinates; and 2) an encoder that predicts the parameters of the affine transformation that deforms the canonical shape into the correct location, scale, and orientation in the image. To further improve the performance of the model, we propose a loss that uses the total number of nuclei in an image as supervision. Our system is explainable, as the implicit shape MLP learns that the canonical shape of the cell nuclei is a circle, and interpretable, as the outputs of the encoder are the parameters of affine transformations. We obtain image segmentation performance close to DeepLabV3 and, additionally, obtain an F1-score (IoU = 0.5) of 68.47% at the instance segmentation task, even though the system was trained with image segmentation masks only.
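A minimal sketch of the two components named above, under assumed layer sizes (not the authors' exact model): an implicit-shape MLP mapping canonical (x, y) coordinates to an occupancy logit, and an encoder predicting the six parameters of the affine transform that places the canonical shape at the right location, scale, and orientation in the image.

    import torch.nn as nn

    class ImplicitShape(nn.Module):
        def __init__(self):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, 1),  # occupancy logit at one canonical coordinate
            )

        def forward(self, xy):  # xy: (n_points, 2) canonical coordinates
            return self.mlp(xy)

    class AffineEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.affine = nn.Linear(16, 6)  # flattened 2x3 affine matrix

        def forward(self, image):  # image: (batch, 3, H, W)
            return self.affine(self.backbone(image)).view(-1, 2, 3)

    # Image coordinates are mapped back to canonical space with the predicted affine
    # transform and queried against the implicit shape MLP to obtain an instance mask.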
2022
Authors
Reyes, M; Abreu, PH; Cardoso, JS;
Publication
iMIMIC@MICCAI
Abstract
2022
Authors
Huber, M; Boutros, F; Luu, AT; Raja, K; Ramachandra, R; Damer, N; Neto, PC; Goncalves, T; Sequeira, AF; Cardoso, JS; Tremoco, J; Lourenco, M; Serra, S; Cermeno, E; Ivanovska, M; Batagelj, B; Kronovsek, A; Peer, P; Struc, V;
Publication
2022 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB)
Abstract
This paper presents a summary of the Competition on Face Morphing Attack Detection Based on Privacy-aware Synthetic Training Data (SYN-MAD) held at the 2022 International Joint Conference on Biometrics (IJCB 2022). The competition attracted a total of 12 participating teams from both academia and industry, based in 11 different countries. In the end, seven valid submissions were received from the participating teams and evaluated by the organizers. The competition was held to present and attract solutions that detect face morphing attacks while protecting people's privacy for ethical and legal reasons. To ensure this, the training data was limited to synthetic data provided by the organizers. The submitted solutions introduced innovations that outperformed the considered baseline in many experimental settings. The evaluation benchmark is now available at: https://github.com/marcohuber/SYN-MAD-2022.
2022
Authors
Silva, W; Goncalves, T; Harma, K; Schroder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;
Publication
SCIENTIFIC REPORTS
Abstract
Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnostic mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be of great utility by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most similar image overall, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large, publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth for the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a thorough and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and establish qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
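A minimal sketch of the nDCG metric used for the quantitative evaluation; the graded relevance values in the usage example are made up for illustration only and are not results from the paper.

    import math

    def dcg(relevances):
        # discounted cumulative gain of a ranked list of relevance grades
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

    def ndcg(retrieved_relevances):
        ideal = dcg(sorted(retrieved_relevances, reverse=True))
        return dcg(retrieved_relevances) / ideal if ideal > 0 else 0.0

    # e.g. a retrieval that places the most relevant catalogue image second:
    print(round(ndcg([1, 3, 2, 0]), 2))  # 0.82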
2022
Authors
Costa, P; Fu, Y; Nunes, J; Campilho, A; Cardoso, JS;
Publication
CoRR
Abstract