Publications

Publications by CTM

2022

OCFR 2022: Competition on Occluded Face Recognition From Synthetically Generated Structure-Aware Occlusions

Authors
Neto, PC; Boutros, F; Pinto, JR; Damer, N; Sequeira, AF; Cardoso, JS; Bengherabi, M; Bousnat, A; Boucheta, S; Hebbadj, N; Erakin, ME; Demir, U; Ekenel, HK; Vidal, PBD; Menotti, D;

Publication
2022 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB)

Abstract
This work summarizes the IJCB Occluded Face Recognition Competition 2022 (IJCB-OCFR-2022), held as part of the 2022 International Joint Conference on Biometrics (IJCB 2022). OCFR-2022 attracted a total of three participating teams, all from academia. In the end, six valid submissions were received and evaluated by the organizers. The competition was held to address the challenge of face recognition in the presence of severe face occlusions. The participants were free to use any training data, and the testing data was built by the organizers by synthetically occluding parts of the face images from a well-known dataset. The submitted solutions presented innovations and performed very competitively against the considered baseline. A major output of this competition is a challenging, realistic, diverse, and publicly available occluded face recognition benchmark with well-defined evaluation protocols.

2022

Deep learning for space-borne focal-plane wavefront sensing

Authors
Dumont, M; Correia, C; Sauvage, JF; Schwartz, N; Gray, M; Beltramo-Martin, O; Cardoso, J;

Publication
SPACE TELESCOPES AND INSTRUMENTATION 2022: OPTICAL, INFRARED, AND MILLIMETER WAVE

Abstract
For space-based Earth observations and solar system observations, obtaining both high revisit rates (using a constellation of small platforms) and high angular resolution (using large optics and therefore a large platform) is an asset for many applications. Unfortunately, the two requirements are mutually exclusive. A deployable satellite concept has been suggested that could provide both, combining high revisit rates with a high angular resolution of roughly 1 meter on the ground. This concept relies, however, on the capacity to maintain the phasing of the segments at sufficient precision (a few tens of nanometers at visible wavelengths) while undergoing strong and dynamic thermal gradients. In the constrained volume environment of a CubeSat, the system must reuse the scientific images to measure the phasing errors. In this paper we address the key issue of focal-plane wavefront sensing for a segmented pupil using a single image with deep learning. We show a first demonstration of measurement on a point source. The neural network is able to correctly identify the phase piston-tip-tilt coefficients, with a residual error below 15 nm per petal.

2022

Explainable Weakly-Supervised Cell Segmentation by Canonical Shape Learning and Transformation

Authors
Costa, P; Gaudio, A; Campilho, A; Cardoso, JS;

Publication
International Conference on Medical Imaging with Deep Learning, MIDL 2022, 6-8 July 2022, Zurich, Switzerland.

Abstract
Microscopy images have been increasingly analyzed quantitatively in biomedical research. Segmenting individual cell nuclei is an important step, as many research studies involve counting cell nuclei and analysing their shape. We propose a novel weakly supervised instance segmentation method trained with image segmentation masks only. Our system comprises two models: 1) an implicit shape Multi-Layer Perceptron (MLP) that learns the shape of the nuclei in canonical coordinates; and 2) an encoder that predicts the parameters of the affine transformation that deforms the canonical shape into the correct location, scale, and orientation in the image. To further improve the performance of the model, we propose a loss that uses the total number of nuclei in an image as supervision. Our system is explainable, as the implicit shape MLP learns that the canonical shape of the cell nuclei is a circle, and interpretable, as the outputs of the encoder are the parameters of affine transformations. We obtain image segmentation performance close to DeepLabV3 and, additionally, obtain an F1-score (IoU = 0.5) of 68.47% on the instance segmentation task, even though the system was trained with image segmentations only.
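The abstract describes an encoder that outputs affine parameters placing a learned canonical shape (a circle) at the correct location, scale, and orientation in the image. A minimal sketch of that deformation step, with the function names and the 2x3 affine parameterization assumed here for illustration (the paper's actual implementation may differ):

```python
import math

def canonical_circle(n_points=8):
    """Sample points on the unit circle, the canonical nucleus shape
    that the implicit shape MLP is said to converge to."""
    return [(math.cos(2 * math.pi * k / n_points),
             math.sin(2 * math.pi * k / n_points)) for k in range(n_points)]

def apply_affine(points, a, b, tx, c, d, ty):
    """Map canonical coordinates into image coordinates with the 2x3
    affine matrix [[a, b, tx], [c, d, ty]]: the linear part (a, b, c, d)
    encodes scale/rotation/shear, (tx, ty) is the translation."""
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

# Example: a nucleus instance of radius 5 px centred at (20, 30),
# i.e. the transform the encoder would predict for that cell.
instance = apply_affine(canonical_circle(), 5, 0, 20, 0, 5, 30)
```

In the actual system these six parameters would be regressed per instance by the encoder; the sketch only illustrates how a single predicted transform deforms the shared canonical shape.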

2022

Interpretability of Machine Intelligence in Medical Image Computing - 5th International Workshop, iMIMIC 2022, Held in Conjunction with MICCAI 2022, Singapore, Singapore, September 22, 2022, Proceedings

Authors
Reyes, M; Abreu, PH; Cardoso, JS;

Publication
iMIMIC@MICCAI

Abstract

2022

SYN-MAD 2022: Competition on Face Morphing Attack Detection Based on Privacy-aware Synthetic Training Data

Authors
Huber, M; Boutros, F; Luu, AT; Raja, K; Ramachandra, R; Damer, N; Neto, PC; Goncalves, T; Sequeira, AF; Cardoso, JS; Tremoco, J; Lourenco, M; Serra, S; Cermeno, E; Ivanovska, M; Batagelj, B; Kronovsek, A; Peer, P; Struc, V;

Publication
2022 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB)

Abstract
This paper presents a summary of the Competition on Face Morphing Attack Detection Based on Privacy-aware Synthetic Training Data (SYN-MAD), held at the 2022 International Joint Conference on Biometrics (IJCB 2022). The competition attracted a total of 12 participating teams, from both academia and industry, based in 11 different countries. In the end, seven valid submissions were received from the participating teams and evaluated by the organizers. The competition was held to present and attract solutions that detect face morphing attacks while protecting people's privacy for ethical and legal reasons. To ensure this, the training data was limited to synthetic data provided by the organizers. The submitted solutions presented innovations that outperformed the considered baseline in many experimental settings. The evaluation benchmark is now available at: https://github.com/marcohuber/SYN-MAD-2022.

2022

Computer-aided diagnosis through medical image retrieval in radiology

Authors
Silva, W; Goncalves, T; Harma, K; Schroder, E; Obmann, VC; Barroso, MC; Poellinger, A; Reyes, M; Cardoso, JS;

Publication
SCIENTIFIC REPORTS

Abstract
Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnostic mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be of great utility by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most similar image, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large and publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth for the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a thorough and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and yield qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
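The nDCG metric mentioned in the abstract scores a retrieval ranking against the radiologist's ideal ordering. A minimal sketch, assuming the common linear-gain formulation (the paper's exact gain and discount variant may differ):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores:
    each score is discounted by log2(rank + 2), so errors near the top
    of the ranking cost more than errors near the bottom."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(system_ranking, relevances):
    """Normalize the DCG of the system's ranking by the DCG of the ideal
    (descending-relevance) ranking; 1.0 means a perfect ordering."""
    return dcg(system_ranking) / dcg(sorted(relevances, reverse=True))

# A retrieval system that swaps the two most relevant items scores below 1.0:
score = ndcg([2, 3, 1], [3, 2, 1])
```

Here the relevance scores would come from the radiologist's ground-truth ordering of the catalogue images, and the system ranking from the retrieval model under evaluation.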
