Publications by Jaime Cardoso

2014

Max-Ordinal Learning

Authors
Domingues, I; Cardoso, JS;

Publication
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS

Abstract
In predictive modeling tasks, knowledge about the training examples is neither fully complete nor totally incomplete. Unlike semisupervised learning, where one either has perfect knowledge about the label of a point or is completely ignorant of it, here we address a setting where, for each example, we only possess partial information about the label. Each example is described by two (or more) different feature sets or views, not all of which are necessarily observed for a given example. If a single view is observed, the class is due only to that feature set; if more views are present, the observed class label is the maximum of the values corresponding to the individual views. After formalizing this new learning concept, we propose two new learning methodologies adapted to this paradigm. We also compare their instantiations, with different base models, against conventional methods. Experimental results on both real and synthetic data sets verify the usefulness of the proposed approaches.
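
To make the label-formation rule above concrete, the following is a minimal, illustrative Python sketch (not the authors' code) that generates a toy dataset in which each example has up to two views and the observed label is the maximum over the per-view ordinal labels; all names, dimensions, and distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_max_ordinal_toy(n=200, d1=5, d2=5, n_classes=4, p_missing=0.3):
    """Toy data: each example has up to two views; the observed label is
    the maximum of the (hidden) per-view ordinal labels."""
    X1 = rng.normal(size=(n, d1))           # view 1 features
    X2 = rng.normal(size=(n, d2))           # view 2 features
    # Hidden per-view ordinal labels (random here, purely for illustration).
    y1 = rng.integers(0, n_classes, size=n)
    y2 = rng.integers(0, n_classes, size=n)
    # Randomly hide one of the views for some examples.
    m1 = rng.random(n) > p_missing           # True -> view 1 observed
    m2 = rng.random(n) > p_missing
    m1 |= ~m2                                # guarantee at least one view
    # Observed label: max over the views that are present.
    y = np.where(m1 & m2, np.maximum(y1, y2), np.where(m1, y1, y2))
    X1[~m1] = np.nan                         # mark missing views
    X2[~m2] = np.nan
    return X1, X2, y

X1, X2, y = make_max_ordinal_toy()
print(X1.shape, X2.shape, y[:10])
```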

2022

OCFR 2022: Competition on Occluded Face Recognition From Synthetically Generated Structure-Aware Occlusions

Authors
Neto, PC; Boutros, F; Pinto, JR; Damer, N; Sequeira, AF; Cardoso, JS; Bengherabi, M; Bousnat, A; Boucheta, S; Hebbadj, N; Erakin, ME; Demir, U; Ekenel, HK; Vidal, PBD; Menotti, D;

Publication
2022 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB)

Abstract
This work summarizes the IJCB Occluded Face Recognition Competition 2022 (IJCB-OCFR-2022), embraced by the 2022 International Joint Conference on Biometrics (IJCB 2022). OCFR-2022 attracted a total of three participating teams, all from academia. Eventually, six valid submissions were received and evaluated by the organizers. The competition was held to address the challenge of face recognition in the presence of severe face occlusions. The participants were free to use any training data, and the testing data was built by the organizers by synthetically occluding parts of face images from a well-known dataset. The submitted solutions presented innovations and performed very competitively against the considered baseline. A major output of this competition is a challenging, realistic, diverse, and publicly available occluded face recognition benchmark with well-defined evaluation protocols.

2022

Deep learning for space-borne focal-plane wavefront sensing

Authors
Dumont, M; Correia, C; Sauvage, JF; Schwartz, N; Gray, M; Beltramo-Martin, O; Cardoso, J;

Publication
SPACE TELESCOPES AND INSTRUMENTATION 2022: OPTICAL, INFRARED, AND MILLIMETER WAVE

Abstract
For space-based Earth observation and solar system observation, obtaining both high revisit rates (using a constellation of small platforms) and high angular resolution (using large optics and therefore a large platform) is an asset for many applications. Unfortunately, the two requirements work against each other. A deployable satellite concept has been suggested that could grant both, jointly producing high revisit rates and high angular resolution of roughly 1 meter on the ground. This concept relies, however, on the capacity to maintain the phasing of the segments at sufficient precision (a few tens of nanometers at visible wavelengths) while undergoing strong and dynamic thermal gradients. In the constrained volume of a CubeSat, the system must reuse the scientific images to measure the phasing errors. In this paper we address the key issue of focal-plane wavefront sensing for a segmented pupil from a single image with deep learning. We show a first demonstration of measurement on a point source: the neural network is able to properly identify the piston-tip-tilt phase coefficients below the limit of 15 nm per petal.
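
As a rough illustration of the kind of regression network the abstract describes, mapping a single focal-plane image to piston-tip-tilt coefficients per petal, the following PyTorch sketch is a hypothetical stand-in; the layer sizes, segment count, and architecture are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class PistonTipTiltNet(nn.Module):
    """Illustrative regressor from a single focal-plane image to
    piston/tip/tilt coefficients per segment (hypothetical sizes)."""
    def __init__(self, n_segments=6, n_coeffs=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_segments * n_coeffs)

    def forward(self, img):                   # img: (B, 1, H, W)
        z = self.features(img).flatten(1)     # (B, 64)
        return self.head(z)                   # (B, n_segments * n_coeffs)

model = PistonTipTiltNet()
dummy = torch.randn(2, 1, 128, 128)           # two synthetic PSF images
print(model(dummy).shape)                     # torch.Size([2, 18])
```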

2022

Explainable Weakly-Supervised Cell Segmentation by Canonical Shape Learning and Transformation

Authors
Costa, P; Gaudio, A; Campilho, A; Cardoso, JS;

Publication
International Conference on Medical Imaging with Deep Learning, MIDL 2022, 6-8 July 2022, Zurich, Switzerland.

Abstract
Microscopy images have been increasingly analyzed quantitatively in biomedical research. Segmenting individual cell nuclei is an important step, as many studies involve counting cell nuclei and analysing their shape. We propose a novel weakly supervised instance segmentation method trained with image segmentation masks only. Our system comprises two models: 1) an implicit shape Multi-Layer Perceptron (MLP) that learns the shape of the nuclei in canonical coordinates; and 2) an encoder that predicts the parameters of the affine transformation that deforms the canonical shape into the correct location, scale, and orientation in the image. To further improve the performance of the model, we propose a loss that uses the total number of nuclei in an image as supervision. Our system is explainable, as the implicit shape MLP learns that the canonical shape of a cell nucleus is a circle, and interpretable, as the outputs of the encoder are the parameters of affine transformations. We obtain image segmentation performance close to DeepLabV3 and, additionally, obtain an F1-score (IoU = 0.5) of 68.47% on the instance segmentation task, even though the system was trained with image segmentation masks. © 2022 P. Costa, A. Gaudio, A. Campilho & J.S. Cardoso.
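
The two-model design described above (an implicit canonical-shape MLP plus an encoder that predicts affine parameters) can be sketched as follows. This is an illustrative, hypothetical PyTorch skeleton, not the authors' implementation; the layer sizes, coordinate conventions, and per-point querying strategy are assumptions.

```python
import torch
import torch.nn as nn

class CanonicalShapeMLP(nn.Module):
    """Implicit shape: occupancy of a point given in canonical coordinates."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy):                     # xy: (N, 2) canonical coords
        return torch.sigmoid(self.net(xy))     # (N, 1) occupancy in [0, 1]

class AffineEncoder(nn.Module):
    """Predicts a 2x3 affine matrix mapping image coords to canonical coords."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 6),
        )

    def forward(self, img):                    # img: (B, 1, H, W)
        return self.backbone(img).view(-1, 2, 3)

# Querying the shape: map image-space points through the predicted affine
# transform into canonical space, then ask the MLP for occupancy.
shape_mlp, encoder = CanonicalShapeMLP(), AffineEncoder()
img = torch.randn(1, 1, 64, 64)
theta = encoder(img)[0]                        # (2, 3) affine parameters
pts = torch.rand(100, 2) * 2 - 1               # image-space points in [-1, 1]^2
canon = pts @ theta[:, :2].T + theta[:, 2]     # affine map to canonical coords
occ = shape_mlp(canon)                         # per-point occupancy
print(occ.shape)                               # torch.Size([100, 1])
```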

2019

Insulator visual non-conformity detection in overhead power distribution lines using deep learning

Authors
Morla, RS; Cruz, R; Marotta, AP; Ramos, RP; Simas Filho, EF; Cardoso, JS;

Publication
Comput. Electr. Eng.

Abstract

2019

Editorial

Authors
Carneiro, G; Tavares, JMRS; Bradley, AP; Papa, JP; Nascimento, JC; Cardoso, JS; Lu, Z; Belagiannis, V;

Publication
Comp. Meth. in Biomech. and Biomed. Eng.: Imaging & Visualization

Abstract
