
Publications by Ana Filipa Sequeira

2023

PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric Face Recognition

Authors
Neto, PC; Sequeira, AF; Cardoso, JS; Terhörst, P;

Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023

Abstract
In the context of biometrics, matching confidence refers to the confidence that a given matching decision is correct. Since many biometric systems operate in critical decision-making processes, such as forensic investigations, accurately and reliably stating the matching confidence is of high importance. Previous works on biometric confidence estimation differentiate well between high and low confidence, but lack interpretability and therefore do not provide accurate probabilistic estimates of the correctness of a decision. In this work, we propose a probabilistic interpretable comparison (PIC) score that accurately reflects the probability that the score originates from samples of the same identity. We prove that the proposed approach provides optimal matching confidence. Contrary to other approaches, it can also optimally combine multiple samples into a joint PIC score, which further increases the recognition and confidence-estimation performance. In the experiments, the proposed PIC approach is compared against all available biometric confidence estimation methods on four publicly available databases and five state-of-the-art face recognition systems. The results demonstrate that PIC has a significantly more accurate probabilistic interpretation than similar approaches and is highly effective for multi-biometric recognition. The code is publicly available. © 2023 IEEE.
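The abstract describes the PIC score as the probability that a comparison score originates from a same-identity pair, with multiple samples combined into a joint score. A minimal sketch of that idea, assuming illustrative Gaussian genuine/impostor score densities (the function names and distribution parameters below are hypothetical, not the paper's fitted models):

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Density of a normal distribution at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pic_score(scores, genuine=(0.7, 0.1), impostor=(0.2, 0.1), prior=0.5):
    """Posterior probability that all comparison scores come from the
    same identity, given (mean, std) of the genuine and impostor score
    distributions. Multiple samples combine by multiplying per-score
    likelihood ratios, mirroring the joint-score idea in the abstract."""
    lr = 1.0
    for s in scores:
        lr *= gaussian_pdf(s, *genuine) / gaussian_pdf(s, *impostor)
    odds = prior / (1 - prior)
    return (lr * odds) / (lr * odds + 1)
```

Under this sketch, a high comparison score maps to a posterior near 1, a low one to a posterior near 0, and adding a second concordant sample pushes the joint score further toward certainty.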

2023

Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition

Authors
Neto, PC; Caldeira, E; Cardoso, JS; Sequeira, AF;

Publication
22nd International Conference of the Biometrics Special Interest Group, BIOSIG 2023, Darmstadt, Germany, September 20-22, 2023

Abstract
With the ever-growing complexity of deep learning models for face recognition, it becomes hard to deploy these systems in real life. Researchers have two options: 1) use smaller models; 2) compress their current models. Since the usage of smaller models might lead to concerning biases, compression gains relevance. However, compression might also be responsible for an increase in the bias of the final model. We investigate the overall performance, the performance on each ethnicity subgroup, and the racial bias of a state-of-the-art quantization approach when used with synthetic and real data. This analysis provides further detail on the potential benefits of performing quantization with synthetic data, for instance, the reduction of biases in the majority of test scenarios. We tested five distinct architectures and three different training datasets. The models were evaluated on a fourth dataset, which was collected to infer and compare the performance of face recognition models across different ethnicities.
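The bias analysis described in the abstract comes down to comparing error rates per ethnicity subgroup before and after quantization. A minimal sketch of that bookkeeping (the helper names are illustrative, not the paper's evaluation code):

```python
def subgroup_error_rates(labels, predictions, groups):
    """Error rate per demographic subgroup, given parallel lists of
    ground-truth labels, model predictions, and subgroup tags."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        errors = sum(labels[i] != predictions[i] for i in idx)
        rates[g] = errors / len(idx)
    return rates

def bias_gap(rates):
    # Spread between the worst- and best-served subgroup: one simple
    # proxy for the racial bias the paper tracks across quantized models
    return max(rates.values()) - min(rates.values())
```

Running this once on the original model's predictions and once on the quantized model's predictions shows whether compression widened the gap between subgroups.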

2019

Adversarial learning for a robust iris presentation attack detection method against unseen attack presentations

Authors
Ferreira, PM; Sequeira, AF; Pernes, D; Rebelo, A; Cardoso, JS;

Publication
2019 International Conference of the Biometrics Special Interest Group, BIOSIG 2019 - Proceedings

Abstract
Despite the high performance of current presentation attack detection (PAD) methods, robustness to unseen attacks is still an under-addressed challenge. This work approaches the problem by enforcing the learning of the bona fide presentations while making the model less dependent on the presentation attack instrument species (PAIS). The proposed model comprises an encoder, mapping from input features to latent representations, and two classifiers operating on these underlying representations: (i) the task classifier, which predicts the class labels (bona fide or attack); and (ii) the species classifier, which predicts the PAIS. In the learning stage, the encoder is trained to help the task classifier while trying to fool the species classifier. Furthermore, an additional training objective enforcing the similarity of the latent distributions of different species is added, leading to a 'PAI-species'-independent model. The experimental results demonstrate that the proposed regularisation strategies equipped the neural network with increased PAD robustness. The adversarial model obtained better loss and accuracy, as well as improved error rates, in the detection of attack and bona fide presentations. © 2019 Gesellschaft für Informatik.
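The adversarial setup in the abstract trains the encoder to help the task classifier while fooling the species classifier; at its simplest, the encoder's objective combines the two losses with opposite signs. A sketch under that assumption (`lam` is a hypothetical trade-off weight, not a value from the paper):

```python
def encoder_objective(task_loss, species_loss, lam=1.0):
    """Encoder objective: minimise the bona-fide/attack task loss while
    maximising the species-classifier loss, pushing the latent
    representation toward PAI-species independence (illustrative
    combination, not the paper's training code)."""
    return task_loss - lam * species_loss
```

The subtracted term means the encoder is rewarded when the species classifier performs poorly, which is what makes the learned representation less dependent on the attack instrument species.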

2021

Chairs' Message - 20th anniversary of BIOSIG

Authors
Brömme, A; Busch, C; Damer, N; Dantcheva, A; Gomez-Barrero, M; Raja, K; Rathgeb, C; Sequeira, AF; Uhl, A;

Publication
BIOSIG 2021 - Proceedings of the 20th International Conference of the Biometrics Special Interest Group, Lecture Notes in Informatics (LNI), Gesellschaft für Informatik (GI)

Abstract
