About

Ana F. Sequeira holds a degree in Mathematics (2002) and an MSc in Mathematical Engineering (2007) from the Faculty of Sciences, and a PhD in Electrical and Computer Engineering (2015) from the Faculty of Engineering, both of the University of Porto.

Ana F. Sequeira collaborated with INESC TEC as a researcher during her PhD, which focused on computer vision and machine learning, with an emphasis on liveness detection methodologies for iris and fingerprint recognition.

After completing her PhD, Ana F. Sequeira worked at the University of Reading, UK, on two European projects on the application of biometric recognition to border control (FASTPASS and PROTECT).

This was followed by a short-term collaboration with the company Irisguard UK, aimed at researching vulnerabilities of the EyePay® product and developing a prototype countermeasure against spoofing attacks.

Currently, Ana F. Sequeira is once again with INESC TEC as a contracted researcher.

As a PhD student and post-doc, since 2011, Ana F. Sequeira has co-authored several papers in international conferences and in journals well recognized by the community for their citations; she has also led the creation of databases and the organization of events such as competitions.

Throughout her research activity, Ana F. Sequeira has acquired extensive experience not only in computer vision and image processing topics but also in the application of a diverse range of machine learning techniques, from classical methodologies to deep learning.

Topics of interest

Details

  • Name

    Ana Filipa Sequeira
  • Position

    Area Manager
  • Since

    23 February 2011
Publications

2023

Unveiling the Two-Faced Truth: Disentangling Morphed Identities for Face Morphing Detection

Authors
Caldeira, E; Neto, PC; Gonçalves, T; Damer, N; Sequeira, AF; Cardoso, JS;

Publication
31st European Signal Processing Conference, EUSIPCO 2023, Helsinki, Finland, September 4-8, 2023

Abstract
Morphing attacks keep threatening biometric systems, especially face recognition systems. Over time they have become simpler to perform and more realistic, as such, the usage of deep learning systems to detect these attacks has grown. At the same time, there is a constant concern regarding the lack of interpretability of deep learning models. Balancing performance and interpretability has been a difficult task for scientists. However, by leveraging domain information and proving some constraints, we have been able to develop IDistill, an interpretable method with state-of-the-art performance that provides information on both the identity separation on morph samples and their contribution to the final prediction. The domain information is learnt by an autoencoder and distilled to a classifier system in order to teach it to separate identity information. When compared to other methods in the literature it outperforms them in three out of five databases and is competitive in the remaining. © 2023 European Signal Processing Conference, EUSIPCO. All rights reserved.

2023

PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric Face Recognition

Authors
Neto, PC; Sequeira, AF; Cardoso, JS; Terhörst, P;

Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023

Abstract
In the context of biometrics, matching confidence refers to the confidence that a given matching decision is correct. Since many biometric systems operate in critical decision-making processes, such as in forensics investigations, accurately and reliably stating the matching confidence becomes of high importance. Previous works on biometric confidence estimation can well differentiate between high and low confidence, but lack interpretability. Therefore, they do not provide accurate probabilistic estimates of the correctness of a decision. In this work, we propose a probabilistic interpretable comparison (PIC) score that accurately reflects the probability that the score originates from samples of the same identity. We prove that the proposed approach provides optimal matching confidence. Contrary to other approaches, it can also optimally combine multiple samples in a joint PIC score which further increases the recognition and confidence estimation performance. In the experiments, the proposed PIC approach is compared against all biometric confidence estimation methods available on four publicly available databases and five state-of-the-art face recognition systems. The results demonstrate that PIC has a significantly more accurate probabilistic interpretation than similar approaches and is highly effective for multi-biometric recognition. The code is publicly available. © 2023 IEEE.

2023

Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition

Authors
Neto, PC; Caldeira, E; Cardoso, JS; Sequeira, AF;

Publication
International Conference of the Biometrics Special Interest Group, BIOSIG 2023, Darmstadt, Germany, September 20-22, 2023

Abstract
With the ever-growing complexity of deep learning models for face recognition, it becomes hard to deploy these systems in real life. Researchers have two options: 1) use smaller models; 2) compress their current models. Since the usage of smaller models might lead to concerning biases, compression gains relevance. However, compressing might be also responsible for an increase in the bias of the final model. We investigate the overall performance, the performance on each ethnicity subgroup and the racial bias of a State-of-the-Art quantization approach when used with synthetic and real data. This analysis provides a few more details on potential benefits of performing quantization with synthetic data, for instance, the reduction of biases on the majority of test scenarios. We tested five distinct architectures and three different training datasets. The models were evaluated on a fourth dataset which was collected to infer and compare the performance of face recognition models on different ethnicities.

2022

Myope Models - Are face presentation attack detection models short-sighted?

Authors
Neto, PC; Sequeira, AF; Cardoso, JS;

Publication
2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022)

Abstract
Presentation attacks are recurrent threats to biometric systems, where impostors attempt to bypass these systems. Humans often use background information as contextual cues for their visual system. Yet, regarding face-based systems, the background is often discarded, since face presentation attack detection (PAD) models are mostly trained with face crops. This work presents a comparative study of face PAD models (including multi-task learning, adversarial training and dynamic frame selection) in two settings: with and without crops. The results show that the performance is consistently better when the background is present in the images. The proposed multi-task methodology beats the state-of-the-art results on the ROSE-Youtu dataset by a large margin with an equal error rate of 0.2%. Furthermore, we analyze the models' predictions with Grad-CAM++ with the aim to investigate to what extent the models focus on background elements that are known to be useful for human inspection. From this analysis we can conclude that the background cues are not relevant across all the attacks. Thus, showing the capability of the model to leverage the background information only when necessary.

Supervised theses

2023

Don’t look away! Keeping the human in the loop with an interactive active learning platform

Author
Fábio Manuel Taveira da Cunha

Institution

2023

Explainable Artificial Intelligence – Detecting biases for Interpretable and Fair Face Recognition Deep Learning Models

Author
Ana Dias Teixeira de Viseu Cardoso

Institution

2021

Explainable and Interpretable Face Presentation Attack Detection Methods

Author
Murilo Leite Nóbrega

Institution

2021

Deep Learning Face Emotion Recognition

Author
Pedro Duarte Lopes

Institution

2020

Face biOmetrics UNder severe representation Drifts

Author
Mohsen Saffari

Institution