
About

Ana F. Sequeira holds a PhD in Electrical and Computing Engineering, obtained from the Faculty of Engineering of the University of Porto, Portugal, in 2015. She also holds a Master's degree in Mathematical Engineering and a five-year degree in Mathematics, both obtained from the Mathematics Department of the Faculty of Sciences of the University of Porto, Portugal.

During her PhD studies, Ana collaborated as a researcher at INESC TEC, an R&D institute affiliated with the University of Porto, within the Visual Computing and Machine Intelligence Group (VCMI).

Ana’s PhD research, in the fields of computer vision and machine learning, focused on liveness detection techniques for iris and fingerprint biometrics. This research gave her deep knowledge and diverse skills spanning the complete image processing and classification pipeline, from pre-processing methods, through feature extraction, to the classification/decision step.

Her post-doctoral research was carried out at the University of Reading, UK, where she collaborated on EU projects applying biometric recognition to border control (the FASTPASS and PROTECT projects).

This was followed by a short-term collaboration with the company Iris Guard UK to research the vulnerabilities of the EyePay® technology to spoofing and to develop a proof of concept of an anti-spoofing measure.

Currently, Ana is back at INESC TEC as a Research Assistant.

During her activity as a PhD student and post-doctoral research associate, Ana authored and co-authored several research publications in major international conferences and journals, which have attracted, to date, over 150 citations.

Throughout her research activity, Ana has developed expertise not only in computer vision and image processing but also in the application of a wide range of machine learning techniques, from classical to deep learning methodologies.


Details

  • Name

    Ana Filipa Sequeira
  • Role

    Area Manager
  • Since

    23rd February 2011
Publications

2023

Unveiling the Two-Faced Truth: Disentangling Morphed Identities for Face Morphing Detection

Authors
Caldeira, E; Neto, PC; Gonçalves, T; Damer, N; Sequeira, AF; Cardoso, JS;

Publication
31st European Signal Processing Conference, EUSIPCO 2023, Helsinki, Finland, September 4-8, 2023

Abstract
Morphing attacks keep threatening biometric systems, especially face recognition systems. Over time they have become simpler to perform and more realistic, as such, the usage of deep learning systems to detect these attacks has grown. At the same time, there is a constant concern regarding the lack of interpretability of deep learning models. Balancing performance and interpretability has been a difficult task for scientists. However, by leveraging domain information and proving some constraints, we have been able to develop IDistill, an interpretable method with state-of-the-art performance that provides information on both the identity separation on morph samples and their contribution to the final prediction. The domain information is learnt by an autoencoder and distilled to a classifier system in order to teach it to separate identity information. When compared to other methods in the literature it outperforms them in three out of five databases and is competitive in the remaining. © 2023 European Signal Processing Conference, EUSIPCO. All rights reserved.
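To make the recipe in the abstract more concrete, the sketch below is a schematic toy in PyTorch, not the published IDistill architecture: an autoencoder (assumed pretrained and frozen) provides identity-related codes, and the morph classifier is trained with its usual classification loss plus a feature-matching term that distills those codes into its backbone. All module sizes, the loss weight and the data are placeholders.

# Schematic toy of distilling identity representations from an autoencoder
# into a morph classifier; not the published IDistill architecture.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 256))             # autoencoder encoder (pretrained, frozen)
classifier_backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 256)) # morph classifier backbone
morph_head = nn.Linear(256, 1)                                                    # bona fide vs morph logit

bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()
optimizer = torch.optim.Adam(
    list(classifier_backbone.parameters()) + list(morph_head.parameters()), lr=1e-4
)

images = torch.rand(8, 3, 112, 112)           # dummy batch
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = morph

optimizer.zero_grad()
features = classifier_backbone(images)
with torch.no_grad():
    identity_codes = encoder(images)           # "teacher" identity representation

# Classification loss plus a distillation term pulling the classifier's
# features towards the autoencoder's identity codes.
loss = bce(morph_head(features), labels) + 0.5 * mse(features, identity_codes)
loss.backward()
optimizer.step()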

2023

PIC-Score: Probabilistic Interpretable Comparison Score for Optimal Matching Confidence in Single- and Multi-Biometric Face Recognition

Authors
Neto, PC; Sequeira, AF; Cardoso, JS; Terhörst, P;

Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023

Abstract
In the context of biometrics, matching confidence refers to the confidence that a given matching decision is correct. Since many biometric systems operate in critical decision-making processes, such as in forensics investigations, accurately and reliably stating the matching confidence becomes of high importance. Previous works on biometric confidence estimation can well differentiate between high and low confidence, but lack interpretability. Therefore, they do not provide accurate probabilistic estimates of the correctness of a decision. In this work, we propose a probabilistic interpretable comparison (PIC) score that accurately reflects the probability that the score originates from samples of the same identity. We prove that the proposed approach provides optimal matching confidence. Contrary to other approaches, it can also optimally combine multiple samples in a joint PIC score which further increases the recognition and confidence estimation performance. In the experiments, the proposed PIC approach is compared against all biometric confidence estimation methods available on four publicly available databases and five state-of-the-art face recognition systems. The results demonstrate that PIC has a significantly more accurate probabilistic interpretation than similar approaches and is highly effective for multi-biometric recognition. The code is publicly available. © 2023 IEEE.
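As a rough illustration of what a probabilistic matching confidence means in practice, and not the paper's exact PIC formulation, the following sketch calibrates raw comparison scores into P(genuine | score) with Bayes' rule, using kernel density estimates fitted on held-out genuine and impostor scores; the score distributions here are synthetic placeholders.

# Toy calibration of a comparison score into a probability of a genuine match;
# illustrative only, not the PIC score defined in the paper.
import numpy as np
from scipy.stats import gaussian_kde

def fit_confidence_model(genuine_scores, impostor_scores):
    # Densities of comparison scores for same-identity and different-identity pairs.
    p_gen = gaussian_kde(genuine_scores)
    p_imp = gaussian_kde(impostor_scores)
    prior_gen = len(genuine_scores) / (len(genuine_scores) + len(impostor_scores))

    def confidence(score):
        # Bayes' rule: probability that this score came from a genuine pair.
        num = p_gen(score)[0] * prior_gen
        den = num + p_imp(score)[0] * (1.0 - prior_gen)
        return num / den

    return confidence

# Synthetic example: genuine pairs score higher on average than impostor pairs.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 2000)
impostor = rng.normal(0.3, 0.1, 2000)
confidence = fit_confidence_model(genuine, impostor)
print(confidence(0.65))  # close to 1: very likely the same identity
print(confidence(0.35))  # close to 0: very likely different identities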

2023

Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition

Authors
Neto, PC; Caldeira, E; Cardoso, JS; Sequeira, AF;

Publication
International Conference of the Biometrics Special Interest Group, BIOSIG 2023, Darmstadt, Germany, September 20-22, 2023

Abstract
With the ever-growing complexity of deep learning models for face recognition, it becomes hard to deploy these systems in real life. Researchers have two options: 1) use smaller models; 2) compress their current models. Since the usage of smaller models might lead to concerning biases, compression gains relevance. However, compressing might be also responsible for an increase in the bias of the final model. We investigate the overall performance, the performance on each ethnicity subgroup and the racial bias of a State-of-the-Art quantization approach when used with synthetic and real data. This analysis provides a few more details on potential benefits of performing quantization with synthetic data, for instance, the reduction of biases on the majority of test scenarios. We tested five distinct architectures and three different training datasets. The models were evaluated on a fourth dataset which was collected to infer and compare the performance of face recognition models on different ethnicity.
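The kind of per-subgroup check this study relies on can be illustrated with a small sketch: the same verification error metric is computed separately for each ethnicity subgroup, and the spread across subgroups is read as a bias indicator when comparing the full-precision and quantized models. The data layout, group labels and threshold below are hypothetical placeholders, not the paper's evaluation protocol.

# Minimal sketch of measuring per-subgroup verification error as a bias indicator.
import numpy as np

def false_non_match_rate(scores, is_genuine, threshold):
    # Fraction of genuine (same-identity) pairs wrongly rejected at this threshold.
    genuine = scores[is_genuine]
    return float(np.mean(genuine < threshold))

def per_group_fnmr(scores, is_genuine, groups, threshold):
    return {g: false_non_match_rate(scores[groups == g], is_genuine[groups == g], threshold)
            for g in np.unique(groups)}

# scores: similarity of each verification pair; is_genuine: same-identity flag;
# groups: ethnicity label of each pair (assumed available in the evaluation set).
rng = np.random.default_rng(1)
scores = rng.uniform(0, 1, 10_000)
is_genuine = rng.random(10_000) < 0.5
groups = rng.choice(["A", "B", "C", "D"], 10_000)

fnmr = per_group_fnmr(scores, is_genuine, groups, threshold=0.5)
bias_spread = max(fnmr.values()) - min(fnmr.values())  # smaller spread = fairer model
print(fnmr, bias_spread)

Running the same computation on the outputs of the full-precision and the quantized model, trained on real or synthetic data, gives directly comparable per-group error rates.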

2022

Myope Models - Are face presentation attack detection models short-sighted?

Authors
Neto, PC; Sequeira, AF; Cardoso, JS;

Publication
2022 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS (WACVW 2022)

Abstract
Presentation attacks are recurrent threats to biometric systems, where impostors attempt to bypass these systems. Humans often use background information as contextual cues for their visual system. Yet, regarding face-based systems, the background is often discarded, since face presentation attack detection (PAD) models are mostly trained with face crops. This work presents a comparative study of face PAD models (including multi-task learning, adversarial training and dynamic frame selection) in two settings: with and without crops. The results show that the performance is consistently better when the background is present in the images. The proposed multi-task methodology beats the state-of-the-art results on the ROSE-Youtu dataset by a large margin with an equal error rate of 0.2%. Furthermore, we analyze the models' predictions with Grad-CAM++ with the aim to investigate to what extent the models focus on background elements that are known to be useful for human inspection. From this analysis we can conclude that the background cues are not relevant across all the attacks. Thus, showing the capability of the model to leverage the background information only when necessary.
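A minimal sketch of this kind of saliency inspection, using the open-source pytorch-grad-cam package rather than the authors' exact tooling, is shown below; the backbone, input and attack-class index are assumptions for illustration.

# Where does a binary PAD classifier look in an uncropped frame? (illustrative sketch)
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet18(num_classes=2).eval()   # placeholder bona fide / attack classifier
target_layers = [model.layer4[-1]]       # last convolutional block of the backbone

frame = torch.rand(1, 3, 224, 224)       # uncropped frame, background included
cam = GradCAMPlusPlus(model=model, target_layers=target_layers)
heatmap = cam(input_tensor=frame, targets=[ClassifierOutputTarget(1)])  # class 1 = attack

# heatmap[0] is a 224x224 saliency map; high values outside the face region suggest
# the model is relying on background cues for its attack decision.
print(heatmap[0].shape)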

Supervised Theses

2023

Don’t look away! Keeping the human in the loop with an interactive active learning platform

Author
Fábio Manuel Taveira da Cunha

Institution

2023

Explainable Artificial Intelligence – Detecting biases for Interpretable and Fair Face Recognition Deep Learning Models

Author
Ana Dias Teixeira de Viseu Cardoso

Institution

2021

Explainable and Interpretable Face Presentation Attack Detection Methods

Author
Murilo Leite Nóbrega

Institution

2021

Deep Learning Face Emotion Recognition

Author
Pedro Duarte Lopes

Institution

2020

Face biOmetrics UNder severe representation Drifts

Author
Mohsen Saffari

Institution