2024
Authors
DeAndres-Tame, I; Tolosana, R; Melzi, P; Vera-Rodriguez, R; Kim, M; Rathgeb, C; Liu, XM; Morales, A; Fierrez, J; Ortega-Garcia, J; Zhong, ZZ; Huang, YG; Mi, YX; Ding, SH; Zhou, SG; He, S; Fu, LZ; Cong, H; Zhang, RY; Xiao, ZH; Smirnov, E; Pimenov, A; Grigorev, A; Timoshenko, D; Asfaw, KM; Low, CY; Liu, H; Wang, CY; Zuo, Q; He, ZX; Shahreza, HO; George, A; Unnervik, A; Rahimi, P; Marcel, E; Neto, PC; Huber, M; Kolf, JN; Damer, N; Boutros, F; Cardoso, JS; Sequeira, AF; Atzori, A; Fenu, G; Marras, M; Struc, V; Yu, J; Li, ZJ; Li, JC; Zhao, WS; Lei, Z; Zhu, XY; Zhang, XY; Biesseck, B; Vidal, P; Coelho, L; Granada, R; Menotti, D;
Publication
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW
Abstract
Synthetic data is gaining increasing relevance for training machine learning models. This is mainly motivated by factors such as the lack of real data and intra-class variability, the time cost and errors of manual labeling, and, in some cases, privacy concerns, among others. This paper presents an overview of the 2nd edition of the Face Recognition Challenge in the Era of Synthetic Data (FRCSyn), organized at CVPR 2024. FRCSyn aims to investigate the use of synthetic data in face recognition to address current technological limitations, including data privacy concerns, demographic biases, generalization to novel scenarios, and performance constraints in challenging situations such as aging, pose variations, and occlusions. Unlike the 1st edition, in which only synthetic data from the DCFace and GANDiffFace methods was allowed to train face recognition systems, in this 2nd edition we propose new subtasks that allow participants to explore novel face generative methods. The outcomes of the 2nd FRCSyn Challenge, along with the proposed experimental protocol and benchmark, contribute significantly to the application of synthetic data to face recognition.
2024
Authors
Neto, PC; Mamede, RM; Albuquerque, C; Gonçalves, T; Sequeira, AF;
Publication
2024 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, FG 2024
Abstract
Face recognition applications have grown in parallel with the size of datasets, the complexity of deep learning models, and computational power. However, while deep learning models evolve to become more capable and computational power keeps increasing, the available datasets are being retracted and removed from public access. Privacy and ethical concerns are relevant topics within these domains. Through generative artificial intelligence, researchers have put effort into the development of completely synthetic datasets that can be used to train face recognition systems. Nonetheless, recent advances have not been sufficient to achieve performance comparable to state-of-the-art models trained on real data. To study the drift between the performance of models trained on real and synthetic datasets, we leverage a massive attribute classifier (MAC) to create annotations for four datasets: two real and two synthetic. From these annotations, we conduct studies on the distribution of each attribute within all four datasets. Additionally, we further inspect the differences between real and synthetic datasets on the attribute set. When comparing via the Kullback-Leibler divergence, we found differences between real and synthetic samples. Interestingly, we verified that while real samples suffice to explain the synthetic distribution, the reverse does not hold.
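The asymmetric comparison described in this abstract can be illustrated with a small sketch. The snippet below is a minimal, hypothetical example (the attribute names and counts are invented, not from the paper's datasets): it computes a smoothed Kullback-Leibler divergence between two discrete attribute distributions, in both directions, since KL divergence is not symmetric and the two directions can tell different stories, as the abstract's finding suggests.

```python
from collections import Counter
from math import log

def kl_divergence(p_counts, q_counts, eps=1e-9):
    """Smoothed D_KL(P || Q) over a shared attribute vocabulary.

    The eps smoothing keeps the divergence finite when an
    attribute is present in one dataset but absent from the other.
    """
    keys = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + eps * len(keys)
    q_total = sum(q_counts.values()) + eps * len(keys)
    total = 0.0
    for k in keys:
        p = (p_counts.get(k, 0) + eps) / p_total
        q = (q_counts.get(k, 0) + eps) / q_total
        total += p * log(p / q)
    return total

# Hypothetical attribute annotations, e.g. as a MAC-style
# classifier might produce them for two datasets.
real = Counter({"beard": 40, "glasses": 25, "hat": 10})
synthetic = Counter({"beard": 55, "glasses": 15, "hat": 5})

d_rs = kl_divergence(real, synthetic)   # real vs. synthetic
d_sr = kl_divergence(synthetic, real)   # synthetic vs. real
# The two directions generally differ: KL divergence is asymmetric.
```

In practice one would compare the two directions per attribute; a large divergence in one direction but not the other is the kind of asymmetry the abstract reports.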
2024
Authors
Caldeira, E; Cardoso, JS; Sequeira, AF; Neto, PC;
Publication
CoRR
Abstract
2024
Authors
Mamede, RM; Neto, PC; Sequeira, AF;
Publication
CoRR
Abstract
2024
Authors
Neto, PC; Colakovic, I; Karakatic, S; Sequeira, AF;
Publication
CoRR
Abstract
2024
Authors
Tame, ID; Tolosana, R; Melzi, P; Rodríguez, RV; Kim, M; Rathgeb, C; Liu, X; Gomez, LF; Morales, A; Fiérrez, J; Garcia, JO; Zhong, Z; Huang, Y; Mi, Y; Ding, S; Zhou, S; He, S; Fu, L; Cong, H; Zhang, R; Xiao, Z; Smirnov, E; Pimenov, A; Grigorev, A; Timoshenko, D; Asfaw, KM; Low, CY; Liu, H; Wang, C; Zuo, Q; He, Z; Shahreza, HO; George, A; Unnervik, A; Rahimi, P; Marcel, S; Neto, PC; Huber, M; Kolf, JN; Damer, N; Boutros, F; Cardoso, JS; Sequeira, AF; Atzori, A; Fenu, G; Marras, M; Struc, V; Yu, J; Li, Z; Li, J; Zhao, W; Lei, Z; Zhu, X; Zhang, X; Biesseck, B; Vidal, P; Coelho, L; Granada, R; Menotti, D;
Publication
CoRR
Abstract
Synthetic data is gaining increasing popularity for face recognition technologies, mainly due to privacy concerns and the challenges associated with obtaining real data, including diverse scenarios, quality, and demographic groups, among others. It also offers some advantages over real data, such as the large amount of data that can be generated or the ability to customize it to specific problem-solving needs. To use such data effectively, face recognition models should also be specifically designed to exploit synthetic data to its fullest potential. To promote the proposal of novel Generative AI methods and synthetic data, and to investigate the application of synthetic data to better train face recognition systems, we introduce the 2nd FRCSyn-onGoing challenge, based on the 2nd Face Recognition Challenge in the Era of Synthetic Data (FRCSyn), originally launched at CVPR 2024. This is an ongoing challenge that provides researchers with an accessible platform to benchmark (i) the proposal of novel Generative AI methods and synthetic data, and (ii) novel face recognition systems specifically proposed to take advantage of synthetic data. We focus on exploring the use of synthetic data, both individually and in combination with real data, to solve current challenges in face recognition such as demographic bias, domain adaptation, and performance constraints in demanding situations, such as age disparities between training and testing, changes in pose, or occlusions. Very interesting findings are obtained in this second edition, including a direct comparison with the first one, in which synthetic databases were restricted to DCFace and GANDiffFace.