Domain-Generalized Face Anti-Spoofing with Domain Adaptive Style Extraction
EasyChair Preprint 13542, 6 pages, Date: June 4, 2024

Abstract

Face anti-spoofing is an important task in securing face recognition systems. In particular, domain generalization for face anti-spoofing has been extensively studied with the goal of increasing robustness across different datasets and real-world scenarios. Existing domain-generalization methods for face anti-spoofing require prior information to be supplied anew for each dataset, which limits their applicability. To address this limitation, we introduce a simplified domain-generalized face anti-spoofing (FAS) model that excels in diverse environments without requiring domain-specific modifications. By prioritizing the distinction between textural and non-facial features over conventional facial attributes, our model adapts to various unseen domains, leveraging dynamic kernels and AdaIN-based style transfer for domain-invariant feature extraction. This approach mitigates the model's vulnerability to variations in environment and attack vectors, enhancing its generalizability. Our comprehensive evaluation demonstrates the model's superior performance and adaptability, comparing favorably with state-of-the-art methods without the need for predefined domain knowledge or explicit attack categorization. The model simplifies the binary classification between spoof and live samples, showcasing its practical applicability in enhancing biometric security systems. Through this work, we provide insights into domain generalization that can inform future research in face anti-spoofing.

Keyphrases: Domain Generalization, Face anti-spoofing, Style extraction
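For context, the AdaIN (Adaptive Instance Normalization) operation referenced in the abstract re-normalizes a content feature map so that its channel-wise mean and standard deviation match those of a style feature map. The snippet below is a minimal, generic PyTorch sketch of that operation only; the function name, tensor layout, and epsilon value are illustrative assumptions and are not taken from the authors' implementation.

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Adaptive Instance Normalization: shift the per-channel statistics of
    # `content` (shape N x C x H x W) to match those of `style`.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    # Normalize the content features, then re-scale and shift them
    # with the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean
```

In a style-extraction setting such as the one described here, the style statistics might, for example, be drawn from other source domains so that the resulting features become less sensitive to dataset-specific appearance; the exact way the paper combines this with dynamic kernels is detailed in the full text.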