Ethical AI Development: Mitigating Bias in Generative Models

EasyChair Preprint 14104
19 pages • Date: July 23, 2024

Abstract

Bias in generative AI models is a critical concern, given the increasing integration of these models into many aspects of society. This paper explores comprehensive methodologies for detecting and mitigating bias, emphasizing the importance of fairness and inclusivity in AI systems. By reviewing advanced techniques such as adversarial testing, statistical analysis, and open-set bias detection, the study highlights the multifaceted nature of bias in generative AI. Effective mitigation strategies, including data augmentation, re-sampling, fairness constraints, and post-processing techniques such as equalized odds and calibrated equalized odds, are detailed in the paper. The broader implications of these findings for AI development and deployment are significant, particularly in high-stakes applications such as healthcare and law enforcement, where biased models can exacerbate existing inequalities. Despite progress, challenges remain, including data limitations, algorithmic transparency, and evolving ethical and regulatory landscapes. The study proposes future research directions focusing on advanced detection techniques, intersectional bias analysis, real-world applicability, continuous monitoring, and public engagement. By addressing these areas, the paper aims to contribute to the development of fair, ethical, and socially responsible generative AI systems.

Our code is available at https://github.com/aryan-jadon/Mitigating-Bias-in-Generative-Models

Keyphrases: Ethical AI, Fairness in AI, Generative AI, Healthcare AI, Socially Responsible AI, algorithmic transparency, bias detection
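Among the post-processing techniques the abstract names, equalized odds requires that a classifier's true-positive rate and false-positive rate be equal across demographic groups. As a minimal illustrative sketch (not drawn from the paper's repository; the function name and the toy data below are hypothetical), the degree of violation can be quantified as the largest between-group gap in either rate:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in true-positive rate (TPR)
    or false-positive rate (FPR); 0.0 means equalized odds holds."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        pos = y_true[m] == 1          # actual positives in this group
        neg = ~pos                    # actual negatives in this group
        tprs.append(y_pred[m][pos].mean() if pos.any() else 0.0)
        fprs.append(y_pred[m][neg].mean() if neg.any() else 0.0)
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy example: group 0's positives are always detected, group 1's never,
# so the TPR gap (and hence the equalized-odds gap) is 1.0.
gap = equalized_odds_gap(y_true=[1, 0, 1, 0],
                         y_pred=[1, 0, 0, 0],
                         group=[0, 0, 1, 1])
```

A post-processing mitigation (e.g., group-specific decision thresholds, as in the calibrated equalized odds literature) would then search for thresholds that drive this gap toward zero while preserving accuracy.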