A recent study highlights that AI models continue to exhibit gender bias in job recommendations in 2025. Open-source AI models often favor men for high-paying positions, reinforcing gender stereotypes in hiring processes. Researchers are actively exploring mitigation strategies to address these biases and promote fairness.
The study, as reported by The Register on May 2, 2025, examined several mid-sized open-source LLMs, including Llama-3-8B-Instruct and Qwen2.5-7B-Instruct. Researchers prompted the models with job descriptions from a dataset of real job ads, asking them to choose between equally qualified male and female candidates. The findings indicated that most models favored men, especially for higher-wage roles, and reproduced stereotypical gender associations.
To combat this bias, researchers are experimenting with various methods. One approach involves prompting the AI to emulate historical figures, such as Vladimir Lenin, which has shown promise in increasing female callback rates. Experts emphasize the importance of ongoing audits and fine-tuning models to ensure fairness in AI-driven hiring decisions. Addressing AI bias is crucial for creating a more equitable and inclusive labor market in 2025.