OpenAI's experimental AI model has demonstrated significant progress by achieving gold medal-level performance at the 2025 International Mathematical Olympiad (IMO). The model solved five of the six problems, earning 35 out of 42 points, under conditions identical to those of human participants: two 4.5-hour exam sessions with no access to external tools or the internet, and with answers written as detailed natural-language proofs. This accomplishment underscores advancements in AI's ability to tackle complex mathematical reasoning tasks.
Despite this achievement, the announcement has sparked discussion of the ethical implications of AI's role in human-centric competitions. Critics worry that such developments may overshadow the efforts of human competitors, and they question the fairness of AI participation in these events. OpenAI has clarified that the model is an experimental research tool and that it does not plan to release a model with this level of mathematical capability for several months. The company emphasizes that the milestone reflects rapid advances in AI reasoning and is part of ongoing research into general-purpose reinforcement learning and test-time compute scaling.
While the model's performance is noteworthy, the results have not been independently verified by the IMO organizers. The AI's success in this context highlights the potential for AI to contribute to complex problem-solving domains, but it also raises important questions about the integration of AI into areas traditionally dominated by human expertise.