OpenAI Launches Advanced AI Model o3 with Self-Verification Capabilities

Edited by: Veronika Nazarova

OpenAI has unveiled o3, a new family of AI models and the successor to the earlier o1 series. The family includes o3 itself and a smaller variant, o3-mini, designed for more narrowly targeted tasks.

The name o2 was skipped to avoid confusion with the UK telecommunications provider O2. Neither model is yet widely available, but safety researchers can apply for early access now; o3-mini is expected to reach the public by the end of January, with o3 to follow.

OpenAI uses a new training technique it calls 'deliberative alignment' to keep the model aligned with its safety principles and prevent it from deceiving users. Unlike most AI models, o3 checks its own work before answering, which helps mitigate common failure modes of AI systems.
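To illustrate the general idea only (this is not OpenAI's actual implementation), such self-verification can be sketched as a two-pass loop: the model drafts an answer, then critiques the draft against written guidelines before deciding what to return. The function names, guideline text, and prompts below are hypothetical stand-ins.

```python
# Illustrative sketch of a "draft, then self-check" loop.
# call_model() is a placeholder for any LLM call; it is NOT OpenAI's API.

SAFETY_GUIDELINES = (
    "Refuse requests for harmful content. "
    "Do not fabricate facts. State uncertainty when unsure."
)


def call_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an HTTP request to an LLM)."""
    raise NotImplementedError("wire this up to an actual model")


def answer_with_self_check(user_query: str) -> str:
    # Pass 1: produce a draft answer.
    draft = call_model(f"Answer the question:\n{user_query}")

    # Pass 2: ask the model to check its own draft against the guidelines,
    # the rough idea behind self-verifying, 'deliberative'-style alignment.
    verdict = call_model(
        "Guidelines:\n" + SAFETY_GUIDELINES
        + f"\n\nDraft answer:\n{draft}\n\n"
        "Does the draft follow the guidelines? "
        "Reply 'OK' or provide a corrected answer."
    )
    return draft if verdict.strip() == "OK" else verdict
```

The extra verification pass is also why this approach costs additional inference time, as described next.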

This verification step adds latency: o3 can take anywhere from several seconds to a minute longer to arrive at a solution. In exchange, it is generally more reliable in domains such as physics, mathematics, and other sciences.

Trained with reinforcement learning, o3 'thinks' before responding using what OpenAI describes as a 'private chain of thought.' After receiving a prompt, it pauses to reason through related questions and work out its own explanation before summarizing what it judges to be the most accurate answer.
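For readers who want to experiment with a reasoning-family model today, the snippet below shows a plain chat-completions call using the official openai Python package. The model name is an assumption: since o3 and o3-mini are not yet publicly available, the example uses the earlier o1-mini, and the reasoning-token usage field may not be reported for every account, model, or SDK version.

```python
# Minimal example with the official OpenAI Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set in the environment.
# NOTE: "o1-mini" is a stand-in; swap in an o3-series model once access opens.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",
    messages=[
        {
            "role": "user",
            "content": "A train travels 120 km in 1.5 hours. "
                       "What is its average speed in km/h?",
        }
    ],
)

# Only the final answer is returned; the intermediate
# "private chain of thought" is not exposed through the API.
print(response.choices[0].message.content)

# Usage details may include a count of hidden reasoning tokens
# (field availability varies by model and SDK version).
details = getattr(response.usage, "completion_tokens_details", None)
if details is not None:
    print("reasoning tokens:", getattr(details, "reasoning_tokens", "n/a"))
```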

In terms of performance, o3 shows a significant improvement on the ARC-AGI benchmark, scoring roughly three times higher than o1 under the most demanding test settings. Yet it still fails some very simple tasks, underscoring fundamental differences from human intelligence.

On other benchmarks, o3 has also outperformed its competitors, though independent evaluations from outside OpenAI are still pending.
