OpenAI Unveils o1: A Leap Toward AI Reasoning

Edited by: Anna 🌎 Krasko

This week, OpenAI launched what its chief executive, Sam Altman, called "the smartest model in the world" -- a generative-AI program whose capabilities are supposedly far greater than those of any preceding software, and which more closely approximates how humans think. The start-up has been building toward this moment since September 12, a day that, in OpenAI's telling, set the world on a new path toward superintelligence.

On that day, the company previewed early versions of a series of AI models, known as o1, built with novel methods that the start-up believes will propel its programs to unseen heights. Mark Chen, then OpenAI's vice president of research, said that o1 is fundamentally different from the standard ChatGPT because it can "reason," a hallmark of human intelligence. Shortly thereafter, Altman proclaimed "the dawn of the Intelligence Age," in which AI would help humankind address climate change and explore space.

Yesterday afternoon, the start-up released the first complete version of o1, with fully fledged reasoning powers, to the public. To skeptics, the latest rhetoric may sound like just another round of the hype that has built OpenAI's $157 billion valuation. The exact methods behind OpenAI's chatbot technology remain largely unknown, and o1 is its most secretive release yet. Emily M. Bender, a computational linguist at the University of Washington, described the situation as "a magic trick."

Despite the skepticism, several independent researchers have acknowledged that o1 represents a notable departure from older models, describing it as "a completely different ballgame" and a "genuine improvement." OpenAI has recently faced controversies and high-profile departures, and the pace of model improvement across the AI industry has slowed. Competing products from different companies have become harder to tell apart, making it more difficult for firms to justify the technology's immense costs.

OpenAI's new reasoning models show significant improvements over other programs on coding, math, and science problems, drawing praise from experts in those fields. Notably, those gains do not appear to come from simply building a better word predictor. Recent reporting suggests that major AI companies are reaching the limits of the technical approach that has driven the AI revolution, with word-predicting models no longer reliably becoming more capable as they grow.

Mark Chen explained that o1 is different because it is not trained to predict human-written text so much as to produce, or at least simulate, thoughts of its own. This shift is meant to address a core gap in previous models, which could mimic the surface of reasoning without actually performing it. The o1 series appears "categorically different" from older GPT models, according to a growing body of research on AI reasoning.
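
To make the distinction Chen describes more concrete, here is a minimal, purely illustrative Python sketch. It contrasts a one-shot answer with an answer built from explicit intermediate steps; the function names and the toy arithmetic are invented for illustration and say nothing about how o1 is actually trained or run.

```python
# Toy illustration only: contrasting a one-shot answer with answering via
# explicit intermediate steps. This is NOT OpenAI's method; o1's actual
# training and inference procedures are not public.

def answer_directly(question: str) -> str:
    """Mimics a plain next-word predictor: emit an answer with no visible steps."""
    # A real model would sample the most likely continuation; here it is hard-coded.
    return "408"

def answer_with_reasoning(a: int, b: int) -> tuple[list[str], int]:
    """Mimics a 'reasoning' model: produce intermediate steps, then a final answer."""
    steps = []
    # Decompose the multiplication the way a worked solution might.
    tens = a * (b // 10) * 10
    ones = a * (b % 10)
    steps.append(f"{a} x {b // 10 * 10} = {tens}")
    steps.append(f"{a} x {b % 10} = {ones}")
    steps.append(f"{tens} + {ones} = {tens + ones}")
    return steps, tens + ones

if __name__ == "__main__":
    print("Direct answer:", answer_directly("What is 17 * 24?"))
    steps, result = answer_with_reasoning(17, 24)
    print("Reasoned answer:")
    for step in steps:
        print("  ", step)
    print("Final:", result)
```

The point of the toy is only that the second function exposes intermediate work that can be checked, which is the behavior OpenAI says distinguishes its reasoning models.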

OpenAI's reasoning models attempt to navigate a statistical model of the world to solve problems, somewhat like a rodent working through a maze, whereas previous models merely found patterns in their training data. Yet o1 still faces familiar limitations: it performs better on tasks for which it has seen more training examples. Critics argue that while o1 can query itself to refine its responses, it remains limited to reapplying what it already knows.
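
The self-querying behavior critics mention can be pictured as a loop in which the model drafts an answer, checks it, and revises. The Python sketch below is a hypothetical toy with hard-coded drafts; it shows the shape of such a loop, not anything about o1's internals, and it still "knows" only what was put into it.

```python
# Illustrative generate -> critique -> revise loop, the general pattern often
# called "self-querying." Hypothetical code; not OpenAI's implementation.

def draft_answer(question: str, attempt: int) -> int:
    """Stand-in for a model proposing an answer; gets closer on each attempt."""
    guesses = [100, 140, 144]  # pretend successive drafts
    return guesses[min(attempt, len(guesses) - 1)]

def critique(question: str, answer: int) -> str | None:
    """Stand-in for the model checking its own work; returns a complaint or None."""
    if answer != 12 * 12:
        return f"{answer} is not 12 squared; recompute."
    return None

def solve_with_self_queries(question: str, max_rounds: int = 5) -> int:
    answer = draft_answer(question, 0)
    for round_ in range(1, max_rounds):
        problem = critique(question, answer)
        if problem is None:  # the draft survives its own review
            break
        answer = draft_answer(question, round_)  # revise and try again
    return answer

if __name__ == "__main__":
    print(solve_with_self_queries("What is 12 squared?"))  # -> 144
```

Because both the drafts and the check come from the same source, the loop can only correct errors it is already able to recognize, which is the critics' point about reapplying existing knowledge.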

OpenAI is taking a long view, asserting that reasoning models explore different hypotheses like humans do. If scaling large language models is indeed hitting a wall, reasoning might be the next frontier for AI development. Other companies, including Google and several Chinese tech firms, are also exploring similar reasoning approaches.

In summary, OpenAI's o1 represents a significant step toward AI reasoning, but it remains to be seen whether this approach will lead to the superintelligence OpenAI envisions.
