Google Unveils Gemini 2.0 Flash and Deep Research Features, Marking a New Era in AI Technology

On December 11, 2024, Google announced the launch of Gemini 2.0 Flash, the first model in its next-generation Gemini 2.0 family. The experimental model is available immediately in the web-based Gemini app, with integration into the mobile app planned shortly. Gemini 2.0 Flash is designed for speed and extends the capabilities of its predecessor, Gemini 1.5 Flash, by adding multimodal inputs and outputs.

The new model can process text, images, and audio, generating content across these formats. Furthermore, it can utilize tools like Google Search and execute user-defined functions. Developers can access this version through the Gemini API in Google AI Studio and Vertex AI, with broader access expected by January 2025.

Google CEO Sundar Pichai described the launch as a significant advancement in AI, emphasizing the potential for developing new AI agents that bring the company closer to creating a universal assistant.

In addition, Google introduced a feature called Gemini Deep Research, available to Gemini Advanced subscribers. The feature lets Gemini spawn multiple instances of itself that scour the web for information based on a user's prompt, returning a detailed report with links to sources. The system runs on the Gemini 1.5 Pro model, employing these parallel instances to gather and analyze data efficiently.

Gemini Deep Research aims to assist users with complex research tasks, producing structured reports complete with citations and offering the ability to refine results on request. While the feature is currently limited to Gemini Advanced subscribers through the Google One AI Premium plan, it represents a significant step toward integrating AI agents into mainstream applications.
