Meta's Llama 4: New Multimodal AI Models with Restrictions for EU Developers

Edited by: Veronika Nazarova

Meta has launched its Llama 4 series, a family of multimodal AI models capable of understanding text, images, and video. The series includes Llama 4 Scout, notable for document summarization with a 10 million token context window, and Llama 4 Maverick, aimed at more complex tasks. Both models are natively multimodal, using early fusion to integrate text and vision tokens into a single unified model backbone.

Llama 4 Maverick is a mixture-of-experts model with 400 billion total parameters, of which 17 billion are active per token, spread across 128 experts (see the sketch below for how this kind of routing works). Meta positions it as delivering higher quality at a lower price than Llama 3.3 70B, describes it as best-in-class among multimodal models, claiming it exceeds comparable models such as GPT-4o and Gemini 2.0 Flash on coding, reasoning, multilingual, long-context, and image benchmarks, and says it is competitive with the much larger DeepSeek v3.1 on coding and reasoning.

Meta is integrating Llama 4 into Meta AI across WhatsApp, Messenger, and Instagram. However, developers and companies based in the European Union are barred from using the multimodal models due to regulatory uncertainty surrounding the EU AI Act; this restriction does not apply to end users. In addition, companies with more than 700 million monthly active users need Meta's explicit approval to use Llama 4.
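For readers curious how a 400-billion-parameter model can activate only 17 billion parameters per token, the following is a minimal, generic mixture-of-experts routing sketch in Python (PyTorch). All sizes, names, and the routing scheme here are toy illustrations, not Meta's actual configuration or implementation: a small router scores each token and dispatches it to only its top-scoring expert(s), so total parameter count grows with the number of experts while per-token compute stays small.

# Toy mixture-of-experts layer: total parameters scale with n_experts,
# but each token only runs through its top_k routed experts.
# Sizes are illustrative, not Llama 4's real dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_hidden=256, n_experts=8, top_k=1):
        super().__init__()
        self.top_k = top_k
        # Router produces one score per expert for every token.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.router(x)                         # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep top_k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize selected scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e                   # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * self.experts[e](x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])

Scaled up, this is the general mechanism behind Maverick's design: every expert's weights count toward the 400 billion total, but each token only touches the small subset of experts it is routed to, which is where the 17 billion active-parameter figure comes from.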