Paris-based Mistral AI has launched Mistral Small 3.1, a new open-source AI model.
The company claims it outperforms comparable models from OpenAI and Google. The model processes both text and images with only 24 billion parameters, making it smaller and more efficient than many competing models. Mistral Small 3.1 offers improved text performance, multimodal processing, and a context window of up to 128,000 tokens, and it generates output at roughly 150 tokens per second.
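To give a sense of how the multimodal capability might be exercised, here is a minimal sketch of a text-plus-image request against Mistral's chat completions endpoint. The model alias ("mistral-small-latest"), the image-content format, and the example URL are assumptions to verify against Mistral's API documentation, not confirmed details from the announcement.

```python
import os
import requests

# Hedged sketch: model alias and image-content format are assumptions;
# consult Mistral's API documentation for the exact request schema.
API_URL = "https://api.mistral.ai/v1/chat/completions"

headers = {
    "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
    "Content-Type": "application/json",
}

payload = {
    "model": "mistral-small-latest",  # assumed alias pointing at Mistral Small 3.1
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart in one sentence."},
                {"type": "image_url", "image_url": "https://example.com/chart.png"},  # placeholder image
            ],
        }
    ],
}

response = requests.post(API_URL, headers=headers, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```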
Mistral AI focuses on algorithmic improvements and optimization techniques to maximize the performance of compact model architectures. This approach aims to make AI more accessible, allowing powerful models to run on smaller devices.
Mistral Small 3.1 is available for download via Hugging Face and accessible through the Mistral API and Google Cloud's Vertex AI. It will soon be offered via Nvidia's NIM microservices and Microsoft Azure AI Foundry.
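As a rough illustration of the Hugging Face route, the following sketch pulls the model weights locally with the huggingface_hub library. The repository id used here is an assumption and should be checked against Mistral AI's organization page on Hugging Face.

```python
from huggingface_hub import snapshot_download

# Hedged sketch: the repo_id below is an assumption; verify the exact name
# of the Mistral Small 3.1 weights on Mistral AI's Hugging Face page.
local_path = snapshot_download(
    repo_id="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    local_dir="./mistral-small-3.1",
)

print(f"Model weights downloaded to {local_path}")
```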