Mistral AI Unveils Powerful Open Source Models for Advanced AI Applications
Redazione RHC: 3 December 2025 08:32

French company Mistral AI has unveiled its Mistral 3 line of models, releasing them fully open source under the Apache 2.0 license. The series includes several compact dense models with 3, 8, and 14 billion parameters, as well as the flagship Mistral Large 3. The flagship is a Mixture-of-Experts model with 41 billion active parameters out of 675 billion total, which the company calls its most powerful model to date.
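A Mixture-of-Experts model activates only a subset of its weights for each token, which is why the two parameter counts above differ so sharply. A quick back-of-the-envelope calculation using the article's figures:

```python
# Sparsity check for Mistral Large 3, using the figures quoted above.
total_params = 675e9   # total parameters across all experts
active_params = 41e9   # parameters actually used per token

active_fraction = active_params / total_params
print(f"~{active_fraction:.1%} of the weights are active per token")
```

So each token touches only about 6% of the model, which is what lets a 675B-parameter model run with the compute cost of a much smaller dense one.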

Mistral Large 3 was trained from scratch on approximately 3,000 NVIDIA H200 GPUs. After post-training, the model reached the level of the best open-weight models on general-purpose queries, gained image understanding, and demonstrated strong multilingual performance, especially in languages other than English and Chinese.

In the LMArena ranking of open-source models not specifically designed for complex reasoning, Mistral Large 3 debuted in second place, and it entered the top ten among all open-source models overall.

The developers immediately released both the base and instruct versions of Mistral Large 3; a separate reasoning-focused version has been promised for later. These open weights are intended as a starting point for customization to business needs, including on customers' own infrastructure.

To simplify deployment, Mistral is collaborating with NVIDIA, vLLM, and Red Hat. A Mistral Large 3 checkpoint quantized to the NVFP4 format, prepared with the llm-compressor project, has been published; it can run efficiently on Blackwell NVL72 systems, as well as on nodes with eight A100 or H100 GPUs using vLLM. NVIDIA has added optimized attention and MoE kernels for the new architecture, support for disaggregated prefill and decode, and, in collaboration with Mistral, implemented speculative decoding. The entire Mistral 3 range is supported in TensorRT-LLM and SGLang, enabling maximum performance at low precision and long context.
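As a rough sketch of what such a vLLM deployment could look like on an eight-GPU node (the Hugging Face model identifier below is an assumption for illustration; check the published collection for the actual name):

```shell
# Hypothetical sketch: serve a Mistral Large 3 checkpoint with vLLM.
# The model id is an assumed placeholder, not a confirmed release name.
# --tensor-parallel-size shards the model across the node's eight GPUs.
vllm serve mistralai/Mistral-Large-3 \
    --tensor-parallel-size 8

# vLLM exposes an OpenAI-compatible API (port 8000 by default):
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "mistralai/Mistral-Large-3",
         "messages": [{"role": "user", "content": "Bonjour!"}]}'
```

This is a deployment-shape sketch only; the NVFP4 checkpoint, context length, and exact flags should be taken from the official documentation for the release.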

For edge and local scenarios, Mistral offers the Ministral 3 family: three models with 3, 8, and 14 billion parameters, each available in base, instruct, and reasoning versions, all capable of handling images. Thanks to multilingual and multimodal support, they are positioned as a universal suite for a variety of business and development needs, from online services to applications running locally or on embedded devices.

Particular emphasis is placed on efficiency. According to Mistral, Ministral 3 offers the best cost-effectiveness among open-source models in its category. The instruct versions match or surpass their counterparts in accuracy, while in real-world scenarios they often generate an order of magnitude fewer tokens, reducing latency and cost.
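Why fewer output tokens translate directly into savings is simple arithmetic. A toy calculation (all numbers below are hypothetical placeholders, not Mistral's pricing):

```python
# Toy illustration (hypothetical numbers): at the same per-token price,
# an answer using 10x fewer tokens costs 10x less to generate.
price_per_million_tokens = 0.10   # hypothetical $ per 1M output tokens
verbose_answer_tokens = 2_000     # a chatty model's answer
concise_answer_tokens = 200       # an order of magnitude fewer tokens

verbose_cost = verbose_answer_tokens / 1e6 * price_per_million_tokens
concise_cost = concise_answer_tokens / 1e6 * price_per_million_tokens
print(f"cost ratio: {verbose_cost / concise_cost:.0f}x")
```

Output-token count also bounds latency, since tokens are generated sequentially, so the same ratio roughly applies to response time.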

When accuracy matters more than speed, the reasoning variants can spend more compute to produce a more accurate answer. As an example, Mistral cites Ministral 3 14B, which scored around 85% in its category on the AIME 2025 math-olympiad benchmark.

All of these models are designed not only for large data centers, but also for edge systems. NVIDIA offers Ministral distributions optimized for DGX Spark workstations, RTX-equipped PCs and laptops, and Jetson Orin platforms. This means the same model stack can be used for applications ranging from robotics and smart devices to cloud services.

The Mistral 3 family is already available in Mistral AI Studio, integrated with Amazon Bedrock, Azure Foundry, IBM WatsonX, OpenRouter, Fireworks, and Together AI, and is also available as open weights in the Mistral Large 3 and Ministral 3 collections on Hugging Face.

Select partners, such as Modal and Unsloth AI, offer ready-to-use solutions for inference and fine-tuning. Support in NVIDIA NIM and AWS SageMaker is promised soon.

For companies looking for a solution better suited to their industry challenges and data, Mistral offers custom model training services. Additionally, detailed technical documentation for several configurations, including Ministral 3 3B-25-12, Ministral 3 8B-25-12, Ministral 3 14B-25-12, and Mistral Large 3, is available on the website, along with materials on AI governance and risk in the AI Governance Hub.

That said, artificial intelligence is clearly moving toward high-performance open and open-source models. Companies now have a choice: build in-house, interoperable clusters with their own specialists, or hand their data over to OpenAI and Google.

  • AI Applications
  • AI Development
  • AI Models
  • Custom AI Solutions
  • Efficient AI
  • High-Performance Computing
  • Machine Learning
  • Mistral 3
  • Mistral AI
  • Natural Language Processing
  • Open Source AI
Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.
