On March 16, 2026, Mistral AI released Mistral Small 4, a model that rewrites the playbook for businesses running AI. For the first time, a single open-source model replaces three separate products: Magistral (reasoning), Pixtral (multimodal vision), and Devstral (agentic coding). The result: less infrastructure, less complexity, and stronger performance.
If you already use AI APIs in your workflows, or you are deciding between multiple specialized models, this guide explains exactly what Mistral Small 4 brings, how it compares to the competition, and when it is worth adopting.
Until now, businesses using the Mistral ecosystem had to juggle multiple models:
Mistral Small 3.2 for standard instruction tasks (chat, classification, extraction)
Magistral for deep reasoning (mathematics, complex planning)
Pixtral for image analysis and visual document processing
Devstral for code agents and automation
Mistral Small 4 merges all of this into a single endpoint. You no longer need to maintain four models, four routing pipelines, and four infrastructure cost lines. One deployment, configurable on-the-fly through the `reasoning_effort` parameter.
What sets Mistral Small 4 apart is its ability to adapt behavior per request:
`reasoning_effort="none"`: fast responses, Mistral Small 3.2 style. Ideal for everyday chat, classification, data extraction.
`reasoning_effort="high"`: step-by-step reasoning, Magistral-level. Perfect for math, science, or multi-step planning problems.
In practice, this means an enterprise chatbot can handle 90% of questions in fast mode and automatically switch to deep reasoning for the 10% of complex queries, all without changing models.
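The fast/deep split described above can be sketched as a tiny pre-router. This is an illustrative assumption, not part of the Mistral API: the trigger keywords and the `choose_reasoning_effort` helper are made up for the example; only the `reasoning_effort` values `"none"` and `"high"` come from the model's documented behavior.

```python
def choose_reasoning_effort(query: str) -> str:
    """Toy heuristic router (illustrative only): send most traffic to the
    fast path and reserve step-by-step reasoning for complex queries.
    The marker list below is an assumption, not part of the API."""
    complex_markers = ("prove", "derive", "optimize", "plan", "step by step")
    if any(marker in query.lower() for marker in complex_markers):
        return "high"   # Magistral-style deep reasoning
    return "none"       # fast, Small 3.2-style responses

# The chosen value would then be passed per request as the
# reasoning_effort field of the chat completion call.
print(choose_reasoning_effort("Classify this support ticket"))  # none
print(choose_reasoning_effort("Plan a 3-step data migration"))  # high
```

In production you would likely replace the keyword heuristic with a cheap classifier call, but the shape stays the same: one endpoint, one parameter toggled per request.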
Mistral Small 4 uses a Mixture-of-Experts (MoE) architecture with 128 total experts. For each token processed, only 4 experts are activated. The model therefore has 119 billion total parameters but activates only about 6.5 billion per token.
Think of it as a pool of specialist consultants: instead of calling every consultant for every question, the system automatically selects the 4 most relevant ones. You benefit from the knowledge of 119 billion parameters at the compute cost of a 6 billion model.
95% compute reduction per token compared to a dense 119B model
Knowledge capacity far exceeding any 6-7B dense model
Expert specialization: some experts activate for code, others for language, others for image analysis
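The selection step behind this is top-k gating: a router scores all 128 experts for each token, keeps the 4 highest scores, and softmax-normalizes them into mixing weights. A minimal pure-Python sketch of that standard mechanism (the logits here are random placeholders, not real router weights):

```python
import math
import random

def route(token_logits, k=4):
    """Top-k MoE routing sketch: pick the k highest-scoring experts
    for one token and softmax-normalize their gate weights."""
    topk = sorted(range(len(token_logits)),
                  key=lambda i: token_logits[i], reverse=True)[:k]
    exps = [math.exp(token_logits[i]) for i in topk]
    total = sum(exps)
    return {i: e / total for i, e in zip(topk, exps)}

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(128)]  # one router score per expert
gates = route(logits, k=4)
print(len(gates))                      # 4 experts active out of 128
print(round(sum(gates.values()), 6))   # gate weights sum to 1.0
```

The token's output is then the gate-weighted sum of those 4 expert outputs, which is why compute scales with active parameters, not total parameters.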
| Specification | Value |
|---|---|
| Model name | Mistral Small 4 119B A6B |
| Architecture | Transformer, Mixture-of-Experts (MoE) |
| Total parameters | 119 billion |
| Active parameters per token | ~6.5 billion |
| Number of experts | 128 (4 active per token) |
| Context window | 256,000 tokens |
| Inputs | Text + Image (RGB) |
| Outputs | Text |
| Reasoning mode | Configurable per-request (`reasoning_effort`) |
| Function calling | Native (tool use) |
| JSON output | Native structured output |
| License | Apache 2.0 |
| Release date | March 16, 2026 |
Early benchmarks confirm that Mistral Small 4 performs at the level of the best models in its class while producing significantly shorter outputs.
| Benchmark | Mistral Small 4 | GPT-4o-mini | Phi-4 (14B) |
|---|---|---|---|
| GPQA Diamond | 71.2% | 40.2% | N/A |
| MMLU-Pro | 78.0% | 64.8% | N/A |
On LiveCodeBench, Mistral Small 4 outperforms GPT-OSS 120B while producing 20% shorter responses. On the AA LCR test, the model achieves scores comparable to Qwen 3.5-122B but with outputs of roughly 1,600 characters versus 5,800 to 6,100 for Qwen, that is 3.5 to 4 times less text.
Shorter responses at equal quality directly translate to fewer billed tokens and reduced latency in production.
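The AA LCR character counts above translate into billed output tokens roughly as follows. The ~4 characters-per-token ratio is a common rule of thumb for English text, used here as an assumption:

```python
def est_tokens(chars: int, chars_per_token: float = 4.0) -> int:
    """Rough token estimate (~4 chars/token for English text, an assumption)."""
    return round(chars / chars_per_token)

small4_out = est_tokens(1_600)   # Small 4 output on AA LCR
qwen_out   = est_tokens(5_950)   # midpoint of Qwen's 5,800-6,100 chars

# Billed-output reduction at comparable answer quality
savings = 1 - small4_out / qwen_out
print(small4_out, qwen_out, f"{savings:.0%}")
```

Since output tokens are also generated sequentially, the same ratio shows up in time-to-last-token, not just in the bill.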
| Feature | Mistral Small 4 | GPT-4o-mini | Phi-4 (14B) | Gemma 3 (27B) | Qwen 3.5-122B |
|---|---|---|---|---|---|
| Total parameters | 119B (MoE) | Unknown | 14B | 27B | 122B (MoE) |
| Active parameters | ~6.5B | Unknown | 14B | 27B | ~22B |
| Context | 256K | 128K | 16K | 128K | 262K |
| Vision | Yes | Yes | No | Yes | Yes |
| Configurable reasoning | Yes | No | No | No | Yes |
| Function calling | Native | Native | Yes | Yes | Yes |
| License | Apache 2.0 | Proprietary | MIT | Apache 2.0 | Apache 2.0 |
| Local deployment | Multi-GPU | API only | Single GPU | Single GPU | Multi-GPU |
The real competitive advantage of Mistral Small 4 is not just raw performance. It is the unique combination of:
120B-class performance at 6B-class inference cost
256K token context window (double GPT-4o-mini's 128K), enabling processing of entire contracts, complete codebases, or lengthy reports in a single request
Apache 2.0 license: no vendor lock-in, no commercial restrictions
On-demand reasoning: no need to pay for reasoning cost on every request
| Metric | Improvement |
|---|---|
| End-to-end completion time | 40% faster |
| Requests per second | 3x more throughput |
These gains come directly from the MoE architecture: despite 5 times more total parameters, only 6.5 billion are active per token (versus 24 billion for Small 3). A workflow handling 100 requests per second on Small 3 could handle 300 on Small 4 with the same infrastructure.
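A first-order sanity check of that claim, assuming compute per token scales with active parameters (a simplification that ignores routing overhead and memory bandwidth):

```python
small3_active = 24e9    # Small 3: dense, all 24B parameters used per token
small4_active = 6.5e9   # Small 4: ~6.5B of 119B active per token (MoE)

# First-order model: throughput ceiling = ratio of active parameters
compute_ratio = small3_active / small4_active
print(round(compute_ratio, 1))   # ~3.7x less compute per token

# The observed 3x throughput gain sits below this ceiling, which is
# consistent with routing and memory-bandwidth overheads in practice.
baseline_rps = 100
print(baseline_rps * 3)          # ~300 requests/s on the same hardware
```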
Mistral also provides a companion model for speculative decoding (Mistral-Small-4-119B-2603-eagle), further reducing latency in production.
The simplest option. The model is available through the Mistral API under the identifier mistral-small-latest. Official pricing for Small 4 has not been published yet (as of March 17, 2026), but should fall between Small 3.1 and Medium 3.
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| Mistral Small 3.1 | $0.10-$0.20 | $0.30-$0.60 |
| Mistral Small 4 (estimated) | $0.20-$0.60 | $0.60-$2.00 |
| Mistral Medium 3.1 | $0.40 | $2.00 |
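To turn these per-1M-token figures into a per-request budget, a quick calculation helps. The prices below are the midpoints of the estimated range above, so treat the result as an order-of-magnitude estimate, not a quote:

```python
def request_cost(in_tokens, out_tokens, in_price, out_price):
    """Cost of one request in USD, given per-1M-token prices."""
    return in_tokens / 1e6 * in_price + out_tokens / 1e6 * out_price

# Midpoints of the estimated Small 4 range: $0.40 in / $1.30 out
cost = request_cost(in_tokens=3_000, out_tokens=500,
                    in_price=0.40, out_price=1.30)
print(f"${cost:.5f} per request")
print(f"${cost * 1_000_000:,.0f} per million requests")
```

Output-heavy workloads shift the balance quickly, which is where Small 4's shorter responses matter most.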
Available from day one on NVIDIA NIM, with free access for prototyping. This option is particularly attractive for companies with existing NVIDIA infrastructure, thanks to the NVFP4 checkpoint optimized for H100, H200, and B200 GPUs.
For companies that need to keep data in-house (GDPR compliance, data sovereignty), self-hosted deployment via vLLM is the recommended path. Mistral provides a dedicated Docker image:
```shell
docker pull mistralllm/vllm-ms4:latest
```

Required infrastructure:
Minimum: 4x NVIDIA H100 or 2x H200
Recommended: 4x H100 or 4x H200 for best performance
The Apache 2.0 license means zero API costs: only infrastructure is billable.
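Whether self-hosting actually saves money depends on volume. A rough break-even sketch, where the GPU rental rate and the blended API price are both illustrative assumptions:

```python
def breakeven_tokens_per_month(gpu_monthly_usd, blended_price_per_1m):
    """Monthly token volume above which self-hosting beats the API,
    ignoring ops and engineering overhead. Inputs are assumptions."""
    return gpu_monthly_usd / blended_price_per_1m * 1e6

# Assumption: 4x H100 rented at ~$2/GPU-hour, ~730 hours/month
infra = 4 * 2.0 * 730                       # ~$5,840/month
# Assumption: blended API price of ~$1.00 per 1M tokens
tokens = breakeven_tokens_per_month(infra, 1.00)
print(f"{tokens / 1e9:.1f}B tokens/month")  # break-even volume
```

Below that volume, the managed API is cheaper; above it, the Apache 2.0 self-hosted route wins, with sovereignty as a bonus either way.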
A company currently running Mistral Small 3.2 + Magistral + Pixtral can consolidate everything onto a single Small 4 deployment. Less maintenance, lower infrastructure costs, one monitoring point.
With 256,000 tokens of context, Mistral Small 4 can ingest a full contract, a 200-page financial report, or an entire codebase in a single request. This dramatically simplifies RAG pipelines that previously required complex document chunking.
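A quick check that the 200-page example really fits. The words-per-page and tokens-per-word figures are rough assumptions for dense English prose:

```python
def fits_in_context(pages, words_per_page=500, tokens_per_word=1.3,
                    context=256_000):
    """Estimate a document's token count and whether it fits in one
    request. Per-page figures are assumptions (dense English prose)."""
    tokens = int(pages * words_per_page * tokens_per_word)
    return tokens, tokens <= context

tokens, ok = fits_in_context(200)   # the 200-page report example
print(tokens, ok)   # fits, with headroom left for the prompt and output
```

At ~130K estimated tokens, the report uses about half the window, leaving room for system prompts, retrieved context, and the response.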
A conversational assistant that responds in fast mode for 90% of questions but automatically activates deep reasoning for complex queries. One model, one endpoint, one bill.
European model (Mistral AI is based in Paris), Apache 2.0 license, self-hostable: no data leaves your infrastructure. This is a strong argument for regulated industries (finance, healthcare, public sector).
Despite its strengths, Mistral Small 4 has several limitations to evaluate:
Demanding infrastructure for self-hosting: 119 billion parameters require at minimum 4 H100 GPUs. This is not a model you will run on a laptop or a single GPU.
llama.cpp support under development: at launch, compatibility with llama.cpp (and therefore Ollama) is not yet finalized. A PR is open on the official repository.
API pricing not yet published: official pricing through the Mistral API was not available at release time. Check the Mistral pricing page for updates.
No lightweight version (Ministral 4): unlike the Small 3 family which offered 3B, 8B, and 14B variants, there is no small companion model yet for use cases requiring lightweight deployment.
Training data not disclosed: Mistral has not published information about the training dataset.
Workaround required for Transformers: the FP8 weight format requires manual conversion to BF16 to work with HuggingFace Transformers.
Mistral Small 4 represents a significant step forward for businesses seeking a versatile, high-performance, and sovereign AI model. The promise of a single model replacing three separate deployments is concrete and verifiable.
If you already use the Mistral ecosystem, the migration is straightforward. If you are evaluating alternatives to GPT-4o-mini for cost, performance, or data sovereignty reasons, Mistral Small 4 deserves serious testing.
The model is available now on Hugging Face, via the Mistral API, and on NVIDIA NIM.
