Enterprises that have been juggling separate models for reasoning, multimodal tasks, and agentic coding may be able to simplify their stack: Mistral’s new Small 4 brings all three into a single open-source model, with adjustable reasoning levels under the hood.
Small 4 enters a crowded field of small models — including Qwen and Claude Haiku — that are competing on inference cost and benchmark performance. Mistral’s pitch: shorter outputs that translate to lower latency and cheaper tokens.
Mistral Small 4 updates Mistral Small 3.2, which came out in June 2025, and is available under an Apache 2.0 license. “With Small 4, users no longer need to choose between a fast instruct model, a powerful reasoning engine, or a multimodal assistant: one model now delivers all three, with configurable reasoning effort and best-in-class efficiency,” Mistral said in a blog post.
The company said that despite its smaller size — Mistral Small 4 has 119 billion total parameters with only 6 billion active parameters per token — the model combines the capabilities of all Mistral’s models. It has the reasoning capabilities of Magistral, the multimodal understanding of Pixtral, and the agentic coding performance of Devstral. It also has a 256K context window that the company said works well for long-form conversations and analysis.
Rob May, co-founder and CEO of the small language model marketplace Neurometric, told VentureBeat that Mistral Small 4 stands out for its architectural flexibility. However, it joins a growing number of small models that, he said, risk adding more fragmentation to the market.
"From a technical perspective, yes, it can be competitive against other models,” May said. “The bigger issue is that it has to overcome market confusion. Mistral has to win the mindshare to get a shot at being part of that test set first. Only then can they show the technical capabilities of the model.”
Reasoning on demand
Small models remain a good option for enterprise builders who want an LLM-grade experience at a lower cost.
The model is built on a mixture-of-experts architecture, much like other Mistral models. It features 128 experts, with four active per token, which Mistral says enables efficient scaling and specialization.
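To make the "128 experts, four active per token" idea concrete, here is a minimal sketch of top-k mixture-of-experts routing. The expert count and active count follow Mistral's description; the router weights, hidden size, and everything else here are illustrative assumptions, not Mistral's actual implementation.

```python
import numpy as np

NUM_EXPERTS = 128   # experts per MoE layer, per Mistral's description
TOP_K = 4           # experts activated for each token

rng = np.random.default_rng(0)

def route_token(hidden, router_weights):
    """Pick the TOP_K highest-scoring experts for one token and
    softmax-normalize their scores into mixing weights."""
    logits = hidden @ router_weights             # shape: (NUM_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]            # indices of the 4 best experts
    scores = np.exp(logits[top] - logits[top].max())
    return top, scores / scores.sum()            # chosen experts and their weights

hidden = rng.standard_normal(64)                 # toy hidden state for one token
router = rng.standard_normal((64, NUM_EXPERTS))  # toy router projection
experts, weights = route_token(hidden, router)
```

Only the four selected experts run for that token, which is why a 119B-parameter model can activate just 6B parameters per token.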
This allows Mistral Small 4 to respond faster, even on more reasoning-intensive tasks. It can also process and reason over both text and images, letting users parse documents and graphs.
Mistral said the model features a new parameter, reasoning_effort, that lets users “dynamically adjust the model’s behavior.” Enterprises can configure Small 4 to deliver fast, lightweight responses in the style of Mistral Small 3.2, or to produce longer, Magistral-style outputs with step-by-step reasoning for complex tasks, according to Mistral.
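In practice, toggling such a parameter might look like the sketch below. The name reasoning_effort comes from Mistral's announcement, but the request shape, model identifier, and accepted values here are assumptions for illustration, not documented API behavior.

```python
# Hypothetical sketch of building a chat request with a reasoning-effort knob.
# Everything except the parameter name "reasoning_effort" is an assumption.

def build_chat_request(prompt: str, reasoning_effort: str = "low") -> dict:
    """Assemble an OpenAI-style chat payload with a reasoning-effort setting."""
    if reasoning_effort not in {"low", "medium", "high"}:  # assumed values
        raise ValueError(f"unknown reasoning_effort: {reasoning_effort}")
    return {
        "model": "mistral-small-4",  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": reasoning_effort,
    }

fast = build_chat_request("Summarize this contract clause.")        # Small-3.2-style
deep = build_chat_request("Prove the bound step by step.", "high")  # Magistral-style
```

The appeal is operational: one deployed model, with the latency/depth trade-off made per request rather than by routing to a different model.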
Mistral said Small 4 runs on fewer chips than comparable models, with a recommended setup of four Nvidia HGX H100s or H200s, or two Nvidia DGX B200s.
“Delivering advanced open-source AI models requires broad optimization. Through close collaboration with Nvidia, inference has been optimized for both open source vLLM and SGLang, ensuring efficient, high-throughput serving across deployment scenarios,” Mistral said.
Benchmark performances
According to Mistral's benchmarks, Small 4 performs close to the level of Mistral Medium 3.1 and Mistral Large 3, particularly in MMLU Pro.
Mistral said the instruction-following performance makes Small 4 suited for high-volume enterprise tasks such as document understanding.
While competitive with small models from other companies, Small 4 still trails other popular open-source models, especially on reasoning-intensive tasks. Qwen 3.5 122B and Qwen 3-next 80B outperform Small 4 on LiveCodeBench, as does Claude Haiku in instruct mode.

Mistral Small 4 was able to beat OpenAI’s GPT-OSS 120B in the LCR.
Mistral argues that Small 4 achieves these scores with “significantly shorter outputs” that translate to lower inference costs and latency than the other models. In instruct mode specifically, Small 4 produces the shortest outputs of any model tested — 2.1K characters vs. 14.2K for Claude Haiku and 23.6K for GPT-OSS 120B. In reasoning mode, outputs are much longer (18.7K), which is expected for that use case.
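The cost argument from those character counts can be sketched with back-of-envelope arithmetic. The characters-per-token ratio and the per-token price below are illustrative assumptions, not published figures; only the character counts come from the benchmarks above.

```python
CHARS_PER_TOKEN = 4        # rough English-text average (assumption)
PRICE_PER_M_TOKENS = 0.60  # hypothetical $ per 1M output tokens (assumption)

def output_cost(chars: int) -> float:
    """Estimated dollar cost of one response of `chars` characters."""
    return chars / CHARS_PER_TOKEN / 1_000_000 * PRICE_PER_M_TOKENS

small4  = output_cost(2_100)   # Small 4, instruct mode
haiku   = output_cost(14_200)  # Claude Haiku
gpt_oss = output_cost(23_600)  # GPT-OSS 120B

ratio = haiku / small4  # Haiku emits roughly 6.8x the output per response
```

Whatever the actual per-token price, the ratio holds: at equal pricing, a 2.1K-character response costs a fraction of a 14.2K one, and shorter outputs also finish sooner.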
May said that while model choice depends on an organization’s goals, latency is one of the three pillars they should prioritize. “It depends on your goals and what you are optimizing your architecture to accomplish. Enterprises should prioritize these three pillars: reliability and structured output, latency to intelligence ratio, fine-tunability and privacy,” May said.
