Chinese AI startup Z.ai, known for its powerful open-source GLM family of large language models (LLMs), has introduced GLM-5-Turbo, a new proprietary variant of its open-source GLM-5 model aimed at agent-driven workflows. The company positions it as a faster model tuned for OpenClaw-style tasks such as tool use, long-chain execution and persistent automation.

It's available now through Z.ai's application programming interface (API) and on third-party provider OpenRouter, with roughly a 202.8K-token context window, a 131.1K-token maximum output, and listed pricing of $0.96 per million input tokens and $3.20 per million output tokens. By our calculations, that makes it about $0.04 cheaper than its predecessor on combined input-plus-output cost (one million tokens of each).

| Model | Input ($/1M tokens) | Output ($/1M tokens) | Total Cost | Source |
|---|---|---|---|---|
| Grok 4.1 Fast | $0.20 | $0.50 | $0.70 | xAI |
| Gemini 3 Flash | $0.50 | $3.00 | $3.50 | Google |
| Kimi-K2.5 | $0.60 | $3.00 | $3.60 | Moonshot |
| GLM-5-Turbo | $0.96 | $3.20 | $4.16 | OpenRouter |
| GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic |
| Qwen3-Max | $1.20 | $6.00 | $7.20 | Alibaba Cloud |
| Gemini 3 Pro | $2.00 | $12.00 | $14.00 | Google |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI |
| GPT-5.4 | $2.50 | $15.00 | $17.50 | OpenAI |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 | Anthropic |
| GPT-5.4 Pro | $30.00 | $180.00 | $210.00 | OpenAI |
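The totals in the table are simply input price plus output price per million tokens; a minimal sketch of the comparison, using a few of the listed prices:

```python
# Per-million-token prices (USD) taken from the comparison table above.
PRICES = {
    "GLM-5-Turbo": (0.96, 3.20),
    "GLM-5": (1.00, 3.20),
    "Claude Haiku 4.5": (1.00, 5.00),
}

def total_cost(model: str) -> float:
    """Combined cost of 1M input tokens plus 1M output tokens."""
    inp, out = PRICES[model]
    return round(inp + out, 2)

# Turbo vs. its predecessor: $4.16 vs. $4.20 blended.
savings = round(total_cost("GLM-5") - total_cost("GLM-5-Turbo"), 2)
print(f"GLM-5-Turbo saves ${savings:.2f} per blended 2M tokens")  # $0.04
```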

Beyond the API, Z.ai is also adding the model to its GLM Coding subscription product, its packaged coding assistant service. That service has three tiers: Lite at $27 per quarter, Pro at $81 per quarter, and Max at $216 per quarter.

Z.ai’s March 15 rollout note says Pro subscribers get GLM-5-Turbo in March, while Lite subscribers get the base GLM-5 in March and must wait until April for GLM-5-Turbo. The company is also taking early-access applications for enterprises via a Google Form, which suggests some users may get access ahead of that schedule depending on capacity.

Z.ai describes GLM-5-Turbo as designed for “fast inference” and “deeply optimized for real-world agent workflows involving long execution chains,” with improvements in complex instruction decomposition, tool use, scheduled and persistent execution, and stability across extended tasks.

The release offers developers a new option for building OpenClaw-style autonomous AI agents, and serves as a signal about where model vendors think enterprise demand is heading: away from chat interfaces and toward systems that can reliably execute multi-step work.

That is now where much of the competition is moving, as well, especially among vendors trying to win developers and enterprise teams building internal assistants, workflow orchestrators and coding agents.

Built for execution, not just conversation

Z.ai’s materials frame GLM-5-Turbo as a model for production-like agent behavior rather than static prompt-response use.

The pitch centers on reliability in practical task flows: better command following, stronger tool invocation, improved handling of scheduled and persistent tasks, and faster execution across longer logical chains. That positioning puts the model squarely in the market for agents that do more than answer questions.

It is aimed at systems that can gather information, call tools, break down instructions and keep working through complex task sequences with less supervision.
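That kind of long-chain behavior is typically implemented as a loop that alternates model calls with tool execution until the model declares itself done. A minimal, model-agnostic sketch, where `call_model` and the `search` tool are illustrative placeholders rather than Z.ai APIs:

```python
# Minimal agent loop: on each step the model either requests a tool call
# or returns a final answer. All names here are hypothetical placeholders.

def call_model(history):
    """Stand-in for an LLM call; a real agent would hit an inference API."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "search", "args": {"query": history[0]["content"]}}
    return {"answer": "done: " + history[-1]["content"]}

TOOLS = {"search": lambda query: f"results for {query!r}"}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_model(history)
        if "answer" in decision:            # model is finished
            return decision["answer"]
        tool_fn = TOOLS[decision["tool"]]   # model asked for a tool
        result = tool_fn(**decision["args"])
        history.append({"role": "tool", "content": result})
    return "stopped: step budget exhausted"
```

The step budget and the per-step tool error rate are exactly where the stability and tool-reliability claims discussed later in this article matter: errors compound across long chains.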

Rather than a straightforward successor to GLM-5, GLM-5-Turbo appears to be a more execution-focused variant: tuned for speed, tool use and long-chain agent stability, while the base GLM-5 remains Z.ai’s broader open-source flagship.

Background: Z.ai and GLM-5 set the stage for Turbo

Founded in 2019 as a Tsinghua University spinoff in Beijing, Z.ai (formerly Zhipu AI) is now one of China's best-known foundation model companies. The company remains headquartered in Beijing and is led by CEO Zhang Peng.

Z.ai listed on the Hong Kong Stock Exchange on January 8, 2026, with shares priced at HK$116.20 and opening at HK$120, for a stated market capitalization of HK$52.83 billion, making it China’s largest independent large language model developer.

As of September 30, 2025 its models had reportedly been used by more than 12,000 enterprise customers, more than 80 million end-user devices and more than 45 million developers worldwide.

Z.ai’s last major release, GLM-5, which debuted in February 2026, gives useful context for what the company is now trying to do with GLM-5-Turbo.

GLM-5 is an open-source flagship model carrying an MIT license. It posted a record-low hallucination score on the AA-Omniscience Index and debuted a native “Agent Mode” that could turn prompts or source materials into ready-to-use .docx, .pdf and .xlsx files.

That earlier release was also framed as a major technical step up for the company. GLM-5 scaled to 744 billion parameters with 40 billion active per token in a mixture-of-experts architecture, used 28.5 trillion pretraining tokens, and relied on a new asynchronous reinforcement-learning infrastructure called “slime” to reduce training bottlenecks and support more complex agentic behavior.

In that light, GLM-5-Turbo looks less like a replacement for GLM-5 than a narrower commercial offshoot: a variant that keeps the long-context, agentic orientation of the flagship line but emphasizes speed, stability and execution in real-world agent chains.

Developer features and model packaging

On the technical side, Z.ai has been packaging the GLM-5 family with the kinds of capabilities developers now expect from serious agent-facing models, including long context handling, tools, reasoning support and structured integrations.

OpenRouter’s GLM-5-Turbo page lists support for tools, tool choice and response formatting, while also surfacing live performance data including average throughput and latency.
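OpenRouter exposes those capabilities through its OpenAI-compatible chat completions endpoint. A hedged sketch of a tool-enabled request body, where the model slug `z-ai/glm-5-turbo` and the `calculator` tool are assumptions for illustration, not confirmed identifiers:

```python
import json

# Builds (but does not send) an OpenRouter chat completions request that
# exercises tools and tool_choice. The model slug is an assumption; check
# OpenRouter's model listing for the actual identifier.
payload = {
    "model": "z-ai/glm-5-turbo",  # assumed slug
    "messages": [{"role": "user", "content": "What is 17 * 23?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "calculator",  # illustrative tool name
            "description": "Evaluate a basic arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
body = json.dumps(payload)
# Sending it would be a POST to
# https://openrouter.ai/api/v1/chat/completions with an
# "Authorization: Bearer <OPENROUTER_API_KEY>" header.
```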

OpenRouter’s provider telemetry adds a useful deployment-level comparison between GLM-5 and GLM-5-Turbo, though the data is not perfectly apples-to-apples because GLM-5 appears across several providers while GLM-5-Turbo is shown only through Z.ai.

On throughput, GLM-5-Turbo averages 48 tokens per second on OpenRouter, which puts it below the fastest GLM-5 endpoints listed there, including Fireworks at 70 tok/s and Friendli at 58 tok/s, but above Together’s 40 tok/s.

On raw first-token latency, GLM-5-Turbo is slower in the available data, posting 2.92 seconds versus 0.41 seconds for Friendli’s GLM-5 endpoint, 1.00 second for Parasail and 1.08 seconds for DeepInfra.

But the picture improves on end-to-end completion time: GLM-5-Turbo is shown at 8.16 seconds, faster than the GLM-5 endpoints, which range from 9.34 seconds on Fireworks to 11.23 seconds on DeepInfra.
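Those three figures hang together under a simple model of completion time, completion ≈ first-token latency + output tokens / throughput, which implies the measured GLM-5-Turbo run generated roughly 250 output tokens. A back-of-envelope check using the numbers above (the linear model is our simplification, not OpenRouter's methodology):

```python
# Back-of-envelope: invert completion ~= latency + tokens / throughput
# using OpenRouter's reported figures for GLM-5-Turbo.
latency_s = 2.92       # first-token latency (seconds)
completion_s = 8.16    # end-to-end completion time (seconds)
throughput_tps = 48    # average tokens per second

est_tokens = (completion_s - latency_s) * throughput_tps
print(round(est_tokens))  # ~252 output tokens in the measured run
```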

The most notable operational advantage is in tool reliability. GLM-5-Turbo shows a 0.67% tool call error rate, materially lower than the GLM-5 providers shown, where error rates range from 2.33% to 6.41%.

For enterprise teams, that suggests a model that may not win on initial responsiveness in its current OpenRouter routing, but could still be better suited to longer agent runs where completion stability and lower tool failure matter more than the fastest first token.

Benchmarking and pricing

Z.ai GLM-5-Turbo benchmarking chart. Credit: Z.ai

A ZClawBench radar chart released by Z.ai shows GLM-5-Turbo as especially competitive in OpenClaw scenarios such as information search and gathering, office and daily tasks, data analysis, development and operations, and automation.

Those are company-supplied benchmark visuals, not independent validation, but they do help explain how Z.ai wants the two models understood: GLM-5 as the broader coding and open flagship, and Turbo as the more targeted agent-execution variant.

A more nuanced licensing signal

One notable caveat is licensing. Z.ai says GLM-5-Turbo is currently closed-source, but it also says the model’s capabilities and findings will be folded into its next open-source model release. That is an important distinction. The company is not clearly promising to open-source GLM-5-Turbo itself.

Instead, it is saying that lessons, techniques and improvements from this release will inform a future open model. That makes the launch more nuanced than a clean break from openness.

Z.ai’s earlier GLM strategy leaned heavily on open releases and open-weight distribution, which helped it build visibility among developers.

China’s AI market may be rebalancing away from open source

GLM-5-Turbo’s licensing posture also lands in a wider Chinese market context that makes the launch more notable than a simple product update.

In recent weeks, reporting around Alibaba’s Qwen unit has raised fresh questions about how China’s leading AI labs will balance open releases with commercial pressure.

Earlier this month, Qwen division head Lin Junyang stepped down, becoming the third senior Qwen executive to leave in 2026, even though Alibaba’s Qwen family remains one of the most prolific open-model efforts anywhere, with more than 400 open-source models released since 2023 and more than 1 billion downloads.

Reuters then reported on March 16 that Alibaba CEO Eddie Wu would take direct control of a newly formed AI-focused business group consolidating Qwen and other units, amid scrutiny over strategy, profitability and the brutal price competition surrounding open-model offerings in China.

Even without overstating those developments, they help frame the broader question hanging over the sector: whether the economics of frontier AI are starting to push even historically open-leaning Chinese labs toward a more segmented strategy.

That does not mean Chinese labs are abandoning open source. But the pattern is becoming harder to ignore: open models help drive adoption, developer goodwill and ecosystem reach, while certain high-value variants aimed at enterprise agents, coding workflows and other commercially attractive use cases may increasingly arrive first as proprietary products.

Seen in that light, GLM-5-Turbo looks like more than a speed-focused product update. It fits a larger possible shift in China's AI market toward the playbook already used by OpenAI, Anthropic and Google in the U.S.: openness as distribution, proprietary systems as business.

That would not mark the end of open-source AI from Chinese labs, but it could mean their most strategically important agent-focused offerings appear first behind closed access, even if some of their underlying advances later make their way into open releases.

For developers evaluating agent platforms, that makes GLM-5-Turbo both a product launch and a useful signal. Z.ai is still speaking the language of open models. But with this release, it is also showing that some of its most commercially relevant work may arrive first as proprietary infrastructure for enterprise-grade agent systems.