GLM-5.1

#1 on SWE-Bench Pro — Zhipu's open 754B MoE beats GPT-5 on coding

Z.ai (formerly Zhipu AI) has released GLM-5.1, a 754B-parameter Mixture-of-Experts model currently sitting at #1 on SWE-Bench Pro with a score of 58.4, ahead of GPT-5.4 and Claude Opus 4.6 on long-horizon software engineering tasks. The model ships under the MIT license with full weights on HuggingFace.

GLM-5.1 was designed specifically for agentic software engineering workflows: multi-file reasoning, autonomous test-run-fix loops, and extended coding sessions that span hundreds of tool calls. It is an efficiency story as well as a capability leap: because the MoE routing is sparse, only a fraction of the 754B total parameters is active for any given token, so on a sufficiently provisioned cluster it can be served more cheaply than a dense model of equivalent capability.

The SWE-Bench Pro result is significant because that benchmark is harder to game than vanilla SWE-Bench Verified. It tests whether a model can resolve real GitHub issues with correct tests, proper diffs, and no regressions: the things that actually matter in production. For anyone running self-hosted coding agents or building on open models, GLM-5.1 just became the new baseline to beat.
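The efficiency claim boils down to simple arithmetic. Here is a rough sketch of how sparse top-k routing reduces per-token compute; note that the expert count, routing width, and share of always-on parameters below are all hypothetical, since Z.ai has not published GLM-5.1's routing configuration:

```python
def active_fraction(total_params: float, n_experts: int, top_k: int,
                    shared_ratio: float = 0.2) -> float:
    """Estimate the share of parameters used per token in a top-k MoE.

    shared_ratio: portion of the model (attention, embeddings, etc.)
    that runs for every token regardless of expert routing. The 0.2
    default is an illustrative assumption, not a published figure.
    """
    expert_params = total_params * (1 - shared_ratio)  # split across experts
    per_expert = expert_params / n_experts
    active = total_params * shared_ratio + per_expert * top_k
    return active / total_params

# Hypothetical configuration: 754B total, 128 experts, 8 routed per token.
frac = active_fraction(754e9, n_experts=128, top_k=8)
print(f"roughly {frac * 754:.1f}B of 754B parameters active per token")
```

With these illustrative numbers, about a quarter of the weights participate in each forward pass, which is where the "cheaper than an equivalent dense model" argument comes from.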

Panel Reviews

The Builder

Developer Perspective

Ship

If the SWE-Bench Pro numbers hold up under independent replication, this is the first open model that can genuinely replace a proprietary API for serious agentic coding work. MIT license means you can fine-tune and deploy on your own infra. This is a big deal.

The Skeptic

Reality Check

Skip

A 754B-parameter model is not something 99% of developers can run locally; you need a multi-GPU cluster or serious cloud spend. The benchmark numbers also come from Z.ai's own evaluations, and Zhipu has a history of optimistic benchmarking. Wait for independent replications.

The Futurist

Big Picture

Ship

A Chinese lab shipping an MIT-licensed model that tops global coding benchmarks is a watershed moment for open-source AI. The geopolitical implications are real: this is the model that makes US export controls look strategically shortsighted.

The Creator

Content & Design

Skip

Unless you're building coding tools or agent infrastructure, a 754B MoE model doesn't move the needle for creative applications. The energy and infra overhead for creative use cases doesn't pencil out versus smaller, cheaper models.

Community Sentiment

Overall (1,310 mentions): 68% positive, 20% neutral, 12% negative

Hacker News (310 mentions): 65% positive, 22% neutral, 13% negative

SWE-Bench Pro #1 ranking vs GPT-5

Reddit (420 mentions): 70% positive, 18% neutral, 12% negative

MIT license on 754B model

Twitter/X (580 mentions): 68% positive, 20% neutral, 12% negative

Open weights beating proprietary frontier models