Qwen3-Coder-Next

80B MoE coding agent, 3B active params, Apache 2.0, runs on consumer GPU

Qwen3-Coder-Next is the Alibaba Qwen team's open-weight coding agent model: 80B total parameters but only 3B active via a Mixture-of-Experts architecture, making it runnable on consumer hardware (quantized versions work on a $900 RX 7900 XTX GPU). It supports 256k context, integrates natively with Claude Code, Cline, and Cursor, and is Apache 2.0 licensed. The model was trained on 800,000 verifiable coding tasks mined from real GitHub PRs rather than synthetic benchmarks, which contributes to its strong agentic coding performance. It scores 56.32% func-sec@1 on CWEval (a security-focused coding eval), outperforming DeepSeek-V3.2, and is the top recommended local coding model per Latent.Space AINews as of April 2026. It is available directly on Ollama. Qwen3-Coder-Next launched in February 2026 but is trending strongly on GitHub today, driven by fresh community benchmarks showing it holding its own against proprietary models on real-world coding tasks. For developers who want a capable coding agent without API costs or data-sharing concerns, this is currently the best open-weights option.
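Since the model is available on Ollama, a minimal local setup looks roughly like the sketch below. The exact model tag (`qwen3-coder-next` here) is an assumption; check the Ollama model library for the published name.

```shell
# Download the weights locally (model tag is an assumption; verify on ollama.com).
ollama pull qwen3-coder-next

# Chat with the model interactively from the terminal.
ollama run qwen3-coder-next "Write a Python function that parses a CSV header."

# Or run the Ollama server, which exposes an OpenAI-compatible API on
# localhost:11434 that editor integrations can point at.
ollama serve
```

Quantized variants (for the consumer-GPU setups mentioned above) are selected via the tag, e.g. a `:q4_K_M`-style suffix, depending on what the library publishes.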

Panel Reviews

The Builder

Developer Perspective

Ship

A coding agent that runs locally on a consumer GPU, integrates with Claude Code and Cursor, and outperforms DeepSeek-V3.2 on security-focused coding evals — this is exactly what the ecosystem needed. Training on real GitHub PRs rather than synthetic data shows in the output quality. If you're not using this for local-first coding workflows, you're paying API costs you don't need to.

The Skeptic

Reality Check

Skip

56.32% on CWEval is good, but not 'beats Claude' good; that framing in the community oversells it. It's best-in-class for *open weights*, which is a narrower claim. And 'Alibaba open source' carries real enterprise risk: Apache 2.0 today doesn't guarantee the weights stay available or that future releases keep the license. DeepSeek's previous license complications are a useful cautionary tale.

The Futurist

Big Picture

Ship

The fact that you can run a capable coding agent on $900 of consumer hardware — on an open-weights model with no API dependency — is a structural shift in who has access to AI-assisted development. Open-source coding agents at this capability level make serious software development accessible to the long tail of developers globally, not just those with budget for proprietary APIs.

The Creator

Content & Design

Ship

For prototyping and building tools where I don't want my code leaving my machine, this is now my default. The Claude Code integration means I don't have to change my workflow — just swap the backend model. Apache 2.0 means I can actually build products on top of it without legal ambiguity. Strongly recommend.
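The "swap the backend model" workflow can be sketched with Claude Code's environment-variable overrides. This is a sketch, not the official integration: it assumes a local server or proxy exposing an Anthropic-compatible API, and the endpoint URL, token, and model tag below are all placeholders to adjust for your setup.

```shell
# Point Claude Code at a local Anthropic-compatible endpoint instead of the
# hosted API. URL, token, and model name are assumptions; adjust them to match
# whatever server or proxy is fronting the local Qwen3-Coder-Next instance.
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="local-placeholder-token"
export ANTHROPIC_MODEL="qwen3-coder-next"

# Launch Claude Code as usual; it now talks to the local backend.
claude
```

Because the overrides live in environment variables, switching back to the hosted model is just a matter of unsetting them, so the editor workflow itself never changes.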

Community Sentiment

Overall: 1,110 mentions (74% positive, 18% neutral, 8% negative)

Hacker News: 210 mentions (70% positive, 22% neutral, 8% negative)
Top topic: Consumer GPU performance on quantized variants

Reddit: 380 mentions (78% positive, 15% neutral, 7% negative)
Top topic: Open-source vs. proprietary model comparison

Twitter/X: 520 mentions (72% positive, 20% neutral, 8% negative)
Top topic: Apache 2.0 licensing and real GitHub PR training data