Qwen3.6-35B-A3B
35B MoE model with only 3B active params that beats dense models 10× its active size
Alibaba's Qwen team has released Qwen3.6-35B-A3B, a Mixture-of-Experts model that activates just 3 billion parameters per forward pass while drawing on 35 billion total. The result is frontier coding performance at the inference cost of a small model: on agentic coding benchmarks it outperforms dense models with 10× its active parameter count. The native context window is 262K tokens, extensible to 1,010,000 tokens for long-document tasks.

A standout feature is "thinking preservation": the model retains reasoning context across turns in iterative development sessions, reducing the need to re-explain state in long agent loops.

GGUF quantizations from Unsloth are already live for local use via Ollama, LM Studio, and llama.cpp, and at Q4_K_M the model fits within the VRAM budget of a single 24 GB GPU.

For developers, Qwen3.6-35B-A3B represents a genuinely efficient path to near-frontier coding capability without paying frontier API prices or needing server-grade hardware. The Apache 2.0 license permits unrestricted commercial use, making it a strong candidate for self-hosted coding agent backends.
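The 24 GB claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below is a minimal estimate, assuming typical average bits-per-weight figures for GGUF K-quants (K-quants mix block formats, so these are approximations, not exact values for this model), and it counts weights only, not KV cache or runtime overhead:

```python
# Back-of-envelope VRAM estimate for a 35B-total-parameter model under
# common GGUF quantization levels. In a MoE model, all 35B weights must
# be resident even though only 3B are active per token.
TOTAL_PARAMS = 35e9

# Approximate average bits per weight (assumptions, not exact per-model values).
BITS_PER_WEIGHT = {
    "Q8_0": 8.5,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.85,
}

def weight_gib(params: float, bits: float) -> float:
    """Memory for the weights alone, in GiB (excludes KV cache and overhead)."""
    return params * bits / 8 / 2**30

for quant, bits in BITS_PER_WEIGHT.items():
    print(f"{quant}: ~{weight_gib(TOTAL_PARAMS, bits):.1f} GiB for weights")
```

At Q4_K_M this lands around 20 GiB of weights, which leaves only a few GiB of a 24 GB card for KV cache, so very long contexts will be the constraint rather than the weights themselves.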
Panel Reviews
The Builder
Developer Perspective
“If you're running a self-hosted coding agent and paying $X/month in API bills, this is your exit ramp. 3B active params means a single 4090 can serve it comfortably, and the 262K context actually handles real codebases. Ship it as your backend and tune from there.”
The Skeptic
Reality Check
“We've seen 'beats models 10× its size' claims before — benchmark cherry-picking is rampant. The thinking preservation feature sounds promising, but agentic loop reliability is something you discover in production, not on leaderboards. Run your own evals before committing an entire stack to this.”
The Futurist
Big Picture
“MoE is increasingly the dominant paradigm on the efficiency frontier, and this is one of the clearest demonstrations of why. 3B active params with 35B of effective capacity is not a trick — it's an architecture win. The line between 'local model' and 'frontier model' is blurring faster than anyone predicted.”
The Creator
Content & Design
“1M token context on a local model is a game-changer for creative workflows — entire novel manuscripts, full design system docs, long-form scripts fit in a single window. The zero API cost means no throttling during high-creativity sprints. This earns a spot in the local toolkit.”
Community Sentiment
“Inference cost vs. quality tradeoff at 3B active params”
“GGUF availability and VRAM requirements on consumer GPUs”
“Apache 2.0 license and commercial use implications”