Bonsai-8B
First commercially usable 1-bit LLM: 8B capabilities in 1.15 GB of RAM
PrismML, a Caltech spinout, has shipped Bonsai-8B: the first 1-bit large language model claiming genuine benchmark parity with leading full-precision 8B instruct models while fitting entirely in 1.15 GB of RAM. It runs natively on Apple Silicon via MLX and on NVIDIA GPUs via llama.cpp, with no quantization post-processing required.

The breakthrough isn't just size; it's efficiency. PrismML reports roughly 4-5x better energy efficiency than traditional 8B models, which matters enormously for mobile deployment, embedded systems, and cost-sensitive inference at scale. The Apache 2.0 license carries no commercial restrictions, and the team has published the full training methodology alongside the weights.

Previous 1-bit LLM efforts (BitNet and others) delivered underwhelming benchmark performance at practical scales; Bonsai-8B claims that gap has finally closed. If the benchmarks replicate independently, this could be the model that makes "AI on every device" a 2026 reality rather than a 2028 roadmap item.
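A quick back-of-the-envelope check of the headline numbers helps here. This is a sketch using only the figures quoted above; interpreting "1.15 GB" as decimal gigabytes (10^9 bytes) is an assumption, and real deployments would add overhead for embeddings, the KV cache, and runtime buffers.

```python
# Sanity-check the claimed footprint: can 1.15 GB plausibly hold
# 8 billion parameters at roughly 1-bit precision?
params = 8e9          # 8B parameters, as claimed
ram_bytes = 1.15e9    # 1.15 GB footprint, as claimed (decimal GB assumed)

bits_per_param = ram_bytes * 8 / params
print(f"{bits_per_param:.2f} bits per parameter")  # → 1.15 bits per parameter
```

At about 1.15 bits per parameter, the claim is arithmetically consistent with a 1-bit weight encoding plus a modest amount of overhead, and notably tighter than the ~1.58 bits per weight a raw ternary scheme such as BitNet b1.58 would imply.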
Panel Reviews
The Builder
Developer Perspective
“1.15 GB for a capable 8B model is insane. This fits on a Raspberry Pi 5 with room to spare, and the energy efficiency numbers make it viable for battery-powered edge deployments. The MLX support is a nice touch for Apple Silicon devs. I'm testing this today.”
The Skeptic
Reality Check
“'Benchmark parity with leading 8B models' is a very careful claim — parity on which benchmarks, measured how? 1-bit models have consistently underperformed on reasoning tasks outside their training distribution. Wait for the community to stress-test it before building on it.”
The Futurist
Big Picture
“If 1-bit truly crosses the quality threshold, the implications for AI hardware design are enormous — existing silicon roadmaps assume FP16/BF16, not 1-bit. We're potentially looking at a new class of AI chips that are an order of magnitude cheaper and cooler to run.”
The Creator
Content & Design
“A model that runs on any MacBook — even the base M-chip model — with no cloud connectivity is a creative professional's dream for private workflows. Offline drafting, sensitive client work, rural creative retreats. The small footprint changes what's possible on creative hardware.”
Community Sentiment
“1.15 GB model size for full 8B capability”
“4-5x energy efficiency vs standard 8B”
“MLX + llama.cpp native support”