Sleeping Robots / Hacker News · Opinion · 2026-04-16

"Stop Using Ollama" Goes Viral — The OSS Local AI Tooling Reckoning

A blog post arguing against Ollama hit the top of Hacker News, sparking intense debate over VC-backed local AI tools, OSS licensing, and whether llama.cpp should be used directly instead of abstractions built on top of it.


A post titled "Stop Using Ollama" by an author at Sleeping Robots hit #1 on Hacker News today, drawing over 400 comments and prompting a genuine community reckoning about the local AI tooling ecosystem.

The core argument: Ollama's original README acknowledged that its model serving was "just llama.cpp." Then the team quietly forked llama.cpp, introduced incompatibilities, added closed-source components after taking VC funding, and pivoted toward cloud features. The author contends this amounts to a bait-and-switch on the open-source community's trust.

The recommended alternatives are direct: use llama.cpp's built-in OpenAI-compatible server (it's been there since 2023), or use LM Studio, Jan, koboldcpp, or ramalama for a GUI experience without the Ollama baggage. Each is either fully open source or at least transparent about what's proprietary.
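To make the "use llama.cpp directly" path concrete: llama.cpp ships a `llama-server` binary that exposes an OpenAI-compatible HTTP API, so existing OpenAI-style client code can point at a local model. A minimal sketch using only the Python standard library, assuming a `llama-server` instance already running on `localhost:8080` (the port, model name, and prompt here are placeholders, not values from the post):

```python
import json
import urllib.request

# Assumes llama.cpp's server was started locally, e.g.:
#   llama-server -m some-model.gguf --port 8080
# which serves OpenAI-compatible routes under /v1.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(messages, model="local", temperature=0.7):
    """Build a JSON body in the OpenAI chat-completions shape."""
    return {"model": model, "messages": messages, "temperature": temperature}

def chat(messages):
    """POST to the local server; requires llama-server to be running."""
    body = json.dumps(build_chat_request(messages)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Extract the assistant's reply from the first choice.
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the request and response shapes match the OpenAI API, swapping between a local llama-server and a hosted endpoint is largely a matter of changing `BASE_URL`.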

The HN comments are notably split. Many long-time llama.cpp users agreed; they had already switched away from Ollama after it diverged from upstream. Others defended Ollama, arguing that the abstraction and UX improvements justified the tradeoffs and that "just use llama.cpp directly" is gatekeeping.

What's clear is that the local AI tooling space is hitting the same VC-vs-OSS tensions that affected Redis, Elasticsearch, and MongoDB. The question of who "owns" the community around open-weight models — and who profits from the infrastructure layer above them — is becoming politically charged.

Panel Takes

The Builder

Developer Perspective

I switched from Ollama to llama.cpp's native server 6 months ago and haven't looked back. The startup time is faster, the GPU utilization is better, and I'm not waiting for Ollama to expose llama.cpp features I need. The post is directionally correct even if the tone is sharp.

The Skeptic

Reality Check

Ollama onboarded hundreds of thousands of people to local AI who never would have touched llama.cpp's command line. Some VC funding and closed-source server features don't erase that. The perfect OSS alternative that also has Ollama's UX doesn't exist yet.

The Futurist

Big Picture

The post is a symptom of a larger dynamic: every layer of local AI tooling is being VC-funded and slowly closed. LM providers, inference servers, fine-tuning tools — all following the same playbook. We'll see this fight play out repeatedly until a genuinely community-owned standard wins.