Apfel
Tap Apple's free on-device AI as a local OpenAI-compatible server
Every Apple Silicon Mac running macOS 26 Tahoe already ships with a ~3B-parameter LLM, the same model that powers Siri and Apple Intelligence. Apple just doesn't expose it to developers. Apfel is an MIT-licensed Swift CLI that unlocks it: run it as a pipe-friendly command, an interactive chat session, or a local HTTP server at localhost:11434 that is fully OpenAI SDK-compatible. Any existing codebase using the OpenAI client can point at it with a one-line config change and get free, private, offline inference: zero API keys, zero cloud, zero subscriptions.

The feature set is surprisingly complete for a developer side project. Apfel supports MCP tool/function calling, streaming JSON output, file attachments, five context-trimming strategies for the 4,096-token window, and a companion ecosystem of apps (apfel-chat, apfel-clip, apfel-gui). With 4,138 GitHub stars in under three weeks, fueled by a 513-point Hacker News thread, it is clearly filling a gap that Apple intentionally left.

The constraints are real: macOS 26 Tahoe is required, the context window caps out around 3,000 words, and a 3B-parameter model is not going to replace GPT-4 for complex reasoning. But as a privacy-preserving local LLM for scripts, quick queries, code reviews, and offline workflows, it is genuinely compelling. The underlying model is already sitting on tens of millions of machines; Apfel is just the key to a door Apple chose to leave locked.
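The "one-line config change" amounts to pointing your OpenAI client's base URL at the local server. A minimal sketch of the request shape, assuming Apfel mirrors the standard OpenAI chat-completions route under a `/v1` prefix (the model name `"apfel"` here is a placeholder, not confirmed by the project):

```python
import json
import urllib.request

# With the official OpenAI SDK, the whole change would look like:
#   client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
# Below, the same idea with only the standard library, built but not sent.
BASE_URL = "http://localhost:11434/v1"

# Standard OpenAI-compatible chat-completions payload. No real API key is
# needed, since inference runs entirely on-device.
payload = {
    "model": "apfel",  # placeholder model identifier (an assumption)
    "messages": [
        {"role": "user", "content": "Summarize: fix race condition in cache layer"}
    ],
    "stream": False,  # Apfel also supports streaming output per the summary
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(request.full_url)  # http://localhost:11434/v1/chat/completions
```

Because the endpoint speaks the same protocol as api.openai.com, everything downstream (SDK helpers, retry wrappers, streaming parsers) keeps working unchanged; only the host and key differ.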
Panel Reviews
The Builder
Developer Perspective
“If you have an M-series Mac running macOS 26, this is an immediate install — drop-in OpenAI compatibility means you can start running local inference against existing projects in literally 5 minutes. The MCP support and file attachment handling make it genuinely useful for scripted workflows, not just chat. The token limit stings, but for most dev automation tasks 3K words is plenty.”
The Skeptic
Reality Check
“Apple hasn't documented this API surface and could close it in any future OS update — you're building on sand. The 4,096-token context cap is genuinely painful in 2026 when frontier models offer 128K-1M+ tokens, and a 3B parameter model will simply fail on complex reasoning tasks where you'd actually want privacy. For casual queries the privacy angle is real; for serious workloads you'll hit the ceiling fast.”
The Futurist
Big Picture
“Apple shipped a capable on-device LLM to hundreds of millions of devices and then locked the door from developers. Apfel is the community's answer, and the 513-point HN reception suggests this is exactly what devs were waiting for. When the local AI model is free, private, and already installed, the adoption math changes — this is a preview of what happens when AI inference costs hit zero for common use cases.”
The Creator
Content & Design
“For copywriters, note-takers, and creative folks on Apple Silicon who want local AI assistance without a monthly subscription, this is a quiet win. It's not going to write your screenplay, but for draft refinement, summarizing notes, generating quick variations, or building personalized offline tools — having free, private inference on your laptop changes the calculus entirely.”
Community Sentiment
“Drop-in OpenAI compatibility and zero setup”
“Context window too small for real use cases”
“Free local AI already on your Mac”