Darkbloom

Idle Macs become a decentralized AI inference network — 70% cheaper

Darkbloom is a peer-to-peer AI inference network built on idle Apple Silicon machines. Built by the team at Eigen Labs, it routes model inference requests across a mesh of MacBooks, Mac Minis, and Mac Studios whose owners opt in as operators. Prompts are end-to-end encrypted so operators cannot read user data, and operators keep 100% of the inference fees they earn. The network exposes an OpenAI-compatible API endpoint, so switching from OpenAI or Anthropic is a one-line change. It supports popular open-weight model families (Llama, Mistral, Qwen) and claims up to 70% cost savings versus centralized cloud inference, since the underlying hardware already exists in people's homes and offices.

This is the most technically credible attempt yet at decentralized AI inference on consumer hardware. The core insight is that Apple Silicon chips have exceptional performance-per-watt and are already sitting idle in millions of homes. If the network reaches meaningful scale, it could undercut AWS/GCP inference pricing while keeping prompts private, a rare combination.
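The "one-line change" claim follows from speaking the standard OpenAI wire format: only the endpoint URL differs. A minimal stdlib-only sketch of building such a request — the Darkbloom URL, API key, and model name here are illustrative assumptions, not published values:

```python
import json
import urllib.request

# Hypothetical Darkbloom endpoint; the payload below is the standard
# OpenAI /v1/chat/completions schema, so only this URL changes.
DARKBLOOM_URL = "https://api.darkbloom.example/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3.1-8b-instruct"):
    """Build an OpenAI-format chat completion request aimed at Darkbloom."""
    payload = {
        "model": model,  # assumed model name on the network
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        DARKBLOOM_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer dbk-demo",  # placeholder key
            "Content-Type": "application/json",
        },
    )

req = build_request("Hello")
print(req.full_url)
```

With an official SDK the same swap is just passing a different `base_url` to the client constructor; the request body is untouched.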

Panel Reviews

The Builder

Developer Perspective

Ship

An OpenAI-compatible API that drops straight into my existing stack and costs 70% less? I'm already testing this. The end-to-end encryption story is compelling for privacy-sensitive workloads — finally an alternative to praying the big labs don't log your prompts.

The Skeptic

Reality Check

Skip

Latency is the killer here — routing inference through a random person's Mac in Cleveland adds unpredictable delays that centralized providers don't have. And what happens when the operator's MacBook closes its lid mid-inference? The SLA story is nonexistent right now.
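Absent an SLA from the network itself, the dropout problem has to be handled at the client layer. A toy sketch of retry-then-fallback routing — the provider calls are stand-ins, and the random failure is a simulation of an operator vanishing mid-inference, not Darkbloom's actual behavior:

```python
import random

class OperatorDropped(Exception):
    """Raised when a peer operator disappears mid-inference."""

def call_darkbloom(prompt: str) -> str:
    # Stand-in for a network call to a peer operator; fails randomly
    # to simulate a MacBook closing its lid mid-request.
    if random.random() < 0.3:
        raise OperatorDropped
    return f"darkbloom: {prompt}"

def call_fallback(prompt: str) -> str:
    # Stand-in for a centralized provider used as a backstop.
    return f"fallback: {prompt}"

def infer(prompt: str, retries: int = 2) -> str:
    """Try cheap peer inference first; fall back to a reliable provider."""
    for _ in range(retries):
        try:
            return call_darkbloom(prompt)
        except OperatorDropped:
            continue  # re-route to another operator
    return call_fallback(prompt)

print(infer("hello"))
```

The tradeoff is explicit: the fallback caps tail latency but reintroduces centralized pricing and logging for the requests that hit it.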

The Futurist

Big Picture

Ship

This is Napster for AI compute — and I mean that as a compliment. If Darkbloom cracks the reliability and routing problem, it could force AWS and GCP to dramatically cut inference prices or lose the long tail of developers entirely. The decentralized compute flywheel is finally legible.

The Creator

Content & Design

Ship

I run diffusion models locally anyway but this gives me burst capacity when my Mac is under load. Knowing my creative prompts stay encrypted and aren't training someone else's model actually matters to me — most cloud providers are vague about this.

Community Sentiment

Overall: 740 mentions (73% positive, 17% neutral, 10% negative)

Hacker News: 280 mentions (72% positive, 18% neutral, 10% negative)
Latency vs cost tradeoff debate

Reddit: 140 mentions (68% positive, 22% neutral, 10% negative)
Comparison to Bittensor and Akash

Twitter/X: 320 mentions (75% positive, 15% neutral, 10% negative)
End-to-end encryption as differentiator