Litmus

Unit tests for AI — find the cheapest model that passes your prompts

Litmus is an open-source testing framework for AI prompts: the missing unit-test layer between "it worked once" and "it works reliably across models." You define test cases (a prompt plus expected-behavior assertions), run them against multiple models simultaneously, and Litmus reports which models pass and, crucially, projects the cost difference at scale. The goal: find the cheapest model that meets your quality bar.

The workflow is intentionally simple: litmus init scaffolds a test suite, YAML test cases describe prompt inputs and assertions, and litmus run executes them against your chosen model roster. Results show pass/fail per model, inference latency, and a cost-at-scale projection (e.g., "using claude-haiku instead of opus would cost 94% less at 1M requests/day with a 97.3% pass rate"). This directly addresses one of the most expensive habits in AI development: defaulting to the most capable, and most costly, model for every task. Litmus launched with 74 GitHub stars in its first hours, an early signal of real demand. It integrates with the Anthropic, OpenAI, and Google APIs and supports custom model endpoints for local testing.
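To make the workflow concrete, here is a sketch of what a YAML test case might look like. This is an illustrative guess, not Litmus's documented schema: the field names (`name`, `models`, `cases`, `prompt`, `assertions`) and assertion types (`contains`, `not_contains`, `max_length`) are assumptions based on the workflow described above.

```yaml
# tests/summarize.yaml — hypothetical schema, for illustration only
name: summarize-support-ticket
models:                # the roster to run against
  - claude-haiku
  - claude-opus
  - gpt-4o-mini
cases:
  - prompt: |
      Summarize the following support ticket in one sentence:
      "My invoice from March shows a duplicate charge for the Pro plan."
    assertions:
      - contains: "duplicate charge"   # output must name the issue
      - not_contains: "I apologize"    # no filler apologies
      - max_length: 200                # keep summaries short
```

Under this assumed layout, litmus run would execute each case against every listed model and print the pass/fail matrix alongside latency and the cost-at-scale projection.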

Panel Reviews

The Builder

Developer Perspective

Ship

Every production AI team needs this, and most are doing it manually with spreadsheets. The cost projection feature alone is worth shipping — I've watched teams spend 10x more than necessary on inference because they never systematically tested cheaper models. This is the tooling that makes responsible model selection practical.

The Skeptic

Reality Check

Skip

The fundamental challenge with prompt testing is that assertions are hard to write well — defining 'correct' AI behavior is often subjective and context-dependent. New project with 74 stars means no battle-testing, no community-contributed assertion patterns, and no guarantee the test framework won't produce false confidence. Wait for v1.0 with real-world case studies.

The Futurist

Big Picture

Ship

Litmus represents the maturation of AI development as a discipline — the shift from 'does it work?' to 'does it work reliably, cheaply, and measurably?' This is how software engineering grew up in the 2000s, and AI is following the same path. Tools like this will be table stakes in 18 months.

The Creator

Content & Design

Ship

Brand voice consistency is one of the hardest problems in AI-assisted content creation. Litmus-style testing against creative prompts — does this output match our tone guidelines? — is something agencies and marketing teams desperately need. The model cost comparison feature makes budget conversations with clients much cleaner.

Community Sentiment

Overall (299 mentions): 70% positive, 22% neutral, 8% negative
Hacker News (74 mentions): 74% positive, 18% neutral, 8% negative

Cost projection across models is the killer feature

Reddit (95 mentions): 70% positive, 22% neutral, 8% negative

Unit tests for prompts should have existed 2 years ago

Twitter/X (130 mentions): 68% positive, 25% neutral, 7% negative

YAML-based test cases — simple and composable