Yggdrasil
Turns your CLAUDE.md rules from suggestions into enforced constraints
Yggdrasil addresses a persistent problem with AI coding agents: rules files like CLAUDE.md or .cursorrules are advisory, not enforceable. Agents ignore rules roughly 30% of the time, and violations surface only during code review, if at all. Yggdrasil turns architectural constraints into an active verification loop that runs before code reaches review.

Developers define rules in plain Markdown as 'aspects': high-level requirements like 'all payment operations must emit audit events' or 'no direct database access from the UI layer.' These capture architectural and business-logic constraints that traditional linters cannot express. When an agent generates code, it runs 'yg approve', which sends the code and the relevant rules to a reviewer LLM that checks compliance and returns specific violations. The agent fixes the issues and re-verifies, all autonomously.

Intelligent rule scoping delivers only the 3-5 rules relevant to each file rather than overwhelming the agent with the full ruleset, and CI integration via hash comparison requires no LLM calls at the gate, keeping enforcement costs low. Yggdrasil supports Cursor, Claude Code, GitHub Copilot, Cline, and RooCode, with reviewer providers including Anthropic, OpenAI, Google, and Ollama.
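To make the 'aspects' idea concrete, a rules file might look something like the following. This is a sketch only: the source says rules are plain Markdown, but the exact file layout, the scope syntax, and the file name are assumptions, not Yggdrasil's documented format.

```markdown
<!-- aspects/payments.md — hypothetical layout; exact aspect format is an assumption -->
# Aspect: Payment audit trail

Scope: src/payments/**

All payment operations must emit an audit event before returning.
A payment operation is any function that creates, captures, or refunds a charge.

# Aspect: No database access from the UI layer

Scope: src/ui/**

UI-layer code must not import database clients or execute queries directly;
all data access goes through the service layer.
```

Because the rules are prose rather than a DSL, the reviewer LLM interprets them against the generated code, which is what lets them express constraints (like "must emit an audit event") that a syntactic linter cannot check.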
Panel Reviews
The Builder
Developer Perspective
“CLAUDE.md files and .cursorrules are basically suggestions that agents ignore whenever they feel like it. Yggdrasil makes rules enforceable: the agent writes code, runs 'yg approve', gets specific violations back, fixes them, and re-verifies before the code ever reaches review. The intelligent scoping that shows agents only the 3-5 relevant rules per file instead of all 200 is the kind of practical detail that shows the builders understand how context windows actually work. CI integration via hash comparison (no LLM calls) means enforcement doesn't cost anything at the gate.”
The Skeptic
Reality Check
“The core pitch — 'rules files are just suggestions, we make them real' — is right. The implementation is another LLM-judges-LLM system, which means your architectural guardrails are only as reliable as your reviewer model's understanding of your codebase context. Writing 200 rules in plain Markdown sounds accessible until you realize that ambiguous natural language rules produce inconsistent enforcement, and debugging why 'yg approve' rejected code that looks fine requires reading LLM reasoning. Traditional static analysis and typed interfaces enforce constraints deterministically; this enforces them probabilistically.”
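The Skeptic's contrast is worth making concrete: a rule at one end of the spectrum, like 'no raw hex colors,' can be enforced deterministically in a few lines of static analysis, while a rule like 'all payment operations must emit audit events' cannot, which is exactly where the probabilistic reviewer comes in. A minimal sketch of the deterministic end; the rule and function name are illustrative, not part of Yggdrasil:

```python
import re

# Matches CSS-style hex colors like #ff0000 or #fff.
HEX_COLOR = re.compile(r"#(?:[0-9a-fA-F]{6}|[0-9a-fA-F]{3})\b")

def check_no_raw_hex(source: str) -> list[int]:
    """Return the 1-based line numbers containing a raw hex color.
    Deterministic: the same input always yields the same violations,
    with no model, no prompt, and no reasoning to debug."""
    return [
        i
        for i, line in enumerate(source.splitlines(), start=1)
        if HEX_COLOR.search(line)
    ]
```

Everything a check like this can express belongs in a traditional linter; Yggdrasil's value proposition lives in the constraints that survive only as natural language, and that is also where the enforcement becomes probabilistic.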
The Futurist
Big Picture
“As teams grow their CLAUDE.md files from 50 to 500 lines trying to wrangle agent behavior, Yggdrasil represents the next evolution: from instructional to contractual. The architecture prefigures a world where codebases have machine-enforced behavioral specifications at multiple levels — security, performance, style — that any agent (or human) must pass before merging. This is what software governance looks like when AI writes most of the code.”
The Creator
Content & Design
“For design systems work where 'all UI components must use tokens, never raw hex values' is a rule that gets violated constantly by AI agents, having an enforcement loop that catches violations before PR review would save hours of back-and-forth every week. The natural language rule definition means designers can contribute guardrails without learning a DSL.”
Community Sentiment
“Treating CLAUDE.md rules as enforceable contracts rather than suggestions is the right framing”
“LLM judging LLM rule violations is probabilistic enforcement, but better than nothing”
“Finally something that makes agent rules actually stick”