pi-autoresearch
Autonomous code optimization loop — edit, benchmark, keep or revert
pi-autoresearch extends the pi terminal agent with an autonomous optimization loop: the agent writes a change, runs a benchmark, uses Median Absolute Deviation (MAD) to filter out statistical noise, and either commits or reverts — then loops. No human in the loop. The cycle repeats until a time limit or convergence criterion is met. The technique was popularized by Karpathy's autoresearch concept for ML training, but pi-autoresearch generalizes it to any benchmarkable target.

Shopify's engineering team ran it against their Liquid template engine and reported 53% faster parse/render with 61% fewer allocations after an overnight run — changes their team had been unable to land manually in months. The MAD-based noise filtering is the key innovation: it prevents the agent from chasing benchmark noise and reverting valid improvements.

The project has spawned an ecosystem: pi-autoresearch-studio adds a visual timeline of accepted/rejected edits, openclaw-autoresearch ports the concept to Claw Code, and autoloop generalizes it to any agent that supports a run/test interface. At 3,500 stars, it's one of the most-forked pi extensions.
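To make the MAD-based accept/revert decision concrete, here is a minimal sketch in Python. This is not pi-autoresearch's actual implementation — the function names and the `k = 3` noise threshold are illustrative assumptions — but it shows the core idea: a candidate edit is kept only when its median benchmark result beats the baseline median by more than a robust noise margin.

```python
import statistics

def mad(samples):
    """Median Absolute Deviation: a robust estimate of spread
    that, unlike standard deviation, is not skewed by outlier runs."""
    med = statistics.median(samples)
    return statistics.median(abs(x - med) for x in samples)

def accept_change(baseline, candidate, k=3.0):
    """Accept a candidate edit only if its median benchmark time improves
    on the baseline median by more than k * MAD of the baseline samples.
    Lower is better (e.g. seconds per iteration). k is a tunable
    strictness knob (assumed here, not taken from the project)."""
    base_median = statistics.median(baseline)
    noise_margin = k * mad(baseline)
    return statistics.median(candidate) < base_median - noise_margin

# A real improvement clears the noise margin and is committed:
baseline = [1.00, 1.02, 0.98, 1.01, 0.99]   # seconds per run
print(accept_change(baseline, [0.80, 0.82, 0.79]))  # True

# A result within the noise band is rejected, so the edit is reverted:
print(accept_change(baseline, [0.99, 1.00, 0.98]))  # False
```

The outer loop then becomes mechanical: apply an edit, gather candidate samples, call `accept_change`, and either commit or `git revert`. Using medians and MAD rather than means and standard deviations is what keeps a single slow outlier run from triggering a spurious revert.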
Panel Reviews
The Builder
Developer Perspective
“I ran this against my GraphQL resolver layer over a weekend and got 31% latency reduction with zero manual intervention. The MAD filtering is the real innovation — previous attempts at autonomous optimization would thrash on noisy benchmarks. This one doesn't.”
The Skeptic
Reality Check
“Shopify's results are impressive, but they're also running this on a well-tested, stable codebase with comprehensive benchmarks. On a typical startup codebase with flaky tests and incomplete benchmarks, this will confidently optimize the wrong things. Benchmark quality gates the whole approach.”
The Futurist
Big Picture
“This is the earliest glimpse of AI that genuinely improves software without a human in the loop. When benchmarks exist, the agent is a better optimizer than humans — it's tireless, statistically rigorous, and immune to sunk-cost reasoning. Performance engineering as a discipline is about to change.”
The Creator
Content & Design
“The framing here is very backend/systems. I tried running it on a React component library to reduce render cycles and got a mess — the agent optimized for the benchmark at the expense of code readability. Fine for systems code, wrong tool for UI work.”
Community Sentiment
“Shopify 53% performance gain without human intervention”
“MAD noise filtering preventing benchmark-chasing”
“Karpathy autoresearch generalized beyond ML”