evalmonkey
Benchmark your AI agents under chaos — schema errors, latency spikes, 429s
evalmonkey is an open-source framework for testing how LLM agents degrade under adversarial conditions. You run your agent against 10 standard datasets (GSM8K, ARC, HellaSwag, etc.) pulled automatically from HuggingFace, then apply chaos profiles that introduce realistic failure modes: malformed JSON schemas, artificial latency spikes, 429 rate-limit errors, context-window overflow, and prompt injection payloads.

The key output is a degradation delta: evalmonkey shows you exactly how much your agent's accuracy drops under each failure type versus clean inputs. A model that scores 78% on GSM8K normally but drops to 31% when it gets a 429 mid-chain tells you something crucial about its error-recovery behavior that standard benchmarks completely miss.

evalmonkey supports OpenAI, Anthropic (via Bedrock and direct), Azure, GCP, and any Ollama-hosted model. Corbell-AI published it with a clear thesis: agents break in production for infrastructure reasons, not model reasons, and no existing benchmark tests that. The repo was created today (April 17, 2026) and sits at 3 stars, but the core idea is genuinely novel in the evals space.
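evalmonkey's interface isn't documented beyond its README, so the sketch below illustrates only the degradation-delta idea in plain Python. Every name in it (`flaky_model`, `run_agent`, `degradation_delta`) is hypothetical and unrelated to evalmonkey's actual API: a stub agent is run twice over the same task list, once clean and once with simulated 429s injected, and the reported delta is the accuracy drop.

```python
import random

class RateLimitError(Exception):
    """Simulated HTTP 429 from the model provider."""

def flaky_model(prompt, fail_rate, rng):
    """Stub model call: raises a simulated 429 with probability fail_rate."""
    if rng.random() < fail_rate:
        raise RateLimitError("429 Too Many Requests")
    return "correct"  # the stub always answers correctly when it answers at all

def run_agent(tasks, fail_rate, rng):
    """Naive agent with no retry logic: an unhandled 429 becomes a wrong answer."""
    correct = 0
    for task in tasks:
        try:
            if flaky_model(task, fail_rate, rng) == "correct":
                correct += 1
        except RateLimitError:
            pass  # agent gives up on this task -> scored as incorrect
    return correct / len(tasks)

def degradation_delta(tasks, chaos_fail_rate, seed=0):
    """Accuracy on clean inputs minus accuracy under the chaos profile."""
    clean = run_agent(tasks, fail_rate=0.0, rng=random.Random(seed))
    chaos = run_agent(tasks, fail_rate=chaos_fail_rate, rng=random.Random(seed))
    return clean - chaos

if __name__ == "__main__":
    tasks = [f"task-{i}" for i in range(200)]
    delta = degradation_delta(tasks, chaos_fail_rate=0.4)
    print(f"degradation delta: {delta:.2f}")
```

An agent with retry or backoff logic would shrink the delta; that gap between a naive and a resilient agent is exactly the signal a clean-input benchmark can't surface.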
Panel Reviews
The Builder
Developer Perspective
“Every engineer who's deployed an agent in production knows models fail catastrophically when the API starts rate-limiting mid-chain. evalmonkey is the first tool I've seen that actually lets you reproduce and measure that. The degradation delta report alone is worth the setup time.”
The Skeptic
Reality Check
“It's a brand new repo with 3 stars and no documentation beyond the README. The chaos profiles themselves are hardcoded — you can't simulate the specific failure patterns your infra produces. Useful concept, but wait for it to mature before relying on it for production decision-making.”
The Futurist
Big Picture
“Chaos engineering for AI agents is a missing layer in the entire reliability stack. As agents handle higher-stakes tasks, chaos benchmarking will move from 'interesting experiment' to 'required before deployment.' evalmonkey is establishing the vocabulary for that discipline right now.”
The Creator
Content & Design
“Too dev-focused for my immediate use, but if I'm running an agent that manages my publishing schedule, knowing it won't break when Anthropic throttles me at 2am is genuinely valuable. I'd want a managed version with a dashboard before adopting this.”
Community Sentiment
“Novelty of chaos engineering for LLM agents”
“Degradation delta methodology”
“Production reliability gap in current evals”