agent-cache
One Redis/Valkey connection to cache your LLM calls, tool results, and agent sessions
@betterdb/agent-cache is a Node.js package that unifies three distinct caching concerns for AI agent stacks behind a single connection to Valkey or Redis: LLM response caching (deduplication of repeated API calls), tool result caching (memoization of function outputs), and session state caching (persistent agent memory across requests). Before this, teams typically maintained separate caching layers for each concern, often locked into different frameworks.

The package ships framework adapters for LangChain, LangGraph, and the Vercel AI SDK, with OpenTelemetry and Prometheus metrics built in. Version 0.2.0 adds Redis Cluster support; streaming response caching is on the roadmap. The design is intentionally agnostic: you can cache only LLM calls, only tool results, or all three, depending on your stack.

The practical benefit is cost reduction: repeated LLM calls with identical or semantically similar prompts are a major source of avoidable API spend, especially in agent loops that retry failed tool calls. The current LLM cache matches on exact keys only; semantic similarity matching is on the maintainer's roadmap and would make the package significantly more powerful for production workloads, where prompts often vary slightly without changing meaning.
Panel Reviews
The Builder
Developer Perspective
“Managing three separate caching layers — one for LLM calls, one for tool outputs, one for session state — is a real tax on agent infrastructure maintainability. A unified abstraction with Valkey/Redis (which you likely already have) and OTel metrics baked in is an easy yes. The LangChain and Vercel AI SDK adapters mean minimal integration friction.”
The Skeptic
Reality Check
“v0.2.0 is early software with sparse docs and a small adoption base. The LLM response cache uses exact key matching currently — semantic caching is just a roadmap item. Without semantic matching, you miss most real-world cache hits where prompts vary slightly. Come back when that's shipped and the production track record is established.”
The Futurist
Big Picture
“As agent loops run more frequently and API costs scale with usage, systematic caching becomes infrastructure, not optimization. The right abstraction at the right time — unified caching with existing Redis infrastructure — positions this to become a standard layer. The semantic cache feature, once shipped, is when this becomes genuinely important.”
The Creator
Content & Design
“For creators and non-infrastructure developers, this is firmly in the 'your backend team installs this' category. The practical benefit is cheaper API bills — which matters — but there's nothing here to interact with directly. Useful but invisible.”
Community Sentiment
“Lack of semantic similarity matching in current LLM cache”
“Comparison with LangChain's built-in caching utilities”
“Valkey support as a Redis OSS alternative”