agent-cache

One Redis/Valkey connection to cache your LLM calls, tool results, and agent sessions

@betterdb/agent-cache is a Node.js package that unifies three distinct caching concerns for AI agent stacks behind a single connection to Valkey or Redis: LLM response caching (deduplication of repeated API calls), tool result caching (memoization of function outputs), and session state caching (persistent agent memory across requests). Before this, teams typically maintained separate caching layers for each concern, often locked into different frameworks. The package ships framework adapters for LangChain, LangGraph, and Vercel AI SDK, with OpenTelemetry and Prometheus metrics built in. Version 0.2.0 adds Redis Cluster support; streaming response caching is on the roadmap.

The design is intentionally agnostic: you can cache only LLM calls, only tool results, or all three, depending on your stack. The practical benefit is cost reduction: repeated LLM calls with identical or semantically similar prompts are a major source of avoidable API spend, especially in agent loops that retry failed tool calls. Semantic similarity matching for LLM cache hits (rather than the current exact key matching) is on the maintainer's roadmap, which would make the package significantly more powerful for production workloads.
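The package's actual API surface isn't documented in this review, so as an illustration of the three concerns it unifies, here is a minimal TypeScript sketch with an in-memory Map standing in for the single Redis/Valkey connection. All function and key names here are hypothetical, not the package's API.

```typescript
import { createHash } from "node:crypto";

// In-memory Map standing in for a Redis/Valkey client; the real package
// would issue GET/SET over one shared connection instead.
const store = new Map<string, string>();

// 1. LLM response caching: exact-key dedup of identical (model, prompt) pairs.
function llmCacheKey(model: string, prompt: string): string {
  return "llm:" + createHash("sha256").update(model + "\0" + prompt).digest("hex");
}

async function cachedCompletion(
  model: string,
  prompt: string,
  call: () => Promise<string>,
): Promise<string> {
  const key = llmCacheKey(model, prompt);
  const hit = store.get(key);
  if (hit !== undefined) return hit; // cache hit: no paid API call
  const result = await call();
  store.set(key, result);
  return result;
}

// 2. Tool result caching: memoize a function by name + serialized arguments.
async function cachedTool<T>(
  name: string,
  args: unknown,
  fn: () => Promise<T>,
): Promise<T> {
  const argHash = createHash("sha256").update(JSON.stringify(args)).digest("hex");
  const key = `tool:${name}:${argHash}`;
  const hit = store.get(key);
  if (hit !== undefined) return JSON.parse(hit) as T;
  const result = await fn();
  store.set(key, JSON.stringify(result));
  return result;
}

// 3. Session state caching: persist agent memory under a session id.
function saveSession(id: string, state: object): void {
  store.set("session:" + id, JSON.stringify(state));
}

function loadSession(id: string): object | null {
  const raw = store.get("session:" + id);
  return raw === undefined ? null : JSON.parse(raw);
}
```

The point of the unified design is visible even in this toy version: all three concerns reduce to key construction plus GET/SET against one store, so one connection (and one metrics pipeline) can serve them all.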

Panel Reviews

The Builder

Developer Perspective

Ship

Managing three separate caching layers — one for LLM calls, one for tool outputs, one for session state — is a real tax on agent infrastructure maintainability. A unified abstraction with Valkey/Redis (which you likely already have) and OTel metrics baked in is an easy yes. The LangChain and Vercel AI SDK adapters mean minimal integration friction.
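The review doesn't list the metric names the package actually exports, but the kind of Prometheus observability the Builder values can be sketched with a hand-rolled counter in the text exposition format. The metric name `agent_cache_requests_total` and its labels are hypothetical, not the package's real metrics.

```typescript
// Minimal Prometheus-style counter with labeled series, rendered in the
// text exposition format that a /metrics endpoint would serve.
class Counter {
  private counts = new Map<string, number>();
  constructor(public name: string, public help: string) {}

  inc(labels: Record<string, string>): void {
    const key = JSON.stringify(labels);
    this.counts.set(key, (this.counts.get(key) ?? 0) + 1);
  }

  expose(): string {
    const lines = [
      `# HELP ${this.name} ${this.help}`,
      `# TYPE ${this.name} counter`,
    ];
    for (const [key, value] of this.counts) {
      const labels = Object.entries(JSON.parse(key) as Record<string, string>)
        .map(([k, v]) => `${k}="${v}"`)
        .join(",");
      lines.push(`${this.name}{${labels}} ${value}`);
    }
    return lines.join("\n");
  }
}
```

Labeling by cache layer (`llm`, `tool`, `session`) and outcome (`hit`, `miss`) is what makes a unified cache observable as one system: hit rates per layer fall out of a single counter family instead of three disjoint dashboards.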

The Skeptic

Reality Check

Skip

v0.2.0 is early software with sparse docs and a small adoption base. The LLM response cache uses exact key matching currently — semantic caching is just a roadmap item. Without semantic matching, you miss most real-world cache hits where prompts vary slightly. Come back when that's shipped and the production track record is established.
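The Skeptic's objection is concrete: with exact key matching, any byte-level variation in the prompt defeats the cache. This small sketch (not the package's code) shows why, and why crude string normalization only recovers trivial variants; a semantic cache would embed prompts and match on vector similarity, catching paraphrases no normalization can.

```typescript
import { createHash } from "node:crypto";

// Exact-key caching hashes the raw prompt string, so trivially
// different prompts produce different keys and separate (paid) calls.
const keyFor = (prompt: string) =>
  createHash("sha256").update(prompt).digest("hex");

const k1 = keyFor("Summarize the Q3 report.");
const k2 = keyFor("Summarize the Q3 report"); // no trailing period: miss
const k3 = keyFor("summarize the Q3 report."); // different casing: miss

// Lexical normalization (trim, lowercase, strip trailing punctuation)
// merges these trivial variants, but it cannot merge paraphrases like
// "Condense the Q3 report." — that requires embedding-based matching.
const normalizedKey = (prompt: string) =>
  keyFor(prompt.trim().toLowerCase().replace(/[.?!\s]+$/, ""));
```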

The Futurist

Big Picture

Ship

As agent loops run more frequently and API costs scale with usage, systematic caching becomes infrastructure, not optimization. The right abstraction at the right time — unified caching with existing Redis infrastructure — positions this to become a standard layer. The semantic cache feature, once shipped, is when this becomes genuinely important.

The Creator

Content & Design

Skip

For creators and non-infrastructure developers, this is firmly in the 'your backend team installs this' category. The practical benefit is cheaper API bills — which matters — but there's nothing here to interact with directly. Useful but invisible.

Community Sentiment

Overall: 67 mentions (56% positive, 32% neutral, 12% negative)

Hacker News: 7 mentions (55% positive, 35% neutral, 10% negative)

Lack of semantic similarity matching in current LLM cache

Reddit: 20 mentions (52% positive, 36% neutral, 12% negative)

Comparison with LangChain's built-in caching utilities

Twitter/X: 40 mentions (58% positive, 30% neutral, 12% negative)

Valkey support as a Redis OSS alternative