Edgee Codex Compressor
Lossless token compression that extends your Claude Code context by ~30%
Edgee Codex Compressor is an open-source, Rust-based AI gateway that sits between your coding agent (Claude Code, OpenAI Codex, or any LLM client) and the API. It losslessly compresses tool-call results, file reads, shell outputs, and other large context payloads before they hit Anthropic's or OpenAI's token counters, extending your effective context window by an average of 26-35% without changing any outputs.

The core insight is that most of what fills context windows in coding agents is repetitive: boilerplate file content, repeated error messages, verbose JSON responses, and tool output that could be summarized without information loss. Edgee intercepts these at the gateway level, applies a combination of deduplication, semantic compression, and caching, then decompresses before passing to the model, so the LLM sees full-fidelity content.

For developers regularly hitting Claude Code Pro session limits, this is a practical workaround. No code changes and no API-key swapping: just point your coding client at the local Edgee proxy. The full source is on GitHub under the Edgee organization (the same team that builds Edgee, the analytics and CDN privacy gateway).
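The deduplication stage described above can be sketched as a lossless round trip. This is a minimal illustration, not Edgee's actual implementation: it replaces repeated lines of tool output with back-references on the compressed side and restores them exactly on the way back, so decompression reproduces the input byte for byte. All function names here are illustrative assumptions.

```rust
use std::collections::HashMap;

// Sketch of line-level dedup: the first occurrence of a line is kept
// verbatim; later duplicates become a back-reference token ("\x00REF:<index>").
fn compress(input: &str) -> Vec<String> {
    let mut seen: HashMap<&str, usize> = HashMap::new();
    let mut out = Vec::new();
    for (i, line) in input.lines().enumerate() {
        match seen.get(line) {
            // Duplicate line: emit a short reference instead of the text.
            Some(&idx) => out.push(format!("\x00REF:{idx}")),
            None => {
                seen.insert(line, i);
                out.push(line.to_string());
            }
        }
    }
    out
}

// Decompression resolves each back-reference against the lines already
// reconstructed, so the output is identical to the original input.
fn decompress(tokens: &[String]) -> String {
    let mut lines: Vec<String> = Vec::new();
    for t in tokens {
        if let Some(idx) = t.strip_prefix("\x00REF:") {
            let idx: usize = idx.parse().expect("valid back-reference index");
            lines.push(lines[idx].clone());
        } else {
            lines.push(t.clone());
        }
    }
    lines.join("\n")
}

fn main() {
    // Repetitive shell output of the kind that fills agent context windows.
    let log = "error: missing semicolon\nwarning: unused var\nerror: missing semicolon\nerror: missing semicolon";
    let packed = compress(log);
    // The round trip is lossless: the model-facing side sees full fidelity.
    assert_eq!(decompress(&packed), log);
    println!("lines in: {}, stored verbatim: {}",
        log.lines().count(),
        packed.iter().filter(|t| !t.starts_with('\u{0}')).count());
}
```

In a real deployment the savings come from the client-to-API direction: you route traffic through the proxy (for Claude Code, typically by pointing a base-URL setting such as the ANTHROPIC_BASE_URL environment variable at the local Edgee endpoint) and the gateway applies this kind of dedup, plus semantic compression and caching, transparently.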
Panel Reviews
The Builder
Developer Perspective
“Any tool that gives me 30% more context for free is worth running. A local Rust proxy adds minimal latency and the implementation is auditable — I can verify it's actually lossless. If the compression holds up on larger codebases this is an immediate install for me.”
The Skeptic
Reality Check
“'Lossless' semantic compression is a contradiction in terms — any summarization involves decisions about what's important. Running all your API traffic through a third-party proxy also raises data handling questions. The GitHub repo is young and I'd want a full audit before trusting it with proprietary code.”
The Futurist
Big Picture
“Token efficiency layers between clients and APIs are an inevitable part of the AI infrastructure stack. Edgee is building in the right place — the gateway, not the model or the client. As context windows grow, intelligent compression becomes more valuable, not less.”
The Creator
Content & Design
“Unless you're running coding agents, there's little here: token compression doesn't map to creative workflows, where you want the model to see the full richness of your prompts. For most content work, the complexity of running a local proxy outweighs the marginal gains.”
Community Sentiment
“Claude Code context limit workaround”
“Token compression lossless claims”
“26-35% effective context extension”