Rubber Duck

A second AI model reviews your Copilot agent's plan before it ships code

Rubber Duck is a new capability in the GitHub Copilot CLI agent workflow that introduces cross-model code review. When Copilot's primary agent generates a plan or implementation, Rubber Duck routes that output to a second AI model from a different provider family for an independent review, catching architectural mistakes, edge cases, and logic errors before any code is committed.

The name is a nod to rubber duck debugging, but the mechanism is closer to adversarial collaboration: the reviewing model has no stake in the primary model's plan and no context about why certain decisions were made. It approaches the output fresh, and a model that didn't generate a plan is much better at finding its flaws than the model that created it.

This is a meaningful shift in how AI-assisted development works. Most AI coding tools use a single model throughout the entire workflow. Rubber Duck introduces model diversity as a quality-control mechanism, acknowledging that no single AI has perfect judgment and that cross-checking is standard practice in human code review for good reason. It's available now as part of GitHub Copilot CLI.
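GitHub hasn't published implementation details, but the pattern described above can be sketched in a few lines. This is a conceptual illustration only: the `cross_model_review`, `Review`, and `stub_reviewer` names are hypothetical, and the stub stands in for a real second model from a different provider family.

```python
# Conceptual sketch of cross-model review. All names here are
# hypothetical -- this is NOT the actual Copilot CLI implementation.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Review:
    approved: bool
    findings: list[str]


def cross_model_review(plan: str, reviewer: Callable[[str], Review]) -> Review:
    """Route the primary agent's plan to an independent reviewer model.

    The reviewer receives only the plan text -- no chat history and no
    rationale for the decisions -- so it approaches the output fresh.
    """
    return reviewer(plan)


def stub_reviewer(plan: str) -> Review:
    """Stand-in for a second model from a different provider family."""
    findings = []
    if "delete" in plan and "backup" not in plan:
        findings.append("Plan deletes data without a backup step.")
    return Review(approved=not findings, findings=findings)


result = cross_model_review("delete old migration files", stub_reviewer)
print(result.approved, result.findings)
```

The key design point is the second argument: because the reviewer is just a callable, the primary and reviewing models can come from entirely different providers, which is where the "genuinely different failure modes" the panel discusses below would come from.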

Panel Reviews

The Builder

Developer Perspective

Ship

The insight here is sharp: models are worst at finding their own mistakes. Using a second model as an independent reviewer is the right call, and it mirrors how good human code review actually works. I want to know which model pairs GitHub is using — the quality of the adversarial check will depend heavily on choosing models with genuinely different failure modes.

The Skeptic

Reality Check

Skip

This doubles your inference cost for every agentic operation, and GitHub hasn't published latency numbers. If the cross-model review adds 10-15 seconds to every agent step, most developers will disable it within a week. The key tradeoff is catch rate versus latency overhead, and it hasn't been benchmarked publicly yet.

The Futurist

Big Picture

Ship

Model ensembling for quality control is the obvious next step in agentic AI workflows, and GitHub shipping it in Copilot normalizes the pattern. In two years, single-model agent pipelines will feel as naive as shipping code without CI. Rubber Duck is the CI layer for agentic code generation.

The Creator

Content & Design

Ship

Honestly, I'd love this for writing. Having a second AI with a completely different perspective review a draft before it goes out catches things the primary model is blind to — that's just good editing practice. The name 'Rubber Duck' is perfectly chosen; it captures the spirit of the feature better than any technical description could.

Community Sentiment

Overall: 665 mentions
73% positive · 19% neutral · 8% negative

Hacker News: 145 mentions
74% positive · 18% neutral · 8% negative

Latency and cost overhead of cross-model review

Reddit: 200 mentions
70% positive · 22% neutral · 8% negative

Which model pairs GitHub is using for the adversarial review

Twitter/X: 320 mentions
75% positive · 18% neutral · 7% negative

Model diversity as a QA mechanism for AI agents