Cohere Transcribe
#1 open-source ASR model — 5.42% WER, beats Whisper Large v3
Cohere Transcribe (cohere-transcribe-03-2026) is a 2B-parameter automatic speech recognition model released under Apache 2.0. It uses a Conformer-based encoder–decoder architecture with more than 90% of its parameters in the encoder, keeping autoregressive decode compute minimal while delivering state-of-the-art accuracy.

On the Hugging Face Open ASR Leaderboard it achieves a 5.42% average word error rate, placing #1 overall ahead of Whisper Large v3, ElevenLabs Scribe v2, and Qwen3-ASR-1.7B. It supports 14 languages, including English, German, French, Arabic, Chinese, Japanese, and Korean, and processes audio up to 3x faster (lower real-time factor) than comparable dedicated ASR models in its size range.

The model is available for download on Hugging Face and through Cohere's commercial API. For enterprise deployments it can run fully on-premise under its permissive license, a significant differentiator from hosted ASR services such as OpenAI's Whisper API or ElevenLabs Scribe.
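Real-time factor (RTF) is the metric behind the speed claim: processing time divided by audio duration, where lower is faster and RTF below 1.0 means faster than real time. A minimal sketch of the computation (the helper function below is illustrative, not part of Cohere's tooling):

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """Return the real-time factor: processing time / audio duration.

    RTF < 1.0 means transcription runs faster than real time; a model
    with one third the RTF of another is a 3x speedup on the same audio.
    """
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return processing_seconds / audio_seconds


# Example: 60 s of audio transcribed in 6 s -> RTF = 0.1
print(real_time_factor(6.0, 60.0))
```

Comparing RTF across models is only meaningful on the same hardware and audio set, since both batch size and accelerator type shift the number substantially.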
Panel Reviews
The Builder
Developer Perspective
“A 2B-param model that beats everything on the ASR leaderboard, Apache 2.0 licensed, running 3x faster than comparable models — this is the new default for speech integration. I'm ripping out the Whisper pipeline this week and not looking back.”
The Skeptic
Reality Check
“SOTA leaderboard performance doesn't always translate to production resilience. Whisper has years of community testing, edge case handling, and tooling built around it. Cohere Transcribe is impressive on benchmarks, but run it against your actual data distribution — accents, noise, domain vocab — before committing to a migration.”
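The evaluation the Skeptic suggests comes down to measuring word error rate on your own recordings: the edit distance between a reference transcript and the model's hypothesis, divided by the reference length. A self-contained sketch (this simple word-level Levenshtein implementation is illustrative, not Cohere's or the leaderboard's scoring code, which also applies text normalization):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)


ref = "the quick brown fox jumps over the lazy dog"
hyp = "the quick brown fox jumped over a lazy dog"
print(f"WER: {word_error_rate(ref, hyp):.2%}")  # two substitutions over 9 words
```

Run this over a few hours of your own audio, with your accents, background noise, and domain vocabulary, and compare the result against your current Whisper pipeline before migrating.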
The Futurist
Big Picture
“The open-sourcing of a frontier ASR model by an enterprise AI company signals that the commoditization of speech recognition is complete: accurate transcription is now a commodity, and the value moves entirely to what you build above the transcript layer. Voice interfaces just got dramatically cheaper to bootstrap.”
The Creator
Content & Design
“Finally a transcription model I can run locally at SOTA quality. For podcast editing, video captioning, and multilingual content workflows, this hits every requirement: accuracy, speed, multilingual support, and the ability to run completely offline without paying per-minute fees.”
Community Sentiment
“Bye Whisper, it was nice knowing you”
“5.42% WER is legitimately impressive for an open-source 2B model”
“#1 on Open ASR Leaderboard AND Apache 2.0 — the dream”