HY-OmniWeaving

Hunyuan video gen with a thinking mode that reasons before it renders

HY-OmniWeaving is Tencent Hunyuan's latest open-source video generation model, building on the HunyuanVideo-1.5 architecture. What sets it apart from other video gen models is a "thinking mode": before generating any frames, a multimodal language model reasons over the user's intent, decomposes the prompt into scene structure, subject interactions, and timing, then passes that structured plan to the video decoder. The result is better multi-subject compositions and more intentional motion.

The model supports text-to-video, image-to-video, keyframe interpolation, video editing, and multi-subject composition using up to four reference images. That last feature is particularly notable: you can feed it photos of four different characters or objects and generate videos that include all of them together, with consistent style and spatial relationships across frames.

All weights and code are released as open source. For indie filmmakers, game studios, or any builder working on generative video pipelines, OmniWeaving offers capabilities that were previously locked behind proprietary APIs, now running on your own infra.
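To make the two-stage flow concrete, here is a minimal Python sketch of "reason, then render." Every name in it (`ScenePlan`, `think`, `render`) is hypothetical and invented for illustration; the actual HY-OmniWeaving API and plan schema are not documented here and will differ. The point is only the shape of the pipeline: the decoder consumes a structured plan, not the raw prompt.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "think then render" pipeline described above.
# None of these names come from the HY-OmniWeaving codebase.

@dataclass
class ScenePlan:
    subjects: list                 # entities the video must contain
    interactions: list             # how subjects relate to each other
    timing: dict = field(default_factory=dict)  # rough schedule per beat

def think(prompt: str, reference_images: list = None) -> ScenePlan:
    """Stage 1 (stand-in): the real model uses a multimodal LM to reason
    over the prompt (and up to four reference images) and emit a plan.
    Here we fake it with a trivial capitalized-word heuristic."""
    if reference_images and len(reference_images) > 4:
        raise ValueError("multi-subject composition supports up to 4 references")
    subjects = [w for w in prompt.split() if w.istitle()]
    return ScenePlan(subjects=subjects, interactions=[], timing={"full": (0.0, 5.0)})

def render(plan: ScenePlan, num_frames: int = 8) -> list:
    """Stage 2 (stand-in): the video decoder consumes the structured plan
    rather than raw text. Each 'frame' here is just a placeholder dict."""
    return [{"frame": i, "subjects": plan.subjects} for i in range(num_frames)]

plan = think("Alice hands Bob a lantern at dusk")
frames = render(plan, num_frames=4)
```

The claimed benefit of this split is that errors surface in the plan, where they are cheap to inspect and correct, instead of only appearing after frames are rendered.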

Panel Reviews

The Builder

Developer Perspective

Ship

The thinking mode is the right architecture for video gen — composing from structured intent rather than raw text means fewer garbage-in-garbage-out outputs. The multi-reference-image support finally makes it practical to generate content with consistent characters. Ship it.

The Skeptic

Reality Check

Skip

The thinking mode adds latency that isn't broken down in the benchmarks, and Tencent's results are measured against its own prior models rather than Sora or Veo 3. Wait for community benchmarks on real hardware before committing to it in a production pipeline.

The Futurist

Big Picture

Ship

Reasoning before rendering is the correct design pattern for controllable video generation. The industry has been brute-forcing this with bigger models; OmniWeaving's approach points toward video gen that's actually steerable, which matters far more than raw quality at this stage.

The Creator

Content & Design

Ship

Four-reference-image multi-subject composition is a huge unlock for small studios creating character-consistent content. The thinking mode gives you more control over timing and spatial layout than anything else in the open-source space right now. This goes in my pipeline.

Community Sentiment

Overall: 830 mentions (70% positive, 21% neutral, 9% negative)

Hacker News: 160 mentions (67% positive, 22% neutral, 11% negative)

Thinking mode before video generation

Reddit: 280 mentions (72% positive, 20% neutral, 8% negative)

4-reference-image multi-subject support

Twitter/X: 390 mentions (70% positive, 21% neutral, 9% negative)

Open source HunyuanVideo follow-up