LFM2.5-VL

450M vision-language model that runs in under 250ms on edge hardware

Liquid AI just shipped LFM2.5-VL, a 450M-parameter vision-language model engineered from the ground up for edge deployment. Unlike most VLMs, which need a beefy GPU in the cloud, LFM2.5-VL targets devices like the Snapdragon 8 Elite, NVIDIA Jetson Orin, and AMD Ryzen AI, hitting sub-250ms latency on-device without any cloud round-trip.

The model builds significantly on its predecessor with four new capabilities: bounding box prediction (81.28 on RefCOCO-M), multilingual support across 8 languages, function calling, and improved instruction following. Those aren't just benchmark checkboxes: bounding box prediction means you can run visual grounding and object detection pipelines on a phone or robot without any server involvement.

Liquid AI is the MIT-spun startup behind Liquid Foundation Models (LFMs), a non-Transformer architecture that delivers competitive performance at a fraction of the memory footprint. LFM2.5-VL is available free on HuggingFace and through Liquid's LEAP inference platform. For builders targeting on-device AI (robotics, mobile, embedded), this is one of the most practical releases of the month.
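Since the weights are free on HuggingFace, the quickest way to kick the tires is the standard transformers image-text-to-text path. Below is a minimal sketch assuming LFM2.5-VL follows that API; the repo id LiquidAI/LFM2.5-VL-450M is a guess, so check Liquid AI's HuggingFace org for the actual name and any model-specific processor arguments.

```python
# Minimal inference sketch, assuming LFM2.5-VL follows the standard
# Hugging Face image-text-to-text API. The repo id is an assumption.
from transformers import AutoModelForImageTextToText, AutoProcessor
from PIL import Image

MODEL_ID = "LiquidAI/LFM2.5-VL-450M"  # hypothetical repo id

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(MODEL_ID)

image = Image.open("scene.jpg")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Return a bounding box for the red toolbox."},
        ],
    }
]

# Build model inputs from the chat template, then generate a short answer.
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```

For the sub-250ms numbers you would presumably move from vanilla generate() to a device runtime such as LEAP, but the prompt structure should carry over.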

Panel Reviews

The Builder

Developer Perspective

Verdict: Ship

Sub-250ms on-device vision with function calling is the unlock for a huge class of apps that couldn't tolerate cloud latency — real-time AR overlays, offline field inspection, privacy-sensitive medical imaging. The bounding box support is icing; ship this.
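The function-calling piece is worth making concrete. Below is a minimal sketch of the kind of offline tool loop the Builder describes: the app exposes a tool, the model replies with a structured call, and a local dispatcher executes it with no network hop. The tag_image tool, the JSON call format, and the prompt convention are all illustrative assumptions, not LFM2.5-VL's documented protocol.

```python
# Sketch of an offline tool loop, assuming the model emits tool calls as
# JSON objects. Tool schema and formats below are illustrative assumptions.
import json

# One tool the app exposes to the model (hypothetical example). In practice
# a serialized description of TOOLS would be prepended to the conversation.
TOOLS = [
    {
        "name": "tag_image",
        "description": "Attach a label and bounding box to the current frame.",
        "parameters": {
            "label": "string",
            "box": "list of four ints: x1, y1, x2, y2",
        },
    }
]

def dispatch(call: dict) -> str:
    """Route a parsed tool call to a local handler; no network involved."""
    if call.get("name") not in {t["name"] for t in TOOLS}:
        return "unknown tool"
    args = call.get("arguments", {})
    # A real app would draw into the AR overlay or write to a local DB.
    return f"tagged {args.get('label')} at {args.get('box')}"

def handle_model_output(text: str) -> str:
    """Execute a JSON tool call if the model emitted one; else pass text through."""
    try:
        call = json.loads(text)
    except json.JSONDecodeError:
        return text  # plain answer, no tool use
    return dispatch(call)

# Simulated model output; in practice this comes from model.generate()
# as in the inference sketch above.
print(handle_model_output(
    '{"name": "tag_image", "arguments": {"label": "toolbox", "box": [40, 60, 220, 310]}}'
))
```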

The Skeptic

Reality Check

Verdict: Skip

450M parameters with 8-language support and benchmark-leading vision grounding sounds great until you try to fine-tune it for a domain-specific task. The LEAP platform is still invite-only and the open weights lack fine-tuning docs. Worth watching but not shipping to prod yet.

The Futurist

Big Picture

Verdict: Ship

The race to run capable VLMs on-device is the precursor to AI-native hardware. Liquid's non-Transformer architecture shows that efficiency can come from the model design itself rather than from quantization and its accuracy trade-offs. This is what AI hardware of 2028 will be built around.

The Creator

Content & Design

Verdict: Ship

On-device vision that can call functions means camera-native apps that don't phone home. Think real-time style transfer, offline image tagging, or AR creative tools that actually work on a plane. The creator tooling implications are underrated.

Community Sentiment

Overall: 640 mentions (72% positive, 20% neutral, 8% negative)

Hacker News: 180 mentions (72% positive, 20% neutral, 8% negative)
Top topic: Sub-250ms latency on Snapdragon 8 Elite

Reddit: 140 mentions (68% positive, 22% neutral, 10% negative)
Top topic: Bounding box prediction on-device

Twitter/X: 320 mentions (74% positive, 18% neutral, 8% negative)
Top topic: Liquid AI non-Transformer architecture