lmscan


Offline AI text detector that fingerprints which LLM family wrote a given text

Most AI text detectors are cloud services with opaque models, significant false-positive rates, and no explanation of why they flagged content. lmscan is a zero-dependency Python package that runs entirely offline, scoring text on 12 statistical linguistic features: perplexity, burstiness, vocabulary density, syntactic variety, and others. It goes beyond detection to fingerprint the specific LLM family responsible, distinguishing GPT-4, Claude, Gemini, Llama, and Mistral outputs by their characteristic writing signatures.

The design philosophy is explicitly anti-black-box: every classification comes with a feature-by-feature breakdown showing which features drove the result, making it suitable for applications where you must explain a decision to a human (academic integrity, content moderation, employment screening). The CLI drops into CI/CD pipelines for automated content checking, and the Python API integrates into document-processing workflows. No API key, no network calls, no vendor lock-in.

It is a very early project, with minimal stars and community traction as of this writing. The statistical approach trades accuracy for explainability, which means sufficiently paraphrased AI text will evade detection just as it does on competing services. But as a free, fully offline, explainable baseline for AI text analysis, it occupies a niche that no established tool fills cleanly. Worth monitoring for teams that need local, auditable AI detection without vendor dependency.
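lmscan's own implementations aren't reproduced here, but two of the listed features, burstiness and vocabulary density, can be sketched in dependency-free Python to show what a feature-by-feature breakdown looks like. Everything below (function names, the sentence splitter, the report shape) is illustrative and is not lmscan's actual API:

```python
import math
import re

def sentence_lengths(text):
    # Crude sentence split on terminal punctuation; dependency-free.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    """Coefficient of variation of sentence length.

    Human writing tends to vary sentence length more than LLM output,
    so a low score weakly suggests machine generation.
    """
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    var = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(var) / mean if mean else 0.0

def vocabulary_density(text):
    """Type-token ratio: unique words over total words."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

sample = ("Short sentence. Then a much longer, rambling sentence that wanders "
          "through several clauses before it finally stops. Tiny one.")
# An explainable result is just the raw features, not only a verdict.
report = {
    "burstiness": round(burstiness(sample), 3),
    "vocabulary_density": round(vocabulary_density(sample), 3),
}
print(report)
```

A real classifier would combine a dozen such features and compare them against per-model baselines; the point of the sketch is that each number in the report can be shown to a human alongside the verdict.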

Panel Reviews

The Builder


Developer Perspective

Ship

The zero-dependency, fully offline angle makes this immediately viable for enterprise environments where you can't send content to a third-party API for compliance reasons. The LLM fingerprinting feature is genuinely novel — I haven't seen another tool that tries to attribute text to specific model families. Early days, but the CI/CD integration and explainable output make it worth piloting for document pipelines where you need auditable AI detection.

The Skeptic


Reality Check

Skip

Statistical AI text detection is a fundamentally broken approach — anyone who rewrites AI output a couple of times will evade it, and false positive rates on certain human writing styles (non-native English speakers, highly technical prose) can be significant. The LLM fingerprinting claim sounds exciting but needs rigorous benchmark testing before I'd trust it in a real content moderation or academic integrity context. Ship it when there's an accuracy paper.

The Futurist


Big Picture

Ship

As AI-generated content saturates every channel, the tools for detecting and attributing it become infrastructure, not just features. lmscan's offline, explainable approach points toward the right architecture: detection capability should be embeddable and auditable, not locked behind API calls. The specific LLM attribution angle — figuring out which model family produced text — will become increasingly important for provenance tracking and regulatory compliance.

The Creator


Content & Design

Skip

If you're a creator who worries about AI-generated content flooding your niche or competitors using AI to impersonate your style, this is theoretically relevant. But the accuracy question is real — statistical detection won't catch polished AI content, and false positives could flag your own work. Interesting concept that needs a lot more development before it's trustworthy for real editorial decisions.

Community Sentiment

Overall: 85 mentions (52% positive, 32% neutral, 16% negative)

Hacker News: 25 mentions (55% positive, 30% neutral, 15% negative)
Explainable output and LLM fingerprinting

Reddit: 20 mentions (50% positive, 30% neutral, 20% negative)
Statistical detection limitations

Twitter/X: 40 mentions (52% positive, 33% neutral, 15% negative)
Offline AI detector without vendor dependency