Linux Kernel Mailing List / Hacker News · Policy · 2026-04-11

Linux Kernel Maintainers Publish Official AI Patch Policy — Open Source's First Major AI Governance Moment

The Linux kernel project has published official maintainer guidance on AI-assisted patch submissions — the first formal AI governance policy from a major open-source project, setting a precedent that will ripple through the entire ecosystem.


The Linux kernel project has published formal guidance for contributors on the use of AI tools in patch development — the first major open-source infrastructure project to codify what AI assistance is and isn't acceptable in contributions. The guidance, surfaced on Hacker News with 348 points and 257 comments, marks a milestone in how critical open-source infrastructure handles the AI coding wave.

The core of the guidance distinguishes between AI-assisted and AI-generated patches. Using AI tools for research, documentation lookup, and code explanation is treated permissively. AI-generated code submitted without thorough human review, understanding, and testing falls under existing quality standards — which in practice is a high bar. Maintainers explicitly note that "I asked an LLM and it said this is correct" is not a substitute for the contributor understanding the code they're submitting.

The policy does not ban AI use. Instead, it reinforces the kernel's existing accountability model: the person submitting a patch is responsible for its correctness, regardless of how it was produced. This framing sidesteps the thorniest questions (IP ownership, watermarking, disclosure requirements) while preserving the project's ability to maintain quality.
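The accountability model the policy leans on predates AI entirely: kernel patches carry a Signed-off-by trailer, by which the submitter certifies the Developer Certificate of Origin. A minimal sketch of how that trailer is attached (the repository, author name, and commit message below are illustrative, not taken from the policy):

```shell
# Create a throwaway repo to demonstrate the trailer.
git init -q demo && cd demo
git config user.name "Jane Dev"
git config user.email "jane@example.com"

# Stage a trivial change and commit with -s, which appends a
# "Signed-off-by: Name <email>" trailer certifying the DCO.
echo 'placeholder' > file.c
git add file.c
git commit -q -s -m "example: illustrate the Signed-off-by trailer"

# The trailer names the person accountable for the patch,
# however the code itself was produced.
git log -1 --format=%B
```

The last command prints the commit message with the trailer `Signed-off-by: Jane Dev <jane@example.com>` appended, which is the hook the kernel's review process uses to tie every patch to a responsible human.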

Community reaction has been split largely along predictable lines. Experienced kernel contributors broadly support the clarification, noting it brings AI under the same rigor standards as any other tool. Newer contributors and observers from adjacent ecosystems worry it creates an implicit chilling effect on AI adoption in kernel work. Several comments noted the irony of the kernel — a project defined by Linus Torvalds' notoriously exacting code review standards — needing to formally address AI at all.

The broader significance extends well beyond Linux. Apache, CPython, and other major open-source foundations are watching closely. How the Linux kernel handles AI attribution and quality standards will effectively set the default template for the ecosystem — making this one of the more consequential policy documents of 2026 for open-source software.

Panel Takes

The Builder

Developer Perspective

This is the right call and the right framing: AI is a tool, and the contributor is still responsible for understanding what they submit. What matters is whether the code is correct and maintainable, not how it was produced. Every major open-source project needs a version of this policy, and Linux just wrote the template.

The Skeptic

Reality Check

The policy sounds reasonable until you consider enforcement: there's no reliable way to detect AI-generated code, and maintainers are already overworked reviewing patches for correctness. In practice everyone may land in a single tier, with AI-savvy contributors who review AI outputs carefully getting the same scrutiny as those who blindly submit LLM output. The accountability model only works if reviewers can actually catch poor AI submissions.

The Futurist

Big Picture

The Linux kernel just wrote the governance template for the entire open-source ecosystem. Within six months, Apache, CPython, and the major JavaScript foundations will have similar policies, almost certainly modeled on this framing. Accountability-over-disclosure is the framework that lets AI adoption continue while preserving quality standards — a genuinely smart call.