The Electronic Frontier Foundation (EFF) Thursday changed its policies regarding AI-generated code to “explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human.”
The EFF policy statement was vague about how it would determine compliance, but analysts and others watching the space speculate that spot checks are the most likely route.
The statement specifically said that the organization is not banning AI coding by its contributors, though the decision to stop short of a ban appeared reluctant. EFF said such a ban would be “against our general ethos,” and that “[AI tools] use has become so pervasive [that] a blanket ban is impractical to enforce.” It added that the companies creating these AI tools are “speedrunning their profits over people. We are once again in ‘just trust us’ territory of Big Tech being obtuse about the power it wields.”
The spot-check model is similar to the strategy of tax agencies, where the fear of being audited encourages compliance.
Cybersecurity consultant Brian Levine, executive director of FormerGov, said that the new approach is probably the best option for the EFF.
“EFF is trying to require one thing AI can’t provide: accountability. This might be one of the first real attempts to make vibe coding usable at scale,” he said. “If developers know they’ll be held responsible for the code they paste in, the quality bar should go up fast. Guardrails don’t kill innovation, they keep the whole ecosystem from drowning in AI‑generated sludge.”
He added, “Enforcement is the hard part. There’s no magic scanner that can reliably detect AI‑generated code and there may never be such a scanner. The only workable model is cultural: require contributors to explain their code, justify their choices, and demonstrate they understand what they’re submitting. You can’t always detect AI, but you can absolutely detect when someone doesn’t know what they shipped.”
EFF is ‘just relying on trust’
Jacob Hoffman-Andrews, EFF senior staff technologist and a spokesperson for the organization, said his team was not focusing on ways to verify compliance, nor on ways to punish those who do not comply. “The number of contributors is small enough that we are just relying on trust,” Hoffman-Andrews said.
If the group finds that someone has violated the rule, it will explain the rules and ask the person to comply. “It’s a volunteer community with a culture and shared expectations,” he said. “We tell them, ‘This is how we expect you to behave.’”
Brian Jackson, a principal research director at Info-Tech Research Group, said that enterprises will likely enjoy a secondary benefit of policies such as the EFF’s, which should improve the quality of many open-source submissions.
Many enterprises don’t have to worry about whether a developer understands the code they submit, as long as it passes an exhaustive list of tests covering functionality, cybersecurity, and compliance, he pointed out.
“At the enterprise level, there is real accountability, real productivity gains. Does this code exfiltrate data to an unwanted third party? Does the security test fail?” Jackson said. “They care about the quality requirements that are not being hit.”
Focus on the docs, not the code
The problem of low-quality AI-generated code, often dubbed AI slop, making its way into enterprises and other businesses is a growing concern.
Faizel Khan, lead engineer at LandingPoint, said the EFF decision to focus on the documentation and the explanations for the code, as opposed to the code itself, is the right one.
“Code can be validated with tests and tooling, but if the explanation is wrong or misleading, it creates a lasting maintenance debt because future developers will trust the docs,” Khan said. “That’s one of the easiest places for LLMs to sound confident and still be incorrect.”
Khan suggested some easy questions that submitters need to be forced to answer. “Give targeted review questions,” he said. “Why this approach? What edge cases did you consider? Why these tests? If the contributor can’t answer, don’t merge. Require a PR summary: What changed, why it changed, key risks, and what tests prove it works.”
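One way a project could operationalize review questions like Khan’s is a lightweight merge gate that rejects pull requests whose descriptions omit the required sections. The sketch below is purely illustrative and is not part of EFF’s policy; the section headings and the assumption that a CI job exports the pull request description in a PR_BODY environment variable are hypothetical.

```python
import os
import re
import sys

# Sections a PR description must contain before a reviewer looks at it.
# These headings mirror Khan's suggested summary: what changed, why it
# changed, key risks, and the tests that prove it works. (Illustrative only.)
REQUIRED_SECTIONS = ["What changed", "Why it changed", "Key risks", "Tests"]


def missing_sections(pr_body: str) -> list[str]:
    """Return the required section headings absent from the PR description."""
    return [
        section
        for section in REQUIRED_SECTIONS
        if not re.search(re.escape(section), pr_body, re.IGNORECASE)
    ]


if __name__ == "__main__":
    # Assume the CI job exports the pull request description as PR_BODY.
    body = os.environ.get("PR_BODY", "")
    absent = missing_sections(body)
    if absent:
        print("PR description is missing sections: " + ", ".join(absent))
        sys.exit(1)  # Fail the check so the PR cannot be merged as-is.
    print("PR description contains all required sections.")
```

A check like this cannot verify understanding on its own; it only ensures the review conversation described below has something concrete to start from.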
Independent cybersecurity and risk advisor Steven Eric Fisher, former director of cybersecurity, risk, and compliance for Walmart, said that what EFF has cleverly done is focus less on the code itself than on overall coding integrity.
“EFF’s policy is pushing that integrity work back on the submitter, versus loading OSS maintainers with that full burden and validation,” Fisher said, noting that current AI models are not very good at producing detailed documentation, comments, and articulated explanations. “So that deficiency works as a rate limiter, and somewhat of a validation of work threshold,” he explained. The approach may be effective now, he added, but only until the technology catches up and can generate detailed documentation, comments, and reasoned justifications on its own.
Consultant Ken Garnett, founder of Garnett Digital Strategies, agreed with Fisher, suggesting that the EFF had employed what might be considered a judo move.
Sidesteps detection problem
EFF “largely sidesteps the detection problem entirely and that’s precisely its strength. Rather than trying to identify AI-generated code after the fact, which is unreliable and increasingly impractical, they’ve done something more fundamental: they’ve redesigned the workflow itself,” Garnett said. “The accountability checkpoint has been moved upstream, before a reviewer ever touches the work.”
The review conversation itself acts as an enforcement mechanism, he explained. If a developer submits code they don’t understand, they’ll be exposed when a maintainer asks them to explain a design decision.
This approach delivers “disclosure plus trust, with selective scrutiny,” Garnett said, noting that the policy shifts the incentive structure upstream through the disclosure requirement, verifies human accountability independently through the human-authored documentation rule, and relies on spot checking for the rest.
Nik Kale, principal engineer at Cisco and member of the Coalition for Secure AI (CoSAI) and ACM’s AI Security (AISec) program committee, said that he liked the EFF’s new policy precisely because it didn’t make the obvious move and try to ban AI.
“If you submit code and can’t explain it when asked, that’s a policy violation regardless of whether AI was involved. That’s actually more enforceable than a detection-based approach because it doesn’t depend on identifying the tool. It depends on identifying whether the contributor can stand behind their work,” Kale said. “For enterprises watching this, the takeaway is straightforward. If you’re consuming open source, and every enterprise is, you should care deeply about whether the projects you depend on have contribution governance policies. And if you’re producing open source internally, you need one of your own. EFF’s approach, disclosure plus accountability, is a solid template.”