Enterprise use of open source AI coding is changing the ROI calculation

Coders are understandably complaining about AI coding problems, with the technology often delivering what’s become known as “AI slop,” but their concerns point to a more strategic issue: how enterprises calculate coding ROI.

The issues, according to IT analysts and consultants, go far beyond the vastly faster production of code accompanied by the kinds of errors made by AI agents that don’t truly understand the human implications of the code they produce.

Even when the resulting code functions properly, which it often doesn’t, it introduces a wide range of corporate risks: legal (copyright, trademark, or patent infringement), cybersecurity (backdoors and inadvertently introduced malware), and accuracy (hallucinations, as well as models trained or fine-tuned on inaccurate data). Some of those issues stem from poorly worded prompts; others occur because the model misinterpreted well-formed prompts.

This issue was explored this week in a discussion on the Bluesky social media site initiated by user Rémi Verschelde, a French developer living in Copenhagen, who said that he is the project manager and lead maintainer for @godotengine.org, as well as a co-founder of a gaming firm. 

AI slop impacts enterprises

“AI slop PRs [pull requests] are becoming increasingly draining and demoralizing for Godot maintainers,” he said. “We find ourselves having to second guess every PR from new contributors, multiple times per day.” Questions arise about whether the code was written at least in part by a human, and whether the ‘author’ understands the code they’re sending.

He asked, “did they test it? Are the test results made up? Is this code wrong because it was written by AI or is it an honest mistake from an inexperienced human contributor? What do you do when you ask a PR author if they used AI, because you’re suspicious, and they all reply ‘yes, I used it to write the PR description because I’m bad with English’?”  

These problems with AI coding are affecting executives throughout IT, legal, compliance, and cybersecurity. That is mostly because AI is not merely putting out code thousands of times more quickly; the problems associated with AI and open source are increasing even more rapidly.

There are even reports that AI agents are fighting back against open source maintainers. 

These are especially vexing issues for enterprise executives, because many larger companies are moving more AI projects to open source to avoid problems such as data leaks and unauthorized data use associated with the major hyperscalers.

The problem isn’t that the code is bad

Vaclav Vincalek, CTO at personalized web vendor Hiswai, said the problem with much vibe coding is not that the code looks bad. Ironically, the problem is that it looks quite good. 

“The biggest risk with AI-generated code isn’t that it’s garbage, it’s that it’s convincing. It compiles, it passes superficial review and it looks professional, but it may embed subtle logic errors, security flaws, or unmaintainable complexity,” Vincalek said. “AI slop isn’t just a quality issue. It’s a long-term ownership issue. Maintainers aren’t reviewing a patch [as much as they are] adopting a liability they may have to support for years.”

Another irony Vincalek flagged is that some enterprises have been turning to open source to avoid the very issues that AI-generated code is now introducing into open source itself.

“Some enterprises think open source is a refuge from hyperscaler AI risk, but AI-generated code is now flowing into open source itself. If you don’t have strong governance, you’re just shifting the risk upstream,” Vincalek said. “AI has lowered the cost of producing code to near zero, but the cost of reviewing and maintaining it hasn’t changed. That imbalance is crushing maintainers.”

Vincalek argued that the fix for this problem is to push back far more on those submitting the AI-generated code. 

“One of the simplest anti-slop mechanisms is forcing contributors to explain the intent behind the code. AI can generate syntax, but it can’t justify design decisions,” Vincalek said. “Projects need AI contribution policies the same way they need licensing policies. If someone can’t explain or maintain what they submit, it doesn’t belong in the codebase.”
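
How such a policy gate might look in practice is sketched below, purely as an illustration of Vincalek’s point. The required section headings and the PR_BODY environment variable are assumptions made for this example, not any particular platform’s API.

```python
# Minimal sketch of a contribution-policy gate: a CI step that fails a pull
# request unless its description explains the intent behind the change,
# how it was tested, and whether AI assistance was used.
# The section names and PR_BODY variable are illustrative assumptions.

import os
import sys

REQUIRED_SECTIONS = ["## Intent", "## How it was tested", "## AI assistance"]


def missing_sections(body: str) -> list[str]:
    """Return the required sections that are absent from the PR description."""
    return [section for section in REQUIRED_SECTIONS if section not in body]


if __name__ == "__main__":
    body = os.environ.get("PR_BODY", "")
    missing = missing_sections(body)
    if missing:
        print("PR rejected by contribution policy; missing sections:")
        for section in missing:
            print(f"  - {section}")
        sys.exit(1)
    print("PR description satisfies the contribution policy.")
```

A check like this cannot tell whether an explanation is honest, but it forces a contributor to put design intent in writing, which is exactly the justification Vincalek argues AI cannot supply on its own.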

One criticism of AI coding has been that the agents do not actually understand how humans function. For example, on a LinkedIn discussion forum, an AWS executive posted about an AI system that was creating a series of registration pages and had extrapolated from other examples how those pages should look and function. But it drew the wrong conclusion. From the username, email address, and phone number fields, it learned that if a submitted value already existed in the system, the input should be rejected and a different value required. It then applied the same logic to a field asking for age, and rejected an answer because “user with this age already exists.” A rough reconstruction of that failure is sketched below.
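
The following is a hypothetical reconstruction of that kind of bug, not the actual AWS system; the field names and sample data are invented for illustration. It shows how a uniqueness rule that is sensible for usernames and emails becomes nonsense when applied indiscriminately to every field.

```python
# Hypothetical reconstruction: a validator that blindly applies a
# "value must not already exist" rule to every field, including ones
# where duplicates are perfectly normal, such as age.

existing_users = [
    {"username": "alice", "email": "alice@example.com", "phone": "555-0100", "age": 34},
    {"username": "bob", "email": "bob@example.com", "phone": "555-0101", "age": 29},
]


def validate_registration(new_user: dict) -> list[str]:
    """Reject any field value that already exists for another user."""
    errors = []
    for field, value in new_user.items():
        if any(user.get(field) == value for user in existing_users):
            # Reasonable for username/email/phone, nonsensical for age.
            errors.append(f"user with this {field} already exists")
    return errors


print(validate_registration(
    {"username": "carol", "email": "carol@example.com", "phone": "555-0102", "age": 29}
))
# -> ['user with this age already exists']
```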

Workflow changes needed

Jason Andersen, principal analyst at Moor Insights & Strategy, said the AI coding problem is not solely with code creation, but in how enterprises handle the process. 

“What AI really needs these days is a change of workflow [to deal with the] increasing amount of crap that you have to inspect. Where we are with AI right now is that one step in a long process happens very fast, but that doesn’t mean the other steps have caught up,” Andersen said. “A 30% increase in coding productivity delivers strains across the entire process. If it doubles, the system would break down. There are pieces of this that are starting to come together, but it’s going to take a lot longer than people think.”

Andersen, who described these coding agents as “robotic toddlers,” said that IT had been demanding accelerated coding and then chose to embrace AI-accelerated open source. “But now that the Pandora’s Box has been opened,” he said, IT teams are unhappy with the results.

Andersen compared this to a large marketing department that begs partners for as many sales leads as they can find and then later complains, “all of these leads suck.”

ROI calculations need revamping

Rock Lambros, CEO of security firm RockCyber, added that the ROI calculations need to be completely reconsidered.

“AI-made code is now almost free to produce, but it did nothing to reduce the cost of reviewing it,” he pointed out. “A contributor can generate a 500-line pull request in 90 seconds. Yet a maintainer still needs 2 hours to determine whether it’s sound. That asymmetry is what’s crushing open source teams right now.”
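
A back-of-the-envelope version of that asymmetry, using Lambros’ illustrative numbers rather than any measured data, looks like this:

```python
# Generation vs. review asymmetry, using the illustrative figures quoted above:
# 90 seconds to generate a 500-line PR, 2 hours for a maintainer to review it.

generation_seconds = 90
review_seconds = 2 * 60 * 60  # 2 hours

asymmetry = review_seconds / generation_seconds
print(f"Review takes roughly {asymmetry:.0f}x longer than generation")
# -> Review takes roughly 80x longer than generation

# Scale it up: ten such PRs in a week cost contributors minutes
# but cost maintainers most of a working week.
prs_per_week = 10
print(f"Contributor time: {prs_per_week * generation_seconds / 60:.0f} minutes/week")
print(f"Maintainer time:  {prs_per_week * review_seconds / 3600:.0f} hours/week")
```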

He noted that this isn’t just a code quality problem; it’s a supply chain security risk. “Nobody is paying attention to context rot, the gradual loss of coherence that happens over long AI generation sessions,” he said, noting that an agent might implement proper validation in one file and silently cease to do so in another. In fact, he said, research from UT San Antonio found that roughly 20% of package names in AI-generated code don’t even exist, and “attackers are already squatting [on] those names.”
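
One illustrative guardrail against hallucinated dependencies, separate from anything the researchers or Lambros proposed, is to verify that every AI-suggested package actually exists on the registry before installing it. The sketch below checks PyPI; the made-up package name is an assumption standing in for a hallucinated dependency.

```python
# Illustrative pre-install check: confirm that each AI-suggested dependency
# actually exists on PyPI before it gets installed. Hallucinated names are
# exactly what "slopsquatting" attackers try to register.

import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has metadata for this package name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        # 404 (package not found) or network failure.
        return False


suggested = ["requests", "numpy", "fastjson-utils-pro"]  # last name is invented
for name in suggested:
    status = "ok" if package_exists_on_pypi(name) else "DOES NOT EXIST -- do not install"
    print(f"{name}: {status}")
```

Existence alone is not proof of safety: a squatter may already have registered a commonly hallucinated name, so provenance and maintainer reputation still need review.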

Degradation of trust

Consultant Ken Garnett, founder of Garnett Digital Strategies, said he sees the problem as a degradation of the trust that open source has historically delivered.

“It’s what I’d call a verification collapse. Rémi Verschelde isn’t simply saying ‘the code is bad.’ He’s describing a system in which maintainers can no longer trust the signals they’ve always relied upon,” he said. “That’s a considerably deeper and more consequential problem than low-quality code alone, because it corrodes the trust infrastructure that open-source contribution has always depended on.”

Accumulating risk

Enterprises have scaled AI generation without redesigning the reviewing process to validate it, he noted. “The submission side of the workflow received, essentially, a ten-times speed multiplier. The human review side received nothing,” Garnett said. “The result is exactly what Godot is experiencing: a small, dedicated group of people drowning under a volume of work the system was never structured for them to handle. This is the entirely predictable consequence of accelerating one half of a workflow without touching the other.”

He added: “For enterprise IT leaders, the more uncomfortable question is whether they’ve built any accountability structure around AI-assisted code at all, or whether they’ve simply handed developers a faster instrument and assumed quality would follow. Consequently, what they’re often dealing with now isn’t an AI problem so much as a governance gap that AI has made impossible to ignore.”

Cybersecurity consultant Brian Levine, executive director of FormerGov, succinctly summed up the issue: “AI slop creates a false sense of velocity. You think you’re shipping faster, but you’re actually accumulating risk faster than your team can pay it down.”
