Open source has never been about a sprawling community of contributors. Not in the way we’ve imagined it, anyway. Most of the software we all depend on is maintained by a tiny core of people, often just one or two, doing unpaid work that companies treat as essential infrastructure, a mismatch recently documented by Brookings research.
That mismatch worked, if uncomfortably, when contributing had friction. After all, you had to care enough to reproduce a bug, understand the codebase, and risk looking dumb in public. But AI agents are obliterating that friction (and have no problem with looking dumb). Even Mitchell Hashimoto, the founder of HashiCorp and open source royalty, is now considering closing external PRs to his open source projects completely. Not because he’s losing faith in open source, but because he’s drowning in “slop PRs” generated by large language models and their AI agent henchmen.
This is the “agent psychosis” that Flask creator Armin Ronacher laments. Ronacher describes a state where developers become addicted to the dopamine hit of agentic coding and spin up agents to run wild through their own projects and, eventually, through everyone else’s. The result is a massive degradation of quality. These pull requests are often vibe-slop: code that feels right because it was generated by a statistical model but lacks the context, the trade-offs, and the historical understanding that a human maintainer brings to the table.
It’s going to get worse.
As SemiAnalysis recently noted, we have moved past simple chat interfaces into the era of agentic tools that live in the terminal. Claude Code can research a codebase, execute commands, and submit pull requests autonomously. This is a massive productivity gain for a developer working on their own project and a nightmare for the maintainer of a popular repository. The barrier to producing a plausible patch has collapsed, but the barrier to responsibly merging it has not.
This leads me to wonder if we’ll end up in a world where the best open source projects become those that are hardest to contribute to.
The cost of contribution
Let’s look at the economics driving this shift. The problem is the brutal asymmetry of review economics. It takes a developer 60 seconds to prompt an agent to fix typos and optimize loops across a dozen files. But it takes a maintainer an hour to carefully review those changes, verify they don’t break obscure edge cases, and ensure they align with the project’s long-term vision. Multiply that by a hundred contributors, all using their personal LLM assistants to help, and you don’t get a better project. You get a maintainer who walks away.
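That asymmetry is easy to quantify with a back-of-envelope sketch. The figures below are the illustrative numbers from the paragraph above (one minute to prompt, one hour to review, a hundred contributors), not measurements:

```typescript
// Back-of-envelope model of review asymmetry.
// All inputs are illustrative assumptions, not measured data.
const secondsToGeneratePatch = 60;      // one prompt to an agent
const secondsToReviewPatch = 60 * 60;   // one careful human review
const contributors = 100;

const hoursOfContributorEffort =
  (contributors * secondsToGeneratePatch) / 3600;
const hoursOfMaintainerEffort =
  (contributors * secondsToReviewPatch) / 3600;

console.log(hoursOfContributorEffort); // ~1.7 hours to generate it all
console.log(hoursOfMaintainerEffort);  // 100 hours to review it all
```

Under these assumptions, every hour of prompting produces sixty hours of review work for someone else. The exact numbers matter less than the shape of the curve: generation cost keeps falling while review cost does not.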
In the old days, a developer might find a bug, fix it, and submit a pull request as a way of saying thank you. It was a human transaction. Now that transaction has been automated, and the thank you has been replaced by a mountain of digital noise. The OCaml community recently faced a vivid example of this when maintainers rejected an AI-generated pull request containing more than 13,000 lines of code. They cited copyright concerns, lack of review resources, and the long-term maintenance burden. One maintainer warned that such low-effort submissions create a real risk of bringing the pull request system to a halt.
Even GitHub is feeling this at platform scale. As my InfoWorld colleague Anirban Ghoshal reported, GitHub is exploring tighter pull request controls and even UI-level deletion options because maintainers are overwhelmed by AI-generated submissions. If the host of the world’s largest code forge is exploring a kill switch for pull requests, we are no longer talking about a niche annoyance. We are talking about a structural shift in how open source gets made.
This shift is hitting small open source projects the hardest. Nolan Lawson recently explored this in a piece titled “The Fate of ‘Small’ Open Source.” Lawson is the author of blob-util, a library with millions of downloads that helps developers work with Blobs in JavaScript. For a decade, blob-util was a staple because it was easier to install the library than to write the utility functions yourself. But in the age of Claude and GPT-5, why would you take on a dependency? You can simply ask your AI to write a utility function, and it will spit out a perfectly serviceable snippet in milliseconds. Lawson’s point is that the era of the small, low-value utility library is over. AI has made them obsolete. If an LLM can generate the code on command, the incentive to maintain a dedicated library for it vanishes.
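Lawson’s point is easy to see in code. A typical “small” utility, of the kind blob-util bundles, is short enough that an LLM reproduces it on demand. The sketch below is illustrative of the genre, not blob-util’s actual implementation:

```typescript
// Convert a base64 string into a Blob -- the kind of ten-line utility
// that once justified taking on a dependency.
// (Illustrative sketch, not blob-util's actual implementation.)
function base64ToBlob(base64: string, type = "application/octet-stream"): Blob {
  const binary = atob(base64);                  // decode base64 to a binary string
  const bytes = new Uint8Array(binary.length);  // copy char codes into raw bytes
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type });
}
```

When a snippet like this can be regenerated in milliseconds, the calculus of “install the library versus write it yourself” flips, and with it the economics of maintaining the library at all.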
Build it, don’t borrow it
Something deeper is being lost here. These libraries were educational tools where developers learned how to solve problems by reading the work of others. When we replace those libraries with ephemeral, AI-generated snippets, we lose the teaching mentality that Lawson believes is the heart of open source. We are trading understanding for instant answers.
This leads to Ronacher’s other provocation from a year ago: the idea that we should just build it ourselves. He suggests that if pulling in a dependency means dealing with constant churn, the logical response is to retreat. He suggests a vibe shift toward fewer dependencies and more self-reliance. Use the AI to help you, in other words, but keep the code inside your own walls. This is a weird irony: AI may reduce demand for small libraries while simultaneously increasing the volume of low-quality contributions into the libraries that remain.
All of this prompts a question: If open source is not primarily powered by mass contribution, what does it mean when the contribution channel becomes hostile to maintainers?
It likely leads us to a state of bifurcation. On one side, we’ll have massive, enterprise-backed projects like Linux or Kubernetes. These are the cathedrals, the bourgeoisie, and they’re increasingly guarded by sophisticated gates. They have the resources to build their own AI-filtering tools and the organizational weight to ignore the noise. On the other side, we have more “provincial” open source projects—the proletariat, if you will. These are projects run by individuals or small cores who have simply stopped accepting contributions from the outside.
The irony is that AI was supposed to make open source more accessible, and it has. Sort of. But in lowering the barrier, it has also lowered the value. When everyone can contribute, nobody’s contribution is special. When code is a commodity produced by a machine, the only thing that remains scarce is the human judgment required to say no.
The future of open source
Open source isn’t dying, but the “open” part is being redefined. We’re moving away from the era of radical transparency, of “anyone can contribute,” and heading toward an era of radical curation. The future of open source, in short, may belong to the few, not the many. Yes, open source’s “community” was always a bit of a lie, but AI has finally made the lie unsustainable. We’re returning to a world where the only people who matter are the ones who actually write the code, not the ones who prompt a machine to do it for them. The era of the drive-by contributor is being replaced by an era of the verified human.
In this new world, the most successful open source projects will be the ones that are the most difficult to contribute to. They will demand a high level of human effort, human context, and human relationship. They will reject the slop loops and the agent psychosis in favor of slow, deliberate, and deeply personal development. The bazaar was a fun idea while it lasted, but it couldn’t survive the arrival of the robots. The future of open source is smaller, quieter, and much more exclusive. That might be the only way it survives.
In sum, we don’t need more code; we need more care. Care for the humans who shepherd the communities and create code that will endure beyond a simple prompt.