On the surface, the recent critique of a new tool called Context Hub, written by a developer who created an open-source alternative, looks like a simple demonstration that the tool can be misused. Look deeper, though, and it serves as a much broader warning to AI developers about the risks of relying on non-authoritative sources of information.
Two weeks ago, Andrew Ng, founder of the Silicon Valley technical training firm DeepLearning.AI, launched the product, describing it in a LinkedIn post as an open tool that gives a coding agent the up-to-date API documentation it needs.
“Install it and prompt your agent to use it to fetch curated docs via a simple CLI,” the post reads. “Why this matters: Coding agents often use outdated APIs and hallucinate parameters. For example, when I ask Claude Code to call OpenAI’s GPT-5.2, it uses the older chat completions API instead of the newer responses API, even though the newer one has been out for a year. Context Hub solves this. Context Hub is also designed to get smarter over time.”
According to Ng, using Context Hub, agents can even annotate docs with notes. “If your agent discovers a workaround, it can save it and doesn’t have to rediscover it next session,” he said. “Longer term, we’re building toward agents sharing what they learn with each other, so the whole community benefits.”
Poisoning the project
However, on Wednesday, Mickey Shmueli, the developer of LAP, which he described as an “open source alternative to Context Hub,” released a Context Hub supply chain attack proof of concept (PoC) on GitHub.
He explained the problem he’d discovered: Context Hub contributors submit docs as GitHub pull requests, maintainers merge them, and agents fetch the content on demand, “[but] the pipeline has zero content sanitization at every stage.”
He wrote that the project “[has] published more than 1,000 API documents, and added a feature letting agents annotate docs for other agents. We tested whether a poisoned document in that registry could silently compromise developer projects.”
In the test, he wrote, “we created realistic poisoned docs containing fake dependencies and served them through an … MCP server inside isolated Docker containers.” He emphasized, however, that no poisoned content was uploaded to Context Hub’s registry; the tests were run locally on an MCP server configured to serve pre-built output from disk. But from the agent’s perspective, the experience was identical to fetching docs from the live registry.
The result: “When AI coding assistants fetched the docs, [Claude] Haiku silently wrote the fake package into requirements.txt in 100% of runs without ever mentioning it in its text output. A developer reading the assistant’s response would see nothing suspicious, but their project is poisoned.”
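Shmueli’s finding — a dependency silently written into requirements.txt with no mention in the assistant’s visible output — suggests one practical countermeasure: snapshot the manifest before an agent session and diff it afterward. The sketch below is a minimal illustration of that idea, not part of the PoC; the file names and baseline path are assumptions.

```python
from pathlib import Path

MANIFEST = Path("requirements.txt")
# Hypothetical snapshot file taken before the agent session starts.
BASELINE = Path(".requirements.baseline")

def snapshot():
    """Record the current manifest so later changes can be detected."""
    BASELINE.write_text(MANIFEST.read_text())

def unexpected_additions():
    """Return package names present now that were absent from the baseline."""
    def packages(text):
        return {
            line.split("==")[0].strip().lower()
            for line in text.splitlines()
            if line.strip() and not line.startswith("#")
        }
    return sorted(packages(MANIFEST.read_text()) - packages(BASELINE.read_text()))

if __name__ == "__main__":
    added = unexpected_additions()
    if added:
        print(f"Packages added during the session, review before installing: {added}")
```

A check like this would have surfaced the fake package in Shmueli’s 100%-silent-injection runs, since it inspects the file itself rather than trusting the assistant’s narration.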
Only Claude Haiku, Sonnet, and Opus were tested; Opus fared best, Haiku worst. Results for other models such as GPT-4, Gemini, and Llama may differ, Shmueli noted.
Agentic AI likened to ‘high-speed idiot’
Responding to Shmueli’s findings, David Shipley, CEO of Beauceron Security, said Thursday, “[it is] time to have a moment of pure honesty about agentic AI. At its best, it’s a gullible, high-speed idiot that occasionally trips on hallucinogenic mushrooms that you’re giving the ability to act on your behalf. Stop and think about that. Would you knowingly hire a human that fit that description and then give them unsupervised access to code or your personal banking? I wouldn’t.”
LLM-based generative AI tools, he said, “do not have the capacity for critical thought or reasoning, period. They’re probability math and tokens. They’re faking reasoning by retuning and iterating prompts to reduce the chances of being wrong.”
That is not critical thinking, Shipley said, noting, “what was true in the 1950s remains true today: Garbage in, garbage out.”
People, he said, “built stochastic parrots that can be manipulated by sweet talking to them, and they call it prompt engineering. Dudes, it’s social engineering. And the more the AI industry keeps telling us about the Emperor’s New Clothes, the dumber we all look for believing them.”
Supply chain attacks a ‘serious and scalable threat’
Justin St-Maurice, technical counselor at Info-Tech Research Group, echoed Shipley’s concerns. He noted, “supply chain attacks are a serious and scalable threat, and what we’re seeing this week is a good example of why. The vulnerability isn’t necessarily in the application itself. It’s in the dependency chain, the shared libraries, the package repositories, all the common infrastructure these systems are built on top of.”
He added, “we’ve seen this pattern before, many times. A single flaw gets introduced upstream, and suddenly a huge range of downstream systems are exposed, often before anyone has caught it. What’s different now is the speed at which AI-assisted development is moving. Developers are pulling in shared dependencies, using AI-generated code, and moving fast. If something gets introduced into one of those common sources, it can propagate across a wide range of systems very quickly.”
And in an AI context, said St-Maurice, “the impact isn’t just passive. These systems can consume those inputs and act on them, which makes the potential impact a lot bigger.”
He noted, “the LiteLLM situation and what’s happening with Context Hub are two examples in the same week. It’s definitely worth paying attention to. Vibe coders and people building quickly on top of AI tools need to think seriously about how they’re validating dependencies and managing upstream risk. Relying on prompts alone won’t be enough to manage security risks.”
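One concrete way to act on St-Maurice’s advice about validating dependencies is to gate installs on a vetted allowlist, so a package injected upstream fails loudly before it is ever installed. The following is a hedged sketch of that pattern; the allowlist contents and parsing rules are illustrative assumptions, not a complete requirements-file parser.

```python
# Hypothetical team-maintained allowlist; the names here are only examples.
ALLOWED = {"requests", "numpy", "flask"}

def violations(manifest_text: str) -> list[str]:
    """Return requirement lines that name packages outside the allowlist."""
    bad = []
    for line in manifest_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip environment markers, then any version specifier or extras,
        # to recover the bare package name.
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
            name = name.split(sep)[0]
        if name.strip().lower() not in ALLOWED:
            bad.append(line)
    return bad
```

Run in CI before `pip install`, a gate like this turns a silent supply chain compromise into a failed build, shifting trust from the AI assistant’s output to a human-reviewed list.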