Large language models hallucinating non-existent developer packages could fuel supply chain attacks
Large Language Models (LLMs) have a serious “package hallucination” problem that could lead to a wave of maliciously coded packages in the supply chain, researchers have discovered in one of the largest and most in-depth studies yet to investigate the problem. It’s so bad, in fact, that across 30 different tests, the researchers found that 440,445…
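The attack hinges on a simple gap: nothing stops a developer from installing a package name an LLM invented, and nothing stops an attacker from registering that name first and filling it with malicious code. As a rough illustration (not taken from the study), the sketch below queries PyPI’s public JSON metadata endpoint to check whether LLM-suggested names are actually registered; the suggested package names here are hypothetical examples.

```python
import urllib.request
import urllib.error

# PyPI's public JSON metadata endpoint; returns 200 for registered
# packages and 404 for names that do not exist on the index.
PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published package on PyPI.

    A 404 means the name is unregistered -- exactly the kind of
    hallucinated name an attacker could later claim and weaponize.
    """
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other errors (e.g. rate limiting) prove nothing either way

# Hypothetical names an LLM might emit in generated install instructions.
for name in ["requests", "fastjson-utils-pro"]:
    verdict = "exists" if package_exists_on_pypi(name) else "UNREGISTERED -- treat as suspect"
    print(f"{name}: {verdict}")
```

A check like this only flags names that are missing from the index; it cannot tell a legitimate package from a malicious one that an attacker has already published under a previously hallucinated name, which is what makes the attack class hard to defend against.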