AI Dependency Hallucination: How Vibe Owl Detects Fake AI-Suggested Packages
AI coding assistants accelerate development — but they also introduce a new class of supply chain risk that traditional scanners were never designed to catch. When Cursor, ChatGPT, Claude, or GitHub Copilot suggests a package name, that name is sometimes entirely invented. Attackers monitor AI output patterns, identify hallucinated names, and publish malicious packages under those exact names before any developer runs npm install. Vibe Owl's dependency hallucination detector verifies every package in your manifest against live npm and PyPI registries at scan time, catching these threats before they execute.
What Is AI Dependency Hallucination?
AI dependency hallucination occurs when an AI coding assistant suggests a package name that sounds completely plausible but does not exist on any registry. The AI generates a believable-but-fake name based on patterns in its training data, and the developer adds it to their manifest without verifying it exists.
Classic supply chain attacks require an attacker to guess which packages developers might mistype. Dependency hallucination removes that guesswork entirely. The AI does the work for the attacker by generating a convincing fake name. The attacker simply claims it first on npm or PyPI — knowing that every developer following that AI recommendation will install their payload automatically.
The attack pattern is well-documented. A developer asks an AI assistant: “Add a package to handle JWT refresh token rotation in Node.” The AI suggests jwt-refresh-handler. That package does not exist. An attacker has already published jwt-refresh-handler with a postinstall script that exfiltrates environment variables. The developer runs npm install and the payload executes during installation — before a single line of application code runs.
How Does a Hallucinated Package Become a Supply Chain Attack?
A hallucinated package name becomes a weaponised attack vector the moment an attacker claims it on npm or PyPI. The attacker monitors AI coding assistant outputs, identifies package names that do not yet exist, and publishes under those names before developers install them. Because the name came directly from an AI assistant the developer trusts, the package receives no manual scrutiny.
The attack succeeds at install time, not at runtime. npm and PyPI both support postinstall scripts that execute arbitrary shell commands when a package is installed. A malicious postinstall can read and exfiltrate all environment variables — including AWS_SECRET_ACCESS_KEY, OPENAI_API_KEY, database credentials, and CI/CD tokens — in under a second. By the time the developer notices the package doesn't actually do what the AI promised, the credentials are already compromised.
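The install-time mechanics above can be made concrete with a short sketch. This is purely illustrative: it shows how an install-time payload is declared in a package manifest, and a minimal check (the function name and sample manifest are hypothetical, not Vibe Owl internals) that flags lifecycle hooks which run during npm install.

```javascript
// Illustrative only: npm runs these lifecycle scripts automatically at
// install time, before any application code executes.
const INSTALL_HOOKS = ['preinstall', 'install', 'postinstall'];

// Return the install-time hooks declared in a parsed package.json object.
function findInstallScripts(manifest) {
  const scripts = manifest.scripts || {};
  return INSTALL_HOOKS.filter((hook) => hook in scripts);
}

// A manifest shaped like the attack described above (hypothetical package).
const suspicious = {
  name: 'jwt-refresh-handler',
  version: '1.0.0',
  scripts: {
    // A payload here would run during `npm install`, with full access to
    // process.env, before the developer ever imports the package.
    postinstall: 'node ./collect-env.js',
  },
};

console.log(findInstallScripts(suspicious)); // → [ 'postinstall' ]
```

Running npm install with --ignore-scripts disables these hooks, but at the cost of breaking packages that legitimately need them, which is why registry-side verification is the more practical defence.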
Recency alone is a warning sign, even before any attacker is confirmed to be involved: a package published within the last 7 days with zero community adoption is a red flag. Legitimate libraries referenced by AI training data have histories spanning months or years. A brand-new package appearing in a project immediately after an AI session is a strong signal that something is wrong.
How Does Dependency Hallucination Differ from Dependency Confusion?
Dependency confusion and dependency hallucination are distinct attack vectors that target different weaknesses in the package installation pipeline. Dependency confusion exploits the resolution order between private and public registries — an attacker publishes a public package with the same name as an internal private one, and the package manager installs the public version instead. No AI assistant is involved; the attack targets organisations with private registries.
Dependency hallucination requires no private registry and no existing package to approximate. The AI invents a name from scratch. The attacker publishes under that invented name on the public registry and waits. Every developer using that AI recommendation becomes a potential victim regardless of their infrastructure setup. Solo developers, startups, and enterprises are equally exposed.
Both attack types exploit the package installation lifecycle, but they require different defences. Traditional npm supply chain attack prevention covers typosquatting, lockfile integrity, and install script detection — but none of those checks catch a package that simply does not exist on the registry yet. Hallucination detection requires live registry verification, which is what Vibe Owl adds.
MCP tool poisoning is a related but distinct AI-specific attack vector that operates one layer above the package system. Compromised MCP servers inject instructions directly into the AI's context window, bypassing the package installation step entirely — the malicious code appears in generated suggestions rather than in an installed package.
How Does Vibe Owl Detect Hallucinated Dependencies?
Vibe Owl verifies every package in your manifest against live registry APIs at scan time. A 404 response from the npm registry or PyPI means the package does not exist — flagged Critical. A package published within the last 7 days is also Critical, as that is the classic squatting window where attackers claim hallucinated names the moment they appear in AI outputs. Packages published 8–30 days ago are flagged High — too new for meaningful community vetting.
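The severity rules above reduce to a small decision function. This is a minimal sketch of that logic as described in this article, not Vibe Owl's actual implementation; the function and field names are my own.

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// registryResult: { exists: boolean, firstPublished?: epoch ms }
function classifySeverity(registryResult, now = Date.now()) {
  if (!registryResult.exists) return 'Critical'; // 404: package does not exist
  const ageDays = (now - registryResult.firstPublished) / DAY_MS;
  if (ageDays <= 7) return 'Critical';  // classic squatting window
  if (ageDays <= 30) return 'High';     // too new for community vetting
  return null;                          // no hallucination signal
}
```

For example, a package first published three days before the scan classifies as Critical, one published two weeks earlier as High, and an established package produces no finding.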
All HTTP calls use Node.js's built-in https module, so no new dependencies are added to your project. Requests go to registry.npmjs.org/{package} for npm and pypi.org/pypi/{package}/json for Python, checking both existence and the earliest publish date. Requests are batched in groups of 5 with a 5-second timeout per request — the feature always fails open on network errors, never producing a false positive due to connectivity issues.
Results are cached in memory with a 24-hour TTL. Running multiple scans in one session does not re-query packages already checked. A hard cap of 30 unique packages per scan prevents runaway API usage on large dependency trees. Scoped packages like @types/node and a bundled skip list of ~110 well-known packages across npm and PyPI are excluded automatically, keeping scan time fast and results actionable.
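The caching and capping behaviour can be sketched in a few lines. The constants mirror the figures in this article; the structure itself is illustrative rather than Vibe Owl's actual code.

```javascript
const TTL_MS = 24 * 60 * 60 * 1000;   // 24-hour cache lifetime
const MAX_PACKAGES_PER_SCAN = 30;     // hard cap on registry lookups per scan

const cache = new Map(); // package name → { result, expires }

function getCached(name, now = Date.now()) {
  const entry = cache.get(name);
  if (entry && entry.expires > now) return entry.result;
  cache.delete(name); // expired entries are evicted lazily
  return undefined;
}

function putCached(name, result, now = Date.now()) {
  cache.set(name, { result, expires: now + TTL_MS });
}

// Only uncached packages count against the per-scan cap.
function selectForScan(packageNames) {
  const pending = packageNames.filter((n) => getCached(n) === undefined);
  return pending.slice(0, MAX_PACKAGES_PER_SCAN);
}
```

With this shape, a second scan in the same session skips every package resolved in the first, and a project with hundreds of dependencies still issues at most 30 registry calls per scan.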
Vibe Owl's full dependency security scanner covers six ecosystems including Go, Rust, Java, and Swift — but hallucination detection currently applies to npm and Python, the two ecosystems most frequently targeted by AI assistant suggestions.
What Happens When Vibe Owl Flags a Hallucinated Package?
Vibe Owl places an inline squiggle at the exact line in the manifest file where the flagged package is declared — package.json, requirements.txt, or pyproject.toml. Hovering over the squiggle shows the registry status (does not exist or newly published) and the severity level directly in the editor, without switching to a separate panel.
The finding flows through the same pipeline as all other dependency checks. It appears in the Dependencies section of the All Findings panel with the package name, registry status, publish date (if the package exists), and severity. Critical findings surface in Zone 2 — Active Issues — which ensures they appear at the top of the findings list and are not buried below lower-severity warnings.
The same package appearing across multiple manifest files or dependency buckets (dependencies, devDependencies, optionalDependencies) is only checked once per scan and produces a single finding. This deduplication keeps the findings list clean on projects that declare the same package in multiple contexts.
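The deduplication described above amounts to keying findings by package name. A minimal sketch (field names are hypothetical):

```javascript
// declarations: [{ name, file, bucket }, ...] collected from all manifests.
// Returns one entry per unique package, keeping the first declaration site.
function uniquePackages(declarations) {
  const seen = new Map();
  for (const d of declarations) {
    if (!seen.has(d.name)) seen.set(d.name, d);
  }
  return [...seen.values()];
}
```

A package listed in both dependencies and devDependencies, or in two workspace manifests, thus yields exactly one registry check and one finding.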
Vibe Owl's approach to vibe coding security treats dependency hallucination as part of a broader set of risks that AI-assisted workflows introduce — alongside hardcoded secrets, eval injection, and command injection patterns. All checks run locally with no data sent to any external service.
Which Package Ecosystems Does the Hallucination Check Cover?
The hallucination check parses three manifest formats: package.json for npm (reading all four dependency buckets: dependencies, devDependencies, optionalDependencies, and peerDependencies), requirements.txt for Python (splitting on version specifiers like ==, >=, <=), and pyproject.toml for Python projects using Poetry or PEP 621 dependency arrays.
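The requirements.txt case, splitting on version specifiers to recover the bare package name, can be sketched as follows. This is an illustrative parser, not the scanner's real one, and it handles only the common line shapes.

```javascript
// Return the bare package name from one requirements.txt line, or null for
// blank lines and comments. Splits on the first version specifier, extras
// bracket, or environment-marker separator.
function parseRequirementsLine(line) {
  const trimmed = line.trim();
  if (!trimmed || trimmed.startsWith('#')) return null;
  const name = trimmed.split(/[=<>!~;\[ ]/)[0];
  return name || null;
}
```

So 'requests==2.31.0' yields 'requests', 'numpy>=1.26' yields 'numpy', and a bare 'flask' passes through unchanged.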
Scoped npm packages — those prefixed with an organisation namespace like @types/, @vscode/, or @aws-sdk/ — are skipped entirely. Organisationally namespaced packages are virtually never hallucinated because the AI correctly associates them with real organisations. Packages on the bundled skip list of roughly 60 well-known npm names and 50 well-known PyPI names (React, Express, NumPy, requests, etc.) are also excluded, avoiding unnecessary registry calls for packages every developer recognises.
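The skip logic is a simple pre-filter before any registry call. The list below is a tiny illustrative subset of the real ~110-entry bundled list, and the function name is my own.

```javascript
// Illustrative subset of the bundled skip list of well-known packages.
const SKIP_LIST = new Set(['react', 'express', 'numpy', 'requests']);

// Decide whether a package name needs a live registry check.
function shouldCheck(packageName) {
  if (packageName.startsWith('@')) return false;          // scoped, e.g. @types/node
  if (SKIP_LIST.has(packageName.toLowerCase())) return false; // well-known name
  return true;
}
```

Everything the filter rejects is assumed safe from a hallucination standpoint; everything it passes, like an unfamiliar jwt-refresh-handler, goes on to the existence and recency checks.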
Download count analysis, maintainer reputation scoring, and fuzzy typosquatting matching against a full package corpus are planned for a future Pro update. The current check focuses on existence and recency — the two signals that directly identify hallucinated names and the squatting window where attackers operate.