⚠️ Warning: Your AI Coder Might Be Lying
Is your AI productivity boost actually a security time bomb?
So you've embraced AI for coding. Your team is happily "vibe coding" away, letting ChatGPT and other AI tools suggest software components to speed up development. Productivity is through the roof!
(If you missed my newsletter on "vibe coding" - it's the AI-assisted approach where you "fully give in to the vibes" and let AI generate software based on your plain-English descriptions instead of writing code yourself. Check it out here.)
But what if I told you those helpful AI suggestions could be your biggest security vulnerability? ✋
According to new research, there's a fresh attack vector called "slopsquatting" that's emerging from AI's tendency to, well, make stuff up. And unlike other security threats, this one is literally built on things that don't exist.
Let's unpack this bizarre new risk before your developers accidentally invite hackers to your next sprint review.
What's Actually Happening?
First, a quick primer for non-technical folk:
When developers build software, they use "packages" or "dependencies" – pre-built components that provide specific functionality (think of them like ready-made ingredients when cooking, rather than making everything from scratch). These packages are stored in central repositories with specific names that developers reference in their code.
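To make that concrete, here is a minimal sketch in Python, one of the ecosystems the research examined. The package shown, requests, is a real and widely used HTTP library chosen purely as an illustration; the point is simply that developers install a package by name from a central repository and then reference it in their code.

```python
# A developer pulls in a package by name from the central repository (PyPI):
#
#     pip install requests
#
# ...and then references it in code. "requests" is a real, legitimate HTTP
# library, used here only to show how ready-made functionality gets imported.
import requests

response = requests.get("https://example.com")  # functionality supplied by the package
print(response.status_code)
```

The entire slopsquatting problem comes down to what happens when the name in that install command was invented by an AI rather than published by a real project.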
Security researcher Seth Larson coined the term "slopsquatting" (a play on "typosquatting" – where attackers create malicious websites with URLs that are common misspellings of popular sites) to describe a new class of attacks:
AI coding tools are "hallucinating" package names that don't actually exist
In about 20% of cases (from 576,000 generated code samples), AI recommended non-existent packages
Even GPT-4 hallucinates at a rate of about 5% (open-source models are much worse)
These hallucinated names are predictable and repeatable, creating a perfect attack surface

Hallucination rates for various LLMs. Source: arxiv.org
As BleepingComputer notes in its report on the research:
"Overall, 58% of hallucinated packages were repeated more than once across ten runs, indicating that a majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts."
In other words, AI isn't just randomly making up package names – it's consistently hallucinating the same fake packages. And that's where the danger lies.

Overview of the supply chain risk. Source: arxiv.org
Why This Matters (Even If You're Not a Developer)
This isn't just a technical curiosity for your dev team to worry about. It has broader implications for every business using AI:
➡️ AI hallucinations have real-world consequences: We've moved beyond embarrassing chatbot mistakes to AI errors that can create actual security vulnerabilities. This is what happens when we rush to implement AI without proper guardrails.
➡️ The "AI efficiency boost" might come with hidden costs: All those productivity gains from AI-assisted coding could be offset by new security risks and incident response costs. The ROI calculation just got more complicated.
➡️ Your security posture needs to evolve: Traditional security approaches don't account for AI-specific vulnerabilities. Companies need new processes to verify AI-generated recommendations before implementation.
➡️ The attack surface is expanding in unexpected ways: As AI becomes integrated into more business processes, we're creating novel attack vectors that security teams haven't prepared for.
What Makes This So Dangerous
Here's where the real danger lies: attackers can register packages under these hallucinated names in public repositories and fill them with malicious code. The next time an AI tool suggests that name and a developer installs it, they pull down the attacker's package instead of an error message – unknowingly introducing malware into your systems and potentially compromising your entire software supply chain.
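To see why a single careless install is enough, here is a deliberately harmless sketch of what an attacker could publish under a hallucinated name. The package name below is a made-up placeholder, and the behavior shown applies to Python packages distributed as source: the installer executes the package's setup script on the developer's machine before any of your code ever runs.

```python
# setup.py of a hypothetical package registered under a hallucinated name.
# For source distributions, pip executes this file during "pip install",
# before the package is ever imported. A real attacker would replace the
# print below with credential theft, a backdoor, or a downloader.
from setuptools import setup

print("This code runs at install time, with the installing user's permissions.")

setup(
    name="hypothetical-hallucinated-name",  # placeholder, not a real package
    version="0.0.1",
    description="Illustration only",
    py_modules=[],
)
```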
The research reveals some particularly concerning patterns:
✅ Predictability: 43% of hallucinated package names were repeated consistently when the same prompts were re-run, making them predictable targets for attackers.
✅ Plausibility: 38% of hallucinated names were inspired by real packages, making them seem legitimate to developers.
✅ Persistence: 58% of hallucinated names were repeated more than once across ten runs, showing this isn't random noise but systematic behavior.
❌ Widespread vulnerability: This affects all AI coding tools, from open-source models (which are worse) to commercial offerings like ChatGPT.
❌ False sense of security: Developers tend to trust AI recommendations, especially when they "sound right" – creating a perfect social engineering scenario.
How to Protect Your Business
While there's no evidence attackers are actively exploiting this vulnerability yet, it's only a matter of time. Here's how to stay ahead:
Trust but verify: Never assume a package mentioned in AI-generated code actually exists or is safe (a minimal verification sketch follows this list).
Use dependency scanners and lockfiles: Pin packages to known, trusted versions.
Lower the temperature: If you're using AI coding tools, reduce the "temperature" settings to minimize hallucinations.
Sandbox testing: Always test AI-generated code in isolated environments before production deployment.
Update security training: Ensure your development team understands this new risk vector.
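As a starting point for the "trust but verify" tip above, here is a minimal sketch, assuming the Python/PyPI ecosystem, of a pre-install sanity check. It queries PyPI's public JSON endpoint to confirm that a package name actually exists and surfaces basic metadata for a human to review; the structure is illustrative, not a standard tool.

```python
"""Pre-install sanity check for package names found in AI-generated code (sketch)."""
import json
import sys
import urllib.error
import urllib.request


def check_package(name: str) -> None:
    url = f"https://pypi.org/pypi/{name}/json"  # PyPI's public metadata endpoint
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            info = json.load(response)["info"]
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"'{name}' is NOT on PyPI - possibly a hallucinated package name.")
            return
        raise
    # The name exists; a human should still eyeball the basics before trusting it.
    print(f"'{name}' exists: latest version {info['version']}")
    print(f"  summary:  {info.get('summary') or 'n/a'}")
    print(f"  homepage: {info.get('home_page') or 'n/a'}")


if __name__ == "__main__":
    for package_name in sys.argv[1:]:
        check_package(package_name)
```

Existence alone proves little, since attackers can register plausible names in advance, so a check like this complements rather than replaces the lockfile and pinning advice above (for example, a requirements file with exact versions and hashes installed via pip's --require-hashes mode).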
💡 Final Thought
The slopsquatting vulnerability perfectly illustrates the double-edged sword of AI adoption. The same tools making your developers more productive are introducing novel security risks that traditional safeguards weren't designed to catch.
This isn't about abandoning AI-assisted coding – it's about implementing it with eyes wide open. The companies that will thrive in the AI era aren't those that adopt fastest, but those that adopt most thoughtfully.
As executives, we need to ensure our technical teams have both the freedom to leverage AI's benefits and the guardrails to do so securely. Otherwise, we might find ourselves explaining to the board why we let a hallucinating robot invite hackers into our systems.
💡 We are out of tokens for this week's Context Window!
Thanks for reading!
What security measures does your organization have in place for AI-assisted development? Reply to this email or drop a comment on X (@hashisiva).
Follow the author: X at @hashisiva | LinkedIn