TL;DR:
The problem: 47% of enterprise AI users have made major business decisions based on hallucinated content. But hallucination isn't a tech problem waiting to be patched. It's a judgment problem—and your team has stopped exercising judgment.
The fix: AI Fluency. Not prompt skills. Not technical training. The executive capability to balance speed, responsibility, and value extraction—while building a culture where people actually challenge what AI tells them.
The litmus test: Ask your team one question: "What did you disagree with?" If they can't answer, they weren't thinking. They were transcribing.
INTRODUCTION
Your Team Has Stopped Questioning AI
47% of enterprise AI users have made major business decisions based on content that was completely fabricated. Not misinterpreted. Not slightly off. Fabricated by a confident algorithm that doesn't know the difference between fact and plausible-sounding fiction.
You might assume the fix is better training—teach people to use the tools properly. But a Microsoft and Carnegie Mellon study found the opposite: higher confidence in AI's abilities correlates with less critical thinking. The more people trust it, the less they question it.
Hallucination rates aren't getting better anytime soon, either. Stanford researchers found that general-purpose LLMs hallucinate 69-88% of the time on legal queries. Even specialized legal AI tools—the ones marketed as "hallucination-free"—get it wrong 17-33% of the time.
So if you're waiting for the technology to fix itself, you'll be waiting a while.
LEADERSHIP CHALLENGE
This Is a Leadership Problem
I'm seeing a pattern across organizations: middle managers and knowledge workers have gotten remarkably good at getting AI to produce polished outputs. Impressive-sounding strategy recommendations. Clean analysis. Well-structured reports.
The problem is they've stopped pressure-testing any of it.
They don't challenge the AI's logic. They don't cross-reference against their own expertise. They don't ask: "Wait, does that actually make sense given what I know about this business?"
They prompt. They accept. They present it to you.
And because it sounds good—because AI is nothing if not confident and articulate—it sails through.
This isn't a technology gap. It's a judgment gap. And judgment is a leadership responsibility.
You can't outsource it to IT. You can't wait for vendors to solve it. You can't hire a Chief AI Officer and call it done. The only fix is building the organizational capability to use AI well—which means building the habit of challenging it.
That's what AI Fluency is.
AI FLUENCY
What AI Fluency Actually Is
AI Fluency isn't about turning executives into data scientists. It's not about prompt engineering or knowing the difference between GPT-4 and Claude.
It's the capability to hold three things in balance:
1. Moving fast. AI enables speed. Organizations that can't deploy quickly get left behind.
2. Moving responsibly. The risks are real. 486 legal cases worldwide now involve AI hallucinations. Courts are escalating sanctions. Regulators are paying attention.
3. Extracting real value. Not every AI initiative is worth pursuing. Fluent leaders know how to evaluate use cases, build governance frameworks, and measure whether they're getting ROI—or just generating activity.
Most organizations optimize for one of these. Maybe two. The leaders who'll navigate the next five years are the ones who refuse to trade them off.
And underneath all three sits one foundational skill: knowing when to push back.
Speaking of AI Fluency, check out our partner HubSpot. They'll show you how to use ChatGPT properly.
FROM OUR PARTNERS
Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate your productivity and creativity so you can get more done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
DO YOU DISAGREE?
One Question That Reveals Everything
When someone presents work that involved AI—a strategy recommendation, a market analysis, a proposal—ask them:
"What did you disagree with?"
Not "did you use AI?" That's the wrong question. The answer is obviously yes, and it should be yes.
The question is: what did the AI get wrong? What did you push back on? What did you change based on your expertise, your knowledge of this business, your judgment about what actually works here?
If they can't answer that, they weren't collaborating with AI. They were taking dictation.
The value of AI isn't that it does your thinking for you. It's that it accelerates your thinking—a starting point, a first draft, a framework to react against. But that only works if you actually react.
An AI output with zero human disagreement isn't a finished product. It's a rough draft that nobody edited.
BUILDING AI FLUENCY
Building AI Fluency in Your Organization
Two things need to happen.
First, create transparency around AI use. Your people should feel comfortable telling you when they've used AI. Not because you want to police it—because you need to know which outputs require the "what did you disagree with" conversation. Shadow AI is already a problem (78% of AI users bring unsanctioned tools to work). Driving it underground makes it worse.
Second, change what you reward. If you only celebrate speed and polish, you'll get speed and polish—regardless of whether the underlying thinking is sound. Start recognizing the people who challenge AI outputs, who catch errors, who add genuine expertise on top of automated first drafts.
Then go deeper:
Audit your own fluency. Do you understand the foundations of AI—not just generative AI, but the broader landscape? Do you know what responsible AI means in practice? Can you spot the difference between real value and innovation theater?
Assess your team. How many of your people could answer "what did you disagree with?" What's your governance framework—and do you actually enforce it?
Build the capability. AI Fluency isn't a one-time training. It's an ongoing organizational muscle. It needs frameworks, measurement, and executive sponsorship to stick.
If you want help building this—that's what we do at Digiform.
FINAL THOUGHTS
A tool is only as good as the judgment applied to it. Right now, we have a workforce that's learned to operate the AI machinery without learning to question its outputs.
That's not an AI problem. That's a leadership problem. And the fix isn't better technology—it's better organizational capability.
The executives who get this right won't be the ones who adopted AI fastest. They'll be the ones who built the human judgment to use it well.
Start with one question: What did you disagree with?
We are out of tokens for this week's context window!✋
Keep reading and learning, and LEAD the AI Revolution 💪
Hashi & The Context Window Team!
Follow Hashi:
X at @hashisiva | LinkedIn




