TL;DR:

What it is: The EU AI Act is the world's first comprehensive AI regulation. It entered into force in August 2024 and its major enforcement deadline hits in four months — August 2, 2026.

Why it's not just a tech company problem: The regulation applies to any organization that uses AI to make decisions — not just those that build it. HR screening tools, credit scoring, customer service bots, logistics optimization software — if AI touches your operations in ways that affect people, you're in scope.

The catch nobody's talking about: The Act has extraterritorial reach. If your company's AI systems produce outputs that affect EU residents — even if you're headquartered in Miami and never set foot in Brussels — the regulation applies to you.

STAT WORTH SHARING

Fines under the EU AI Act reach up to €35 million or 7% of global annual revenue — whichever is higher. For a manufacturing company with $500 million in global revenue, 7% works out to $35 million on the table. The majority of rules come into force August 2, 2026.

If someone on your leadership team needs to see this, forward it their way.
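That headline fine is just a max of two numbers. A minimal sketch, with an illustrative function name and example revenues (the €35M / 7% figures come from the Act's penalty provisions; everything else here is assumption):

```python
# Maximum-fine formula for the most serious EU AI Act violations:
# EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
FLAT_CAP_EUR = 35_000_000

def max_fine_eur(global_annual_revenue_eur: int) -> int:
    # Integer arithmetic keeps the 7% calculation exact for planning math
    return max(FLAT_CAP_EUR, global_annual_revenue_eur * 7 // 100)

# At $500M in revenue (treating the figures as same-currency for simplicity),
# 7% equals the flat cap; above that, the revenue share dominates.
print(max_fine_eur(500_000_000))    # flat cap and 7% coincide at 35M
print(max_fine_eur(1_000_000_000))  # 7% share (70M) now exceeds the cap
```

The takeaway: the €35M figure is a floor on the maximum, not a ceiling — for large companies, exposure scales with revenue.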

Where Things Stand Right Now

The EU AI Act entered into force on August 1, 2024. It's been rolling out in phases ever since, and most of it has been quietly happening in the background while executives focused on other things.

Here's where things stand right now:

February 2025 — Already happened. The Act banned a set of AI practices outright: manipulative AI systems designed to exploit psychological vulnerabilities, social scoring systems, and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). If you have any of those in your stack, you've been non-compliant for over a year.

August 2025 — Already happened. Rules for general-purpose AI models (the foundation models behind ChatGPT, Claude, and Gemini) became enforceable: documentation requirements, transparency obligations, copyright compliance for training data. Those obligations bind model providers first, but any company embedding these models in its products depends on that documentation for its own compliance.

August 2, 2026 — Four months away. This is the big one. The majority of the Act's rules come into force. High-risk AI systems — a category that's broader than most people expect — require conformity assessments, technical documentation, human oversight mechanisms, and EU database registration before they can operate.

August 2027 — Full scope applies to everyone, including AI embedded in regulated products like medical devices, automotive systems, and industrial machinery.

The reason this matters now and not in August: legal and compliance teams with experience in this space estimate a realistic compliance runway of 8 to 14 months once you factor in system inventory, gap analysis, technical modifications, conformity assessment scheduling, and documentation. Counting back from August 2, 2026, that puts the starting line at or before today; every month of delay compresses steps that don't compress well.
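The back-dating above is simple calendar arithmetic. A minimal sketch (the helper and its crude month math are mine, not anyone's official planning tool):

```python
from datetime import date

DEADLINE = date(2026, 8, 2)  # most EU AI Act rules become enforceable

def months_before(d: date, months: int) -> date:
    # Crude month subtraction, adequate for planning estimates;
    # clamps the day to 28 to dodge short-month edge cases.
    total = d.year * 12 + (d.month - 1) - months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

# An 8-to-14-month runway means work should have started between
# these two dates to finish comfortably before the deadline.
earliest_start = months_before(DEADLINE, 14)  # June 2025
latest_start = months_before(DEADLINE, 8)     # December 2025
print(earliest_start, latest_start)
```

Both dates are already in the past, which is the whole point: the remaining schedule only gets tighter from here.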

"We're Not in Europe" Is Not a Defense

This is the part that surprises most executives in traditional industries.

The EU AI Act has explicit extraterritorial reach. It applies to any company — regardless of where it's headquartered — if its AI systems or their outputs affect people in the EU. Physical presence is irrelevant. The trigger is the connection to the EU, not your address.

In plain terms: if you use an AI hiring tool to screen applications from candidates in Europe, you're covered. If your AI-powered pricing model affects EU customers, you're covered. If your SaaS platform has EU users — even through a reseller you didn't know was selling there — you're likely covered. A US financial firm processing loan applications from EU nationals through an AI credit model is covered.

Legal experts at Morgan Lewis put it plainly: the Act will apply to companies that deploy AI systems used in the EU and provide outputs from those systems, even with no physical presence in Europe.

The regulation also works through the supply chain. If you're a US vendor whose AI tool is embedded in a product sold by an EU company, you share compliance responsibility with them. Your customers in Europe are going to start asking for documentation you may not have.

This is not theoretical exposure. These rules are already partially active, enforcement infrastructure is operational, and the major deadline is four months out.

The Four Risk Categories — A Quick Map

The Act organizes every AI system into one of four risk tiers. Where your tools land on that map determines what you're required to do. Here's the lay of the land — we'll be going deep on each category in upcoming issues.

Unacceptable Risk — Banned outright. These are AI systems the EU considers too dangerous to exist in the market at all. Social scoring systems that rate citizens based on behavior, AI designed to exploit psychological vulnerabilities, real-time biometric surveillance in public spaces. If anything in your stack touches these areas, you've been non-compliant since February 2025. We'll cover this category — and the enforcement cases already emerging — in a future issue.

High Risk — The heaviest compliance burden. This is where most traditional businesses get surprised. High-risk AI includes systems used in hiring and HR decisions, credit and financial assessments, education access, critical infrastructure management, healthcare, and law enforcement. The full list is broader than most executives expect — and we'll dedicate an entire issue to mapping it properly, with industry-by-industry examples.

Limited Risk — Transparency obligations. AI systems in this category don't require the heavy documentation and conformity assessments of high-risk tools, but they do require disclosure. If your customers or employees are interacting with a chatbot, they need to know it's a chatbot. If content is AI-generated, it needs to be labeled. Simple in principle, inconsistently practiced. More on this in a future issue.

Minimal Risk — No specific obligations. Spell checkers, spam filters, basic recommendation engines. The vast majority of everyday AI tools fall here. No compliance burden beyond general good practice.

The instinct in most organizations is to assume "we don't build AI, so this doesn't apply." That instinct is wrong. Deployers — companies that use AI systems rather than build them — have their own independent set of obligations under the Act. Your vendor's compliance doesn't cover yours.

Three Things to Do Before August

You're not going to solve EU AI Act compliance by reading a newsletter (sorry!). What you can do right now is avoid the most common mistake, which is doing nothing while assuming this is someone else's problem.

1. Do an AI inventory before you do anything else. You cannot manage what you can't see. Most organizations deploying AI at any meaningful scale don't have a complete picture of what's actually running. HR tools, credit scoring models, customer service automation, logistics software, vendor-supplied analytics — all of it needs to be mapped before you can assess your exposure. This is the step that consistently takes longer than expected. Start there.

2. Treat your vendors as your problem, not their problem. If an AI system your company deploys in EU markets doesn't comply, you share the liability. Your contracts with AI vendors need to be updated to require disclosure when AI is being used, documentation of compliance status, and notification if anything changes. Vendors who won't provide this are a risk you're currently carrying without knowing it.

3. Assign someone ownership before the deadline does it for you. EU AI Act compliance doesn't fit cleanly in legal, IT, operations, or HR. It touches all of them. The organizations that are furthest ahead right now are the ones that named a decision-maker — a person, not a committee — and gave them actual authority to make changes. If nobody owns it, the deadline will force the issue at the worst possible time.
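The inventory in step 1 doesn't need tooling to get started — a structured record per system is enough to begin the mapping. A minimal sketch; the field names, risk-tier labels, and flagging rule are illustrative assumptions, not a regulatory template:

```python
from dataclasses import dataclass

# Illustrative AI-system inventory record (step 1 above).
# Fields are assumptions for sketching, not an official schema.
@dataclass
class AISystemRecord:
    name: str                  # e.g. "resume screening tool"
    vendor: str                # internal build or third-party supplier
    business_use: str          # the decision or process it supports
    affects_eu_people: bool    # any EU candidates, customers, or users?
    risk_tier_guess: str       # "unacceptable" | "high" | "limited" | "minimal"
    owner: str = "unassigned"  # the single accountable person (step 3 above)

def needs_attention(r: AISystemRecord) -> bool:
    # Flag anything plausibly in scope of the Act that is above
    # minimal risk or still has no accountable owner.
    return r.affects_eu_people and (
        r.risk_tier_guess != "minimal" or r.owner == "unassigned"
    )

inventory = [
    AISystemRecord("HR resume screener", "VendorX", "hiring", True, "high"),
    AISystemRecord("spam filter", "internal", "email hygiene", True, "minimal", owner="IT"),
]
print([r.name for r in inventory if needs_attention(r)])
```

Even a spreadsheet with these six columns puts you ahead of most organizations; the point is having one list that legal, IT, and the business owner are all looking at.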

Where is your organization right now on this? Even "we haven't started" or "we didn't know this applied to us" is useful — I'm mapping how traditional industries are actually approaching EU AI Act compliance, and I'm planning a series of pieces that go much deeper on each part of this.

Hit reply with a one-liner!

Final Thoughts

This is the first piece in a series on the EU AI Act. Over the coming weeks and months, we're going to go deeper — on how the risk classification system actually works in practice, what compliance looks like for specific industries, how the regulation is already being enforced, and what US companies need to know before their next contract negotiation with a European customer.

The goal isn't to make you a compliance expert. It's to make sure you're not blindsided by something the legal team finds out about six weeks too late.

Know someone making AI decisions at a traditional company who should be reading this? Forward it their way.

We are out of tokens for this week's context window!

- Hashi
