Warning: AI’s GDPR Moment Has Begun
If you’re telling yourself advertising isn’t high risk under the EU AI Act, you’re missing the part that matters.
I still remember the week before GDPR went live.
It was chaos.
Not strategic chaos. Not visionary transformation chaos. The bad kind.
The kind where Janet in legal sent a panicked email at 9.47pm asking for a full data inventory by morning.
The kind where Tayo in IT suddenly found themselves in meetings they did not know existed.
The kind where Jess rewrote privacy notices overnight while pretending everything was fine on client calls.
I watched grown adults managing eight figure budgets crumble because they had no idea where half the data in their own systems came from.
Databases shrank overnight. CRM lists collapsed. Performance dipped. Everyone blamed marketing. Marketing blamed legal. Legal blamed vendors. Vendors blamed interpretation.
People did not sleep.
Not because GDPR was impossible.
Because no one took it seriously until it was real.
The stress was not caused by the law.
It was caused by the lack of preparation.
Now swap data for AI.
That is where we are.
The EU AI Act Is Already Here
The EU AI Act entered into force in August 2024.
Bans are already live.
Oversight structures are forming.
Guidance is evolving.
August 2026 remains the enforcement inflection point for the bulk of operational obligations.
And yet the dominant narrative in advertising is comfort.
“Advertising isn’t high risk.”
Technically true.
Strategically irrelevant.
Because what caught people out with GDPR was not whether they were a data broker.
It was whether they understood their own systems.
The AI Act is doing the same thing, just pointed at decision making instead of data storage.
The Parts Advertising Cannot Ignore
Advertising mostly sits in limited or minimal risk tiers.
That is misleading.
The real exposure comes from three areas:
Prohibited practices
Transparency obligations
Downstream obligations from general purpose AI models
Let’s break that down.
1. The Bans Are Already Live
The prohibited practices regime is in force.
AI systems are banned if they:
Use subliminal or manipulative techniques that materially distort behaviour and cause harm
Exploit vulnerabilities linked to age, disability or economic situation
If you work in:
Performance marketing
Gaming or gambling
High cost credit
In app purchases targeting young audiences
Aggressive behavioural optimisation
you should be paying attention.
Because the line between “smart personalisation” and “manipulative distortion” will not be drawn by your growth team.
It will be drawn by regulators.
And they will not use your terminology.
2. Transparency Is Coming for Creative
From August 2026, transparency obligations hit most operational use cases.
That includes:
Informing users when they are interacting with AI systems
Labelling AI generated or manipulated content
Disclosing deepfake style content that resembles real people or events
This is not a cosmetic compliance tweak.
It affects:
Tool selection
Vendor due diligence
Creative approvals
Platform uploads
Workflow design
QA processes
Legal sign off
If your team is using generative tools that cannot provide provenance metadata or watermarking, that becomes your compliance issue.
Not just a production shortcut.
3. The Digital Omnibus Is Not a Rollback
The Commission has proposed adjustments through the Digital Omnibus package.
Yes, there are discussions around:
Simplifying certain obligations
Centralising oversight under the EU AI Office
Extending some timelines for systems already on the market
But simplification is not immunity.
This is the EU adjusting the runway, not cancelling the landing.
The core structure remains.
Enforcement powers remain.
The direction of travel remains.
A short extension does not equal long term safety.
It equals the temptation to delay again.
And we have seen how that ends.
4. The Role Problem Nobody Wants to Own
The AI Act distinguishes between:
Providers
Deployers
Distributors
Importers
In AdTech supply chains, this will be messy.
Adtech vendors offering AI driven optimisation tools may be providers.
Brands and agencies using them are deployers.
Resellers may sit somewhere in between.
When something goes wrong, everyone will suddenly care about that classification.
If you cannot explain your role, your obligations, and your controls, you are back in a 2018 GDPR meeting trying to reverse engineer your own stack.
What To Do Now
Not a workshop.
Not a slide deck.
Actual operational work.
Agencies
Map every AI tool used across creative, media, performance and CRM
Identify where outputs touch EU audiences or regulated sectors
Review behavioural optimisation tactics for manipulation or vulnerability exposure
Define how AI generated creative and chatbot interactions will be labelled
Build a formal escalation process for sensitive AI use cases
Stop assuming vendors carry all liability
Agencies sit in the middle.
That means exposure from both sides.
Vendors and AdTech Platforms
Clarify whether you are acting as provider, deployer or both
Prepare documentation explaining system functionality and risk controls
Ensure outputs can support AI marking and disclosure requirements
Review optimisation features for proximity to prohibited practices
Update contracts to reflect AI Act obligations clearly
Vendor opacity will not survive this regime.
Brands and Advertisers
Build a formal inventory of AI systems used in marketing
Pressure test recruitment, credit, eligibility and sensitive targeting use cases
Design a public disclosure strategy for chatbots, avatars and synthetic creative
Tighten procurement requirements for agencies and vendors
Train marketing leadership on AI risk exposure
Brands will absorb reputational damage first.
Even if the tech came from somewhere else.
Why This Feels Familiar
GDPR forced the industry to understand where data flows.
The AI Act forces the industry to understand how automated decisions are made and how synthetic content is presented.
It is not anti innovation.
It is pro accountability.
The panic last time did not come from the regulation.
It came from the realisation that people had been operating complex systems without governance.
“Advertising is not high risk, but your workflows are about to become high accountability.”
“The AI Act is not waiting for your vendor roadmap.”
“A short extension is not extra time. It is extra denial.”
The next time someone says, “We’re fine, advertising isn’t high risk,” remember the look on people’s faces in 2018 when they realised they were not fine at all.
The law did not cause the panic.
The lack of preparation did.