Over the next 18 to 24 months, as the EU AI Act comes into force, AI will stop being a conference topic and become a stress test of how your organisation thinks, decides, and communicates.

“The world’s first comprehensive AI rulebook” did not appear from nowhere. It grew out of GDPR, product safety rules, and unease about algorithms deciding who gets a job, a loan, or access to services. Its basic message is clear: when AI can affect rights or safety, “trust us” is not enough.

The Act sorts AI into buckets. Some uses are banned. High-risk systems come with strict obligations, including risk management, quality data, human oversight, documentation, and monitoring. Not values on a slide, but evidence you can put on the table.

That is where communication becomes the real test.

Boards already juggle CSRD, cyber, succession, and geopolitics. Chief communication officers are already drowning in frameworks. Now they face a blunt question: “Show us where AI is making consequential decisions in your organisation, and on what basis you judged those systems acceptable.”

That is not a legal memo question. It is a culture and communication question.

If you cannot say where AI sits in your HR tools, credit processes, or security stack, you are not waiting for guidance. You are behind. If you cannot surface the last three AI incidents or near misses and what changed as a result, you do not have governance. You have theatre.

So, what does “ready” look like in terms of communication?

  • A living map of where AI shapes decisions that affect livelihoods, safety, or access to services, written in plain language.
  • Named owners for those decisions, not just for the models.
  • Simple controls that actually run: data checks, human review, clear escalation paths.
  • Trade-offs written down in human terms: the error rate you accept, why you accept it, and who signed.
  • A board that asks “Show us the evidence” instead of “Show us the deck.”

Underneath all of this sits one bigger change. If people in your organisation do not feel safe saying “We do not know yet, and here is how we will find out”, no regulation will make you good at governing opaque systems.

In Brussels, the EU AI Act is sold as a technical rulebook. Inside organisations, it is quickly becoming a test of how honestly you communicate about systems you do not fully control.

If you cannot answer three questions in clear language, you do not have governance; you have theatre: Where is AI really used? Who could be harmed if it fails? And why do you believe your story about control is true?

For communication leaders, readiness is not a new key message. It is a new habit: mapping AI-driven decisions, naming owners, documenting trade-offs in human language, and making it safe for people to say “We do not know yet, and here is how we will find out.”

When someone asks, “Why did you trust this system?”, the EU AI Act will care less about how fluent you sound on AI and far more about whether you can show your work instead of just your slides.
