TL;DR: The AI training obligation is laid down in Article 4 of the EU AI Act: every organization that uses AI, from ChatGPT for emails to complex software, is legally required to train employees. You don't need to set up a formal training program, but you must be able to demonstrate that you've done something. Enforcement starts in August 2026. If you haven't done anything yet, you're already behind.
Your colleague uses ChatGPT. And you are responsible.
Somewhere in your organization, someone is typing a prompt into ChatGPT right now. A summary of a meeting report. A first draft of a customer email. A response to a complaint. Perhaps twenty people do this daily, without your knowledge and without having thought about what could go wrong.
That's not just a risk to the quality of your communications. It has now become a legal problem.
Since February 2, 2025, Article 4 of the EU AI Act has been in effect. This article requires every organization that uses AI to ensure that employees have sufficient knowledge to use AI responsibly. Not just the engineers. Not just the people building AI systems. Everyone who works with it.
Yes, that includes the employee who only uses ChatGPT to write an email.
Large or small, it doesn't matter
A misconception I regularly hear: "That's for big tech companies, right? We're just an SME." That's not correct.
The AI Act applies to both providers and deployers of AI systems. The obligation covers not only internal employees, but also people who use AI on behalf of the organization. The size of your company or your revenue plays no role in the obligation to train employees. If you use AI, the law applies.
There is a nuance. In November 2025, the European Commission proposed to lighten the Article 4 requirements for smaller companies, through the so-called Digital Omnibus initiative. But that is still a proposal. Providers and deployers must still comply with the original obligation from Article 4 of the AI Act. Until something changes, the current law applies.
Even with seemingly simple AI use — such as using ChatGPT for writing texts or translating — the obligation applies. Employees must understand the risks, such as unreliable or incorrect output, lack of source references, and potential bias.
What exactly do you need to do?
This is where it gets interesting. The law does not prescribe a fixed format. You don't need to hire an expensive external training firm, and there is no mandatory certificate. What the law does require is that you demonstrably do something.
The AI Act identifies four components that an AI literacy program should minimally contain: a general understanding of AI within the organization, insight into the organization's role, awareness of risk levels, and alignment with the profile of the users.
That last point matters. The plan must match who the employee is and which system they use. The receptionist who occasionally has ChatGPT rewrite an email needs different training than the analyst working with an AI tool that supports credit decisions. One generic course for everyone is not sufficient if the risks vary significantly by role.
In practical terms, consider:
- A short awareness session on how AI works and where it can go wrong
- An online course through a platform like Coursera, LinkedIn Learning, or a tool from the Dutch Data Protection Authority
- An internal document with AI usage guidelines, including a brief discussion with the team
- An overview of which AI tools are used and by whom
The key point is that you can demonstrate it. Who completed which training? When? Document it. It doesn't need to be complicated. A spreadsheet with names, dates, and completed modules is already a start.
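For illustration, here is a minimal sketch of what such a record could look like in code. The file name, column layout, and example entry are invented for this sketch; a plain Excel sheet with the same fields works just as well.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical file name and columns: who, which role, which module, when.
# Any spreadsheet with the same fields satisfies the "demonstrable" idea.
LOG_FILE = Path("ai_training_log.csv")
FIELDS = ["name", "role", "module", "completed_on"]

def record_completion(name: str, role: str, module: str) -> None:
    """Append one completed training module to the log."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()  # write the header row once
        writer.writerow({
            "name": name,
            "role": role,
            "module": module,
            "completed_on": date.today().isoformat(),
        })

# Invented example entry
record_completion("J. Jansen", "reception", "AI awareness basics")
```

Whether you keep this as a script, a shared sheet, or an HR system export doesn't matter. What counts is that the record exists and stays up to date.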
Key deadlines you need to know
The EU AI Act officially entered into force on August 1, 2024. The first obligations, including AI literacy, took effect on February 2, 2025.
This means you are already behind if you haven't done anything yet.
From August 3, 2026, national supervisory authorities will oversee compliance with Article 4. Fines or sanctions are possible for non-compliance, especially if damage occurs that is partly the result of insufficient knowledge among users.
For the more stringent category of high-risk AI systems, such as tools used in selection processes, credit assessment, or comparable decisions, additional requirements also take effect in August 2026. But those requirements do not exempt you from the basic obligation that is already in force.
I often compare this to the GDPR in 2018. Back then, everyone initially thought: does this apply to us? And it turned out that it applied to everyone, that enforcement really began, and that organizations that had taken it seriously slept much more soundly. The same story is playing out now.
What you document now protects you later
In addition to the training itself, the law requires documentation. Not as a bureaucratic exercise, but because in the event of an audit or incident, you must be able to show that your organization handles AI deliberately and with awareness of the risks.
Think about:
- An inventory of which AI systems you use and for what purpose
- Per system: who works with it and what risk is attached
- The training plan you have established, however informal
- A record of who was trained and when
Together, this covers policy, roles and responsibilities, a training plan, attendance records or certificates, periodic reviews, and an audit trail.
None of this needs to be in one large document. But it does need to be somewhere.
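As a sketch of what such an inventory could look like in structured form (the tools, user groups, and risk labels below are invented examples; classifying your own systems is your own assessment):

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    tool: str
    purpose: str
    users: list[str] = field(default_factory=list)
    risk: str = "minimal"  # e.g. "minimal", "limited", "high"

# Invented example entries, one per system in use
inventory = [
    AISystem("ChatGPT", "drafting and rewriting emails",
             users=["reception", "sales"], risk="minimal"),
    AISystem("CreditScorer", "supporting credit decisions",
             users=["analysts"], risk="high"),
]

# A high-risk entry immediately signals where deeper,
# role-specific training is needed.
for system in inventory:
    print(f"{system.tool}: {system.risk} risk, "
          f"used by {', '.join(system.users)}")
```

The point is not the tooling but the structure: one entry per system, with its users and a risk label, gives you both the overview and a starting point for role-specific training.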
Where do you start?
The Dutch Data Protection Authority has published a guide: Getting started with AI literacy. That's a good starting point, free and practical. Beyond that, there are plenty of online platforms with courses that can be completed in an afternoon.
But start with the basics. Map out three things: which AI tools are used in your organization, by whom, and for what tasks. That overview immediately shows you where the risks lie and what training is needed.
The good news is that you don't need to reinvent the wheel. A brief session with your team, an online course whose completion you record, and a simple list of who completed what. That's already a solid foundation.
Don't wait any longer
It always happens the same way. The law is there, the deadlines are known, but most organizations wait until the quarter before enforcement begins. By then it's rushed, expensive, and chaotic.
You're already using AI. Almost everyone is at this point, consciously or not. Make sure you can also demonstrate that you use it responsibly. Not for a regulator, but because it's smart. Employees who understand what AI can and cannot do make better decisions. They don't blindly trust a summary that might be half-fabricated. They know when to double-check something.
That is ultimately what this law is intended for. And it's simply good policy.
