Het Financieel Dagblad published a remarkable story last week about NN Group. The base salary of the executive board rose by 9%. The bonus of the CEO and CFO is now determined to a greater extent by progress on digital strategy — simplifying IT systems and the deployment of AI. A justified choice. Because 42% of all insurance policies are already taken out online. 77% of all customer interaction takes place digitally. NN counts 236 active AI agents performing tasks independently.
And then came the sentence that stayed with me.
Not one of the seven supervisory board members is described as a 'tech and cyber expert.'
Four of the eight board members have, according to the annual report, "sufficient knowledge to make an informed decision." The rest have undertaken supplementary training. But in the Supervisory Board (RvC) — the body that provides oversight, safeguards strategy and approves executive decisions — deep technological expertise is absent.
That is not a minor detail. That is a risk.
Overseeing something you do not understand
Imagine: an insurer becomes dependent on a new financial instrument for 40% of its revenue, yet no one on the Supervisory Board has ever read a balance sheet. Unthinkable. And yet this is precisely what is happening in Dutch boardrooms right now — only with AI and cybersecurity.
The executive bonus will soon depend on how effectively they deploy AI. The Supervisory Board must assess whether those objectives have been met. But how do you evaluate the quality of an AI strategy if you do not understand how a large language model works? How do you assess the risks of 236 autonomous AI agents if you are unfamiliar with the principles of AI governance?
Overseeing something you do not understand is not oversight. It is relying on good fortune.
Odido: the wake-up call we cannot ignore
As I write this, Odido is still processing the consequences of a large-scale data breach. Millions of customer records leaked. Reputational damage that will take years to repair. Financial damage that wipes out, in a single blow, whatever was saved by underinvesting in cybersecurity.
Odido is not an exception. It is the norm.
In 2024, Dutch companies and organisations were hit by hundreds of serious cyber incidents. Ransomware attacks brought hospitals to a standstill. Phishing campaigns took down government systems. And every time, the same conclusion emerges from investigation reports: the risks were known, but urgency was lacking at board level.
The question is not whether your organisation will become a target. The question is when — and whether you are ready.
A Supervisory Board without cyber expertise is an organisation without a crisis radar. You cannot intervene in time on a risk you do not recognise.
AI governance: the new financial risk management
Twenty years ago, the financial crisis forced us to make financial expertise mandatory in supervisory bodies. The Audit Committee became the norm. CFOs were scrutinised more rigorously. Risk management secured a permanent place at the board table.
We are now on the eve of a comparable shift — only this time for technology.
AI is no longer an IT project. It is business infrastructure. It touches customer communication, claims handling, underwriting policy, fraud detection and pricing. At NN already at scale. In the wider market we see the same movement: from traditional funnels to AI-driven distribution, from call centres to chat and WhatsApp with AI bots as primary customer channels.
And with that shift come new risks that differ from what supervisory board members have traditionally known:
Algorithmic bias. AI models can systematically disadvantage certain customer groups — not through malicious intent, but through the data on which they were trained. Who in the Supervisory Board monitors this?
Model drift. An AI model that performs well today can be making unreliable decisions six months from now without anyone noticing. Who asks the right questions of the executive board?
Autonomous decision-making. NN has 236 AI agents performing tasks independently. Who is accountable when an autonomous agent makes an error with significant financial or legal consequences?
Data leaks through AI integrations. Every API or MCP connection, every external tool called by an AI agent, is a potential leak. The attack surface grows exponentially.
These are not hypothetical risks. These are operational risks of today.
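To make the model-drift risk concrete: the check a supervisory board should ask about can be surprisingly simple. The sketch below (illustrative only, not NN's actual tooling; the data is synthetic) compares the score distribution a model produced at deployment with the distribution it produces today, using the Population Stability Index (PSI), a common drift metric in financial-services model monitoring.

```python
# Illustrative sketch of a model-drift check using the Population Stability
# Index (PSI). All data here is synthetic; thresholds are rules of thumb.
import math
import random

def psi(baseline, current, bins=10):
    """PSI between two samples of model scores in the range [0, 1)."""
    edges = [i / bins for i in range(bins + 1)]
    def frac(sample, lo, hi):
        n = sum(1 for s in sample if lo <= s < hi) or 1  # avoid empty bins
        return n / len(sample)
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        b, c = frac(baseline, lo, hi), frac(current, lo, hi)
        total += (c - b) * math.log(c / b)
    return total

random.seed(0)
at_deployment = [random.betavariate(2, 5) for _ in range(5000)]
today = [random.betavariate(2, 3) for _ in range(5000)]  # shifted population

score = psi(at_deployment, today)
# Rule of thumb: PSI > 0.2 means the population has shifted enough that the
# model's decisions should be re-validated before anyone relies on them.
print(f"PSI = {score:.3f} -> {'re-validate model' if score > 0.2 else 'stable'}")
```

The point for governance is not the arithmetic but the question: a supervisory board member who knows this metric exists can ask the executive board "what is the drift score on our claims model, and who reviews it?" rather than hoping someone does.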
The gap between ambition and governance
A dangerous gap is opening up in Dutch boardrooms.
On one side: executive ambitions focused on digital transformation, AI adoption and data-driven operations. Bonuses linked to these goals. Investment programmes worth hundreds of millions of euros — such as NN's €450 million for AI and automation.
On the other side: supervisory boards composed on the basis of criteria suited to the business environment of ten years ago. Financial expertise. Legal knowledge. Sector experience. Network. All valuable. But incomplete for the world of today.
The result? Board decisions on AI strategy, cloud migration and cybersecurity are approved by people who cannot grasp the deeper implications. Not because they are not intelligent. But because the knowledge is not there.
That is not a reproach. That is a systemic failure.
More than upskilling: structural embedding
"We are undertaking supplementary training," said NN in the annual report about supervisory board members without a formal technology background.
Upskilling is commendable. But it is not enough.
There is a difference between a supervisory board member who has had one day of AI training and a supervisory board member who has ten years of experience building, scaling and securing technology platforms. The first provides awareness. The second provides judgement.
What we need is not more courses. We need the structural embedding of technological expertise in supervisory bodies. Concretely:
1. Mandatory technology expertise on the Supervisory Board. Just as an Audit Committee requires financial expertise, a Technology & AI Committee must be established, including at least one member with a proven background in AI, data or cybersecurity.
2. An independent CISO voice at board level. Cybersecurity must not live only with the CTO. The risk profile of a data breach or an AI incident is significant enough to justify direct reporting lines to the Supervisory Board.
3. Periodic AI audits as a supervisory instrument. Just as financial audits are the standard, AI audits — testing model behaviour, data quality, bias and safety — must become a fixed element of the governance cycle.
4. Diversity in technological backgrounds. Not only Big Tech veterans. Also ethicists who understand AI governance. Data privacy experts. People who see the human being behind the technology.
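What would one building block of such a periodic AI audit look like in practice? A minimal sketch (hypothetical data and thresholds; a real audit threshold is a policy decision, not a code comment) is a demographic-parity check: does the model's approval rate differ materially between customer groups?

```python
# Illustrative sketch of one AI-audit check: the demographic parity gap,
# i.e. the largest difference in approval rates between customer groups.
# The sample data and the 10% threshold below are hypothetical.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest pairwise difference in approval rates across groups."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: 1 = application approved, 0 = rejected.
sample = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% approved
}

gap, rates = parity_gap(sample)
THRESHOLD = 0.10  # example tolerance; setting it is a governance decision
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} approved")
print("audit finding:", "escalate" if gap > THRESHOLD else "within tolerance")
```

A Technology & AI Committee does not need to write this code. It needs to know that such checks exist, demand that they run every quarter, and read the findings, exactly as an Audit Committee reads the auditor's report.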
The future is already at the door
In the market we see it every day. The shift from telephone to chat is already under way. WhatsApp is becoming a full-service channel. AI assistants answer questions that a year ago would still have gone to a human agent. And we are only at the beginning.
The next wave is distribution via AI. Not via a website. Not via an app. But via the AI assistant the customer already uses daily. Insurance taken out in a conversation with an AI, via an MCP server that compares and advises on products in real time. That is not science fiction. That is the infrastructure being built right now.
And in that world, the quality of your AI governance is not an HR question. It is a question of survival.
Organisations that oversee their AI strategy with supervisory board members who understand the technology will course-correct faster, make fewer mistakes, and build more trust with customers, regulators and investors.
Organisations that do not will walk blindly into an increasingly complex world — with all the risks that entails.
The Call
Dear chairmen of supervisory boards, dear shareholders, dear nomination committees of the Netherlands:
The next time you fill a vacancy on your Supervisory Board, ask yourself one question:
Does this candidate have the knowledge to provide oversight of an organisation whose future depends on AI, data and digital security?
If the answer is no, you are filling a seat. Not appointing a supervisor.
The time of upskilling as an alibi is over. AI and cybersecurity are no longer peripheral subjects. They are the core of the business model. Oversight must grow alongside them — or the costs will be paid by the customers, employees and shareholders who deserved better.
NN has the ambition to lead in digital transformation. That deserves respect.
