How boards are losing control of AI, even if they think they are not
Mar 22, 2026
There is a quiet but profound shift taking place inside organisations. Boards believe they are in control of artificial intelligence. In most cases, they are not.
AI is already embedded across businesses, often invisibly. It is being used in marketing, HR, finance, customer operations and product development, frequently without any coherent governance model. What appears to be innovation at the edge is, in reality, a gradual loss of control at the centre. By the time most Boards recognise what has happened, the organisation has already moved beyond their line of sight.
The problem begins with a false assumption. Many Boards still treat AI as they once treated IT: as something that can be procured, implemented and controlled centrally. That model no longer applies. AI is not a system; it is a capability. It seeps into decisions, workflows and behaviours across the organisation, often without formal approval and almost always without full visibility.
Inside a typical organisation today, marketing teams are feeding customer data into generative AI tools, often without clarity on how that data is stored or reused. HR teams are deploying AI screening tools that may introduce bias into hiring decisions without any Board-level scrutiny. Finance teams are experimenting with AI-driven forecasting models that influence investment decisions before they are formally validated. Customer service teams are increasingly relying on AI interfaces that are effectively making decisions on behalf of the brand. None of these developments require Board approval, and most do not even reach the executive committee.
This is not theoretical. A recent IBM study found that more than 40 percent of employees are already using AI tools at work, with a significant proportion doing so without formal approval. Samsung experienced this first-hand when engineers uploaded sensitive source code into ChatGPT, forcing the company to restrict usage. Amazon famously abandoned an AI recruitment tool after discovering it was systematically biased against women. In financial services, regulators have already raised concerns about the speed at which AI-driven models are being deployed relative to the rigour of their validation.
At the same time, Boards often assume regulation will act as a safety net. That assumption is increasingly unsafe. In the United States, the direction of travel has been towards reducing regulatory friction in order to accelerate innovation. Federal policy has moved towards a lighter-touch framework, while also attempting to override stricter state-level initiatives. The result is fragmentation, inconsistency and, in some areas, less oversight rather than more. There is no stable external framework that Boards can rely on.
This creates a governance gap. Boards remain accountable to shareholders for risk, compliance and long-term value, but the mechanisms they rely on were not designed for a technology that is decentralised, fast-moving and often opaque. Traditional governance assumes that systems are known, risks are bounded and controls can be applied centrally. AI challenges all of those assumptions.
The consequences are already visible. Organisations are discovering multiple, uncoordinated uses of AI across departments, often involving overlapping data sets and inconsistent outputs. In some cases, AI models have influenced material decisions for months before being formally reviewed. In others, confidential information has been shared inadvertently through public AI tools by employees trying to work more efficiently. These are not failures of intent; they are predictable outcomes of powerful tools meeting human behaviour.
So what should Boards do? The first step is to accept reality: AI is already embedded across the organisation, and the question is not whether it is being used, but where and how. Boards need to move beyond high-level strategy discussions and develop a clear view of where AI is influencing decisions, processes and data flows. They need to redefine accountability so that AI is owned at an executive level rather than sitting ambiguously between IT and the business. They must build enough internal capability to ask informed questions, not to become technologists, but to understand how these systems behave and where they can fail. Most importantly, they need to act ahead of regulation rather than waiting for it.
I have seen this pattern before. In the early days of the internet, many organisations believed they could control its impact. They treated it as a channel rather than a transformation. They were wrong then, and there is a real risk of being wrong again.
The difference this time is that AI is not just changing how organisations operate. It is changing how they think, decide and act. Boards that recognise this early will adapt and retain control. Those that do not will continue to believe they are in charge, right up to the point where they are not.
Pat Chapman-Pincher