Somewhere in the last two years, your organisation made a decision that felt like the right move.
Leadership looked at the AI conversation happening across the industry, looked at the pressure coming from the board, looked at the budget cycle approaching, and decided to take the question seriously.
The answer was a Centre of Excellence (CoE): a named unit with hired or seconded staff, a formal mandate, and a dedicated budget. At last, the organisation had a definitive answer to the question of its AI strategy.
That answer is now one of the clearest signals that AI adoption inside the organisation is going to be slower than anyone planned.
This is not a criticism of the people inside the CoE. Most of them are capable, committed, and working hard on genuinely difficult problems.
It is a structural argument, and structural arguments are more uncomfortable than personal ones because they implicate the decision-makers who designed the structure rather than the people trying to operate within it.
The Centre of Excellence is not failing because it is poorly run. It is failing because it was designed to solve the wrong problem.
What the CoE Was Actually Built to Do
To understand the problem, you have to go back to the moment the CoE was created and ask honestly what need it was meeting.
When a board or an executive committee asks what the organisation is doing about AI, that is not primarily a capability question but an anxiety question.
Leadership is looking for evidence that the organisation is not falling behind, that the AI conversation is being taken seriously, that there is institutional ownership of the issue.
A Centre of Excellence answers that question perfectly. It is a named unit with a budget line, a leadership structure, and people whose job titles include the word AI.
It is an institutional answer to an institutional question. It can be pointed to in board reports and featured in annual reviews. It makes the anxiety manageable by making it visible that something is being done.
The problem is that managing leadership anxiety and driving organisational adoption of AI are two different goals.
The structure built to do the first is almost precisely wrong for doing the second. A structure designed for visibility optimises inward, toward the CoE’s own activities, its pilots, its reports, its capability demonstrations.
A structure designed for adoption optimises outward, toward the departments, workflows, and people where AI actually needs to take root. These are not the same direction, and a single team cannot face both ways at once.
The Bottleneck Nobody Named
Here is what actually happens after a CoE is established. Departments with AI ideas or needs are directed toward the central team.
The CoE evaluates requests, manages priorities, allocates technical resources, and runs pilots.
For the first six months, cases are assessed, pilots are launched, and reports are drafted, and the resulting activity looks like genuine momentum.
Underneath it sits a simple arithmetic problem. The CoE is a finite team managing the AI ambitions of an entire organisation, and every department that wants to explore AI now has a single point of access with limited bandwidth.
The more seriously the organisation takes AI, the more requests flow toward that single point. The more requests flow in, the longer the queue becomes.
The longer the queue, the more frustrated the departments waiting in it become, until one of two things happens.
Either departments stop asking, and AI adoption quietly dies at the department level because the process is too slow to be worth the effort. Or they stop asking through official channels and start experimenting on their own, with tools the CoE knows nothing about, governed by nobody, creating the exact fragmentation and risk the CoE was supposed to prevent.
The painful irony is that a successful CoE accelerates this problem. Its success attracts more demand, which in turn deepens the bottleneck. The better it performs, the more thoroughly it blocks the adoption it was built to enable.
The Distance That Cannot Be Closed From the Centre
There is a second structural problem alongside the bottleneck, and it is harder to fix because it is epistemic rather than operational.
The CoE sits at the centre of the organisation; the work happens at the edge: the procurement team processing contracts, the case workers managing complex caseloads, the compliance officers reviewing documentation, the field teams generating operational data. The CoE does not inhabit their world; it only observes it from a distance.
That distance matters when it comes to designing AI solutions that actually get used. Adoption does not happen because a tool is technically impressive.
It happens because a tool fits precisely into an existing workflow in a way that makes the person using it feel that their work is easier, not more complicated.
That kind of fit requires intimate knowledge of the workflow, the language people use inside it, the friction points that drive them to distraction, the workarounds they have built over years.
A central team cannot hold that knowledge for every department simultaneously. It can learn enough to design a pilot. It rarely learns enough to design something that sticks.
This is why CoE-built pilots so often look promising in controlled conditions and then stall when they reach the people they were built for.
The gap between what the CoE understood about the workflow and what the workflow actually is only becomes visible at the moment of deployment. By then, the CoE has moved on to the next pilot, and nobody is left to close the gap.
The Metrics That Hide the Problem
Organisations typically discover their CoE is underperforming only when someone senior asks an awkward question.
The reason the problem stays hidden is that CoEs are almost always measured on the wrong things.
The standard metrics remain internally focused, tracking the number of pilots completed, use cases identified, tools evaluated, and workshops delivered.
These are all measures of the CoE’s own activity, and they can all trend upward impressively while AI adoption across the organisation remains shallow and fragile.
A CoE can complete twenty pilots in a year and have meaningful AI embedded in none of the departments those pilots were built for.
By its own metrics, it has been highly productive. By the metric that actually matters, it has delivered very little.
The CoE is accountable upward, to the leadership that created it and funds it. Leadership wants to see activity.
Activity is what the CoE reports. The departments the CoE is supposed to serve rarely have a formal mechanism to report that the CoE is not working for them, because the relationship was never designed to be accountable in that direction.
So the gap between CoE activity and organisational adoption can persist for years before it surfaces as a named problem rather than a vague sense that AI is not moving fast enough.
What Genuine Adoption Actually Requires
AI adoption in large organisations does not spread from the centre outward. It takes root at the department level, driven by people who understand a specific operational problem well enough to see how AI changes it, and who have enough AI literacy to act on that understanding themselves or with minimal external support.
What that requires is not a central team that everyone reports to. It is embedded capability, people who sit inside departments and hold operational knowledge and AI literacy simultaneously.
It is permission structures that allow departments to experiment without routing every decision through a central approval process.
It is a relationship between the central function and the departments that resembles infrastructure provision more than project management.
The organisations where AI is genuinely spreading through operations are not the ones with the most impressive CoEs.
They are the ones where AI literacy has been pushed outward into the workforce, where departments have enough capability to identify and develop their own use cases, and where the central function acts as an enabler of that activity rather than its gatekeeper.
From Centre of Excellence to Foundation of Enablement
The structural reframe being described here is not about dismantling the central AI function. Large enterprises and government bodies need central coordination for governance, data standards, procurement frameworks, shared tooling, and risk management. The question is what that central function is optimising for.
A Centre of Excellence concentrates expertise in one place and makes everyone else come to it.
The alternative concentrates infrastructure centrally and pushes capability outward. The central team stops being the place where AI happens and becomes the place that makes it easier for AI to happen everywhere else.
It builds the shared foundations that departments can build on without starting from scratch. It maintains the governance frameworks that give departments safe space to experiment. It trains and embeds AI capability into the workforce rather than retaining that capability at the centre.
And it measures success differently. Not how many pilots the team completed, but how many departments are running their own. Not how many tools the team evaluated, but how quickly departments can now evaluate tools themselves.
Not how many workshops the team delivered, but whether AI literacy across the organisation is growing independently of the central team’s direct involvement. These are metrics that point outward toward adoption, rather than inward toward activity.
The Question Leadership Needs to Ask
The practical implication for anyone who has a CoE, is building one, or is being asked to approve one, is a single audit question: what is the CoE actually optimising for?
If the honest answer is that it is optimising for visibility, for the ability to report upward on AI activity, for the management of leadership anxiety about whether the organisation is taking AI seriously, then it is a comfort structure.
It may be a well-run, highly capable, genuinely impressive comfort structure. But it is not primarily an adoption engine, and expecting it to be one will produce disappointment on a predictable timeline.
If the CoE’s success is measured by adoption rates in the departments it serves, by the growth of AI literacy across the workforce, by the reduction of its own involvement as departments become self-sufficient, by the spread of AI into daily operations rather than its concentration in a central team, then it has the structural DNA to become something more useful.
Most CoEs were not designed with those metrics in mind. Redesigning around them requires a different conversation with the board than the one that created the CoE in the first place.
It requires explaining that the sign of success is not an impressive central team but a less necessary one.
That is a harder sell. But it is the only version of this story that ends with AI genuinely embedded in how the organisation operates, rather than beautifully managed in a dedicated unit that everyone respects and almost nobody actually uses.

