
The Last Mile Problem in Enterprise AI

Why Brilliant Pilots Never Reach the People Who Need Them

The AI pilot launches to genuine excitement and delivers strong results. The business case holds up, and leadership is pleased with the progress.

Then the pilot reaches the people it was built for: the frontline workers, the departmental teams, the operational staff. And it quietly stalls.

Adoption of the new tool is thin, manual workarounds persist, and the technology sits underused while the organisation spends its time debating the failures of the rollout.

The organisation concludes that the product was right but the delivery was imperfect, and it invests in persuading people to use something they have so far declined to embrace.

Sometimes this works, partially and temporarily. More often it produces thin, compliance-driven adoption: people use the tool when observed and revert to their old methods when not.

What almost never gets diagnosed is the possibility that the pilot was not, in fact, brilliant, at least not to the people it was built for.

That the problem it solved was the organisation’s approximation of the frontline reality, assembled at a distance, and that the frontline workers who declined to adopt it were not being resistant or technophobic.

They were responding rationally to a tool that did not quite fit the problem they actually live with every day.

The last mile is not where adoption dies. It is where the consequences of decisions made much earlier become impossible to ignore.

The Direction Everything Moves In

Enterprise AI has a default direction, and it is downward. Problems are identified by leadership, or by central strategy teams, or by consultants engaged to find high-value use cases.

Solutions are designed by technical teams, often with vendor involvement. Pilots are run in controlled conditions against metrics chosen by the programme team.

And then the result is pushed toward the people at the operational edge, who were, if they were lucky, consulted somewhere in the middle of that process.

Genuine adoption moves in exactly the opposite direction. It starts with the specific friction a frontline person experiences on a Tuesday afternoon, something concrete and recurring that makes their work harder than it needs to be.

It builds upward from operational reality toward a solution, rather than downward from strategic intent toward a deployment target.

These two directions produce different things. One produces solutions that make sense at altitude, while the other produces solutions that make sense at ground level.

A large organisation can run the most sophisticated AI programme in its sector and spend every penny of it solving problems that look important from the executive floor and feel irrelevant on the operational floor.

The directionality is not a minor process preference. It is the design decision that determines whether the last mile problem exists at all.

Who Gets to Decide What Is Brilliant

When an enterprise AI pilot is declared successful, it is worth asking precisely who made that declaration and what they measured it against.

In most large organisations, the answer is the commissioning team, the people who defined the use case, selected the vendor, set the success metrics, and had the most to gain from a positive result.

These are not neutral evaluators: they have reputational and political stakes in the pilot performing well, and they chose the conditions under which performance would be judged.

The frontline people who will eventually use the tool had no meaningful role in any of those decisions.

They did not define what problem the pilot would address, nor did they choose the metrics. They were not present when success was declared.

And so when the tool reaches them and they find it does not quite fit the way the work actually runs, the organisation is genuinely confused.

A tool that performs well against metrics chosen by people who do not do the work is not a brilliant pilot with a distribution challenge.

It is an untested hypothesis about what the work requires, validated in conditions that did not include the people closest to that work. The last mile is simply the moment the hypothesis meets reality.

The Consultation Illusion

Most organisations that have run AI pilots believe they involved frontline people in the design process.

They ran workshops, gathered requirements, and sent surveys. And from the programme governance perspective, the box for stakeholder engagement was ticked in good faith.

What those activities almost never constituted was genuine co-design. Being asked what you need is not the same as having the authority to define what gets built.

When frontline people are consulted after the strategic direction is already set, after the vendor relationship is in place, after the use case has been locked into the programme plan, their input can adjust details but cannot reframe the problem.

Real co-design means frontline people are present at the moment the problem is being framed, before any solution direction exists, when their understanding of the work can actually shape what gets built rather than refine something already decided.

That is not what happens in most enterprise AI programmes. The gap between being consulted and having authorship over problem definition is precisely where the last mile failure is manufactured, quietly and with good intentions, months before anyone notices.

What Leadership Cannot See From Where It Stands

There is a category of operational knowledge that exists at the frontline of every large organisation and is almost entirely absent from the rooms where AI programmes are designed.

It is not exotic knowledge and does not require specialist expertise to hold. It is the accumulated intelligence of people doing the actual work every day, and it is invisible at altitude.

Frontline workers know which official processes are followed and which are quietly circumvented because they do not fit reality.

They know which parts of the job consume disproportionate energy relative to their value. They know the workarounds that have become so embedded they are no longer recognised as workarounds.

They know what they would change first if anyone asked them seriously and then actually listened.

An AI solution designed without access to this knowledge is solving an abstraction. It is addressing the organisation’s model of the work rather than the work itself.

Those two things are often close enough to look identical from a distance and different enough to produce non-adoption when they meet.

The last mile, once again, is simply the moment the hypothesis meets reality. And reality, as it turns out, was not consulted.

The Cost That Keeps Compounding

When a pilot stalls at the frontline, the organisation’s standard response is to invest in the delivery layer.

Change management programmes. Training interventions. Internal communications campaigns. Adoption incentives.

These responses share a common assumption: that the product is right and the problem is persuasion.

They treat non-adoption as a human failure rather than a design signal.

Resources are spent trying to close a gap that cannot be closed by communication alone, because the gap is not between the tool and people’s willingness to use it. It is between the problem the tool was built to solve and the problem people actually have.

The deeper cost is harder to quantify and more damaging over time. Every AI initiative that reaches frontline people and fails to genuinely improve their work withdraws from a trust account the organisation did not know it was holding.

The next pilot arrives into an environment where people have learned, from direct experience, that the organisation’s AI programmes are built for someone else’s understanding of their job.

They comply when required. They do not adopt. And the organisation, frustrated by persistent adoption challenges across multiple programmes, doubles down on change management rather than examining the design assumptions that produced the pattern.

The trust deficit that accumulates through repeated top-down design failures is one of the most underestimated costs in enterprise AI.

It does not appear on a programme budget. It shows up as a cultural resistance to AI that leadership then treats as another problem to be solved with communication.

Building From the Last Mile Inward

Moving away from a top-down approach does not mean giving up on central planning or leadership. The real question is not whether leaders should help design AI programmes, but when their expertise should take the lead.

Strategic knowledge, the kind leadership holds, is most valuable when it sets the organisational context, the resources, the governance boundaries, and the broad areas where AI investment makes sense.

Operational knowledge, the kind frontline workers hold, is most valuable when it defines the specific problem within those broad areas that AI should actually address.

The order in which these two kinds of knowledge are applied matters: leadership sets the frame first, and the people closest to the work define the problem within it. Reverse that order and you get exactly the failure described in this article.

The first meaningful investment in any AI initiative is time spent inside the operational reality it is supposed to improve.

Not to gather requirements in a workshop, but to understand the work well enough to know what would actually make a difference. It means the specific problem definition is owned by the people closest to the work, within a strategic frame set by leadership.

It means pilots are evaluated not on technical performance metrics chosen by the programme team, but on whether the people who use the tool find it genuinely indispensable within thirty days of real use.

If they do not, that is not a change management problem. It is a design signal that the problem definition needs to be revisited.

The Governance Change This Actually Requires

None of what has been described here is achievable through a mindset shift alone. Organisations cannot think their way out of a structural problem.

As long as formal authority for problem definition sits with the budget holder, the directionality of AI programme design will reproduce itself regardless of how much leaders talk about the importance of frontline input.

The change that makes the rest possible is governance-level. It means building frontline problem definition authority into the programme structure from the beginning, with real decision-making power rather than advisory status.

It means creating accountability mechanisms that run outward toward the people the programme serves, not only upward toward the leadership that commissioned it.

It means measuring programme success in ways that frontline workers would recognise as meaningful.

To make this work, leaders must change how programmes are run so that the people who actually do the work share in decision-making, rather than being told what to do after the fact.
