…When They Should Be Asking ‘What Are We Allowed to Do?’…
There is a particular kind of energy that enters a boardroom when someone opens a presentation on AI capability.
The demos are impressive, the use cases are vivid, and the projected efficiencies are large enough to justify the budget line.
Questions get asked about timelines, about vendors, about which department goes first. By the end of the session, a programme is taking shape and a mandate is forming.
What almost never gets asked in that room is the other question. Not what AI can do, but what the organisation is actually allowed to do with it.
That question lives down the corridor in the legal department, or in a risk committee that meets quarterly, or in a compliance review that gets scheduled after the strategy is already set.
By the time it arrives, the architecture is built, the vendors are contracted, and the pilots have been run.
The boundary question shows up as a review, not a design input. And that sequencing mistake is costing large organisations far more than they realise.
This is not a story about AI being dangerous or regulation being obstructive. It is a story about leadership asking questions in the wrong order, and the specific, expensive consequences that follow when they do.
Why the Wrong Question Always Wins
The dominance of the capability question is structurally engineered by the way organisations reward their leaders.
Asking ‘what can AI do?’ generates visible momentum. It produces demos, pilot results, vendor presentations, and proof-of-concept reports that can be shown to boards and ministers as evidence of progress.
It makes the leadership team look decisive and forward-thinking. It attracts budget because it promises return.
Asking ‘what are we allowed to do?’ generates caution and slows timelines. It surfaces problems before solutions are ready. It brings lawyers into rooms where technologists want to be having a different conversation.
In the short term, it makes leadership look risk-averse rather than visionary. The incentive architecture inside most large enterprises and government bodies systematically punishes the second question and rewards the first.
Organisations sprint toward capability and tiptoe around boundaries, right up until the boundaries become unavoidable.
At that point, what looked like an AI programme failure is actually a sequencing failure, one that was baked into the process long before any technology was deployed.
What ‘Allowed’ Actually Means
Part of the reason boundary questions get deferred is that organisations tend to think of them narrowly, as a legal and compliance matter that the relevant teams will handle in due course.
But at the scale of large enterprise or government AI deployment, ‘what are we allowed to do?’ operates across at least four distinct dimensions, and most organisations are only seriously stress-testing one or two of them.
Legal and Regulatory Exposure
This is the dimension that gets the most attention, and it is still under-examined.
Data protection law, sector-specific regulation, liability frameworks for automated decisions, and the shifting terrain of AI-specific legislation across jurisdictions all create boundaries that are not static. What is permissible today may not be permissible in 18 months.
A programme that does not build regulatory monitoring into its ongoing operations is not managing this dimension; it is ignoring it and hoping.
Data Provenance and Sovereignty
Many organisations discover late that the data underpinning their AI models was not as clean, consented, or domestically held as they assumed.
For government bodies in particular, data sovereignty is not a technical preference. It is a legal obligation. Building AI capability on data that cannot withstand scrutiny of its origins is building on sand, and the tide always comes in.
Workforce and Union Agreements
Large enterprises in established industries often have collective bargaining agreements, employment contracts, and consultation obligations that govern how technology can be introduced to change working conditions.
These do not block AI adoption, but they impose a process that cannot be skipped without legal consequence.
Organisations that treat workforce consultation as something to manage after the technology is selected, rather than something to design around from the beginning, tend to find that process becoming adversarial and expensive.
Public Accountability
A government department that deploys AI to assist in decisions about benefits, licensing, immigration, or public safety is not just taking a technology risk; it is taking a constitutional one.
The question of whether the decision-making process remains legally auditable, explainable to affected citizens, and defensible under judicial review has to be answered before deployment, not after a legal challenge forces the question.
When the Costs Become Real
A large financial services organisation spent 14 months building an AI-powered credit assessment system.
The capability work was sophisticated and the results in testing were strong. The system went into limited deployment and performed as modelled.
Six months later, a regulatory review found that the model’s training data contained historical lending patterns that embedded demographic bias, and that the organisation had not conducted the required algorithmic impact assessment before deployment.
The system was suspended and the regulatory relationship was damaged in ways that affected unrelated business activities for the next two years.
The capability question had been answered exceptionally well. The boundary question had not been asked at the right stage.
The legal team reviewed the system after it was built. What was needed was a team designing boundary requirements before the architecture was set.
Those are not the same function, and one of them does not exist in most organisations’ AI programme structures.
The cost of that sequencing failure was not just financial. It was the chilling effect on the next AI initiative.
The board that had been enthusiastic 14 months earlier was now cautious. The appetite for bold AI investment narrowed.
The organisation moved slower for two years after a failure that had nothing to do with AI capability and everything to do with the order in which questions were asked.
This Is a Leadership Problem, Not a Legal One
The most important reframe in this conversation is that boundary questions are not legal department problems that leadership hands off.
They are leadership problems that legal departments can help answer, and where that ownership sits determines whether the boundary question arrives early enough to shape the programme or so late that it can only review what has already been built.
When a CIO delegates the ‘what are we allowed to do?’ question to a compliance team and asks them to report back before launch, the compliance team is reviewing a finished product.
They can flag problems, but they cannot fix them without unravelling months of technical and commercial work.
Contracts have to be renegotiated, architectures rebuilt, and data pipelines re-sourced.
The cost of fixing a boundary problem at deployment is an order of magnitude higher than the cost of designing around it at inception.
The organisations that get this right are the ones where leadership asks the boundary question in the same meeting where the capability exploration is approved.
The capability team and the boundary team start work on the same day. They meet regularly. The boundary findings shape the capability design in real time, before anything is built that will later need to be dismantled.
That is not a slower process. It removes the rebuild cycle that currently consumes enormous resources in organisations that sequence these questions the conventional way.
What Asking the Right Question First Actually Looks Like
Resequencing these questions does not mean building a compliance process and calling it an AI strategy.
It means integrating boundary mapping into the earliest stage of programme design, at the same level of seriousness as capability discovery.
This means the legal, data governance, and workforce relations functions are in the room during initial scoping, not invited in later to review.
It means the programme design document includes a boundary map alongside a capability map.
It means the board is presented with both questions at the same time, with the same rigour applied to each. And it means success metrics include boundary compliance as a first-order measure, not an afterthought.
It also means that when boundary constraints narrow the scope of what is permissible, that narrowing is treated as useful design information rather than obstruction.
An organisation that discovers in month two that a particular data source cannot be used has learned something valuable at low cost. An organisation that discovers the same thing in month fourteen has a much larger problem.
The Strategic Inversion
The organisations moving most confidently on AI right now are not the ones asking the capability question most aggressively.
They are the ones that have answered the boundary question clearly enough to move without stopping.
Knowing what you are allowed to do is not a constraint on AI strategy.
It is the foundation that makes AI strategy structurally sound. Organisations that treat the boundary question as a strategic asset, something that informs and strengthens their AI programme rather than limiting it, are the ones that do not build and dismantle.
They do not lose board confidence after a high-profile stumble. They do not spend 14 months on a programme that a two-week boundary review would have redesigned into something defensible.
The capability question will always be the exciting one: it attracts attention, generates momentum, and makes for compelling board presentations.
But a programme built on capability alone, without a clear and early answer to what the organisation is permitted to do, is a programme that is running toward a wall it has not yet seen.
Ask the boundary question in the same room, on the same day, with the same seriousness as the capability question. That single change in sequencing is worth more than any governance framework written after the fact.