
AI Literacy Is Not a Training Course


It Is an Organisational Capability That Takes Years to Build

 

Your organisation just completed enterprise-wide AI literacy training. With over 20,000 employees now certified and the board presentation showing an impressive 98% completion rate, leadership has ample reason to celebrate.

A couple of months later, an AI-recommended procurement decision costs $20,000 because nobody questioned an output that domain expertise would have flagged as wrong in thirty seconds.

The problem is not that employees did not complete the training. The problem is that the training produced compliance, not capability.

It produced comfortable tool users who trust outputs and follow protocols, not people who can interrogate outputs, recognise failure modes, and exercise genuine judgment about when AI should not be trusted.

Those are different things. The organisation built one and measured the other, and that $20,000 loss is the cost of the confusion.

What Compliance Looks Like and Why It Dominates

A compliance-oriented AI literacy programme has recognisable features, and if you have run one recently, most of them will be familiar.

It is designed around completion rather than capability. It is measured by participation rates, modules finished, and certificates issued.

Its content covers what AI is, how to use the approved tools, and what the relevant policies are. It is delivered at scale, within a defined timeframe, and produces a defensible record that training was provided.

Most enterprise AI literacy programmes running right now fit this description precisely, and the reason is not negligence.

Completion is measurable, scalable, and reportable to leadership in a format that board presentations can accommodate. Capability is none of those things.

You cannot put genuine AI judgment on a slide with a percentage sign next to it. So organisations build what they can measure and report what they built, and the gap between that and actual capability remains invisible until something expensive makes it visible.

The Liability Management Function Nobody Names

There is an unstated primary function running underneath most enterprise AI literacy programmes, and it defines everything about how they are designed without ever appearing in their design documents.

When an AI-assisted decision goes wrong and the consequences become public or legal, the organisation needs to demonstrate that its people were informed. The training programme exists, in significant part, to provide that demonstration.

Designing a programme around that goal produces a programme optimised for documentation rather than understanding.

It sets completion as the meaningful outcome because completion is what the documentation records.

The organisation is not being dishonest about its intentions. It is responding rationally to an incentive that sits above the stated goal of capability development.

The result is training that checks a legal box and protects the organisation's reputation but fails to build the real-world judgment it promised. That is not because the training was executed poorly; it is because the goal itself was wrong from the start.

Why Genuine Literacy Is Institutionally Inconvenient

Here is the contradiction that most large organisations have not examined seriously. A workforce with genuine AI literacy, deep enough to constitute real judgment rather than comfortable familiarity, will behave differently from a workforce that has been trained to be comfortable with AI tools.

Such a workforce will sometimes be slower and more inquisitive, and it will sometimes push back on automation decisions that leadership intends to fast-track.

Employees who genuinely understand how AI models produce outputs, and where those outputs become unreliable, will flag use cases where the model is being asked to do something it cannot do well.

They will catch outputs that look plausible but are wrong in ways that only domain expertise can identify. They will create friction in workflows that productivity metrics expect to run smoothly.

This is exactly the behaviour that genuine AI literacy produces, and it is exactly the behaviour that most AI deployment models, with their change management programmes, their adoption targets, and their efficiency metrics, are designed to minimise.

Organisations say they want AI-literate employees, but their incentive structures want AI-accepting employees. The gap between those two things is where the compliance programme lives, and why it persists regardless of how many times leadership affirms its commitment to genuine capability development.


What Genuine AI Literacy Actually Requires

The investment model for genuine AI literacy is not a better-designed training course. It is a different category of investment, measured in months and years rather than completion hours, and built into the conditions of daily work rather than delivered as a separate learning event.

Genuine AI literacy develops through sustained exposure to AI outputs in real work situations, combined with enough understanding of how models produce those outputs to develop calibrated scepticism.

Not general acceptance of AI recommendations and not general rejection of them, but the judgment to know when a specific output in a specific context should be trusted and when it should be questioned.

That kind of judgment does not come from a module. It comes from repeatedly exercising it, in situations where the stakes are real, in an environment where questioning an AI output is treated as a contribution rather than a delay.

That last condition is the one most organisations are not creating. Building genuine AI literacy requires the organisation to actively protect the space where employees can exercise and develop judgment, including the judgment to push back, without that pushback being managed as adoption resistance.

Most organisations have not built that space because building it conflicts with the deployment timelines and efficiency targets that AI programmes are being measured against.

The Measurement Problem That Keeps the Wrong Thing in Place

Organisations measure what they can count, and the metrics available for AI literacy programmes strongly favour compliance-oriented design.

Training completion rates, certificate issuance, survey satisfaction scores: every one of these data points is easy to count.

The depth of an employee's AI judgment, their ability to recognise when a model is operating outside its reliable range, their capacity to catch a plausible-but-wrong output before it shapes a consequential decision: none of that is countable in a form that fits a quarterly report.

As long as AI literacy programmes are evaluated on metrics that completion-oriented training satisfies easily, the incentive to build something more demanding and less measurable stays weak.

The measurement framework is not a side issue in this story; it is the mechanism by which the compliance programme reproduces itself, cycle after cycle, despite the accumulating evidence that it is not producing what genuine AI capability requires.

Changing the outcome requires changing what gets measured, which requires leadership to accept that the new metrics will be harder to report and slower to move. That is a harder conversation than commissioning another training programme.

What the Gap Costs When It Meets Reality

Automation bias deepens across the organisation because employees have been trained to use AI tools confidently rather than to question them critically.

Errors that domain expertise would have caught pass through the human review layer because the humans in that layer have been optimised for acceptance rather than scrutiny.

Over time, the organisation deploys AI across more of its operations while simultaneously atrophying the human judgment capacity that would be needed to catch the AI’s mistakes.

This is not simply a literacy gap but a resilience gap. An organisation that has traded genuine AI judgment for AI compliance has made itself dependent on AI systems performing correctly in situations where it has reduced its own capacity to recognise when they are not.

The $20k decision from the opening is not an outlier. It is a preview of what happens at scale when the gap between compliance and capability is wide enough and the decisions are consequential enough.

The Choice Being Made

The leadership question this puts on the table is not whether to invest more in AI literacy programmes.

It is whether the organisation is willing to build something that will sometimes slow down AI adoption, create friction in deployment, and produce employees who push back on AI recommendations rather than accept them.

That is what genuine capability development produces. If the answer is yes, the investment model and the measurement framework both need to change fundamentally, and leadership needs to be willing to defend a lower completion rate and a longer development timeline to a board that has been trained to see the 98% number as success.

If the answer is no, the organisation should at minimum be honest with itself about what it is building and why.

Compliance programmes serve real organisational needs: they reduce legal exposure and create a documented record of workforce engagement with AI.

They produce the numbers that board presentations require. These are legitimate organisational functions. They are not capability development, and treating them as capability development is the confusion that makes the gap invisible until it becomes expensive.

Your AI literacy programme is not an insufficient version of the right thing. It is a well-executed version of the wrong thing, designed to solve the problem the organisation was willing to solve rather than the problem it actually has.

The metrics you are tracking reveal which one you chose: 98% training completion and employees who trust AI outputs uncritically, or 40% deep capability development and employees who catch the mistakes that cost $20k. Both are achievable; only one of them requires the organisation to change something it finds convenient to keep.
