The emperor loves the sound of his own voice. Yet he cannot see that this voice is much like his new clothes: an illusion that exposes his vanity and his vulnerability.

He is speaking confidently about artificial intelligence.
He is announcing procurement deals.
He is demanding safeguards be lifted.
He is expressing outrage that a system did not foresee tragedy.
He is celebrating “personalized learning” powered by models he cannot explain.

The machine he uses is fluent. We used to call such a machine a teleprompter, but this one claims it can do far more. It answers smoothly. It completes sentences. It produces policy memos, battlefield summaries, and lesson plans with equal composure.

The crowd cannot tell whether the emperor or the machine actually understands the words being spoken.

The answer is neither.

Fluency Is Not Intelligence

Large language models do not think. They do not know. They do not predict in the human sense.

They generate statistically plausible text based on patterns learned from massive datasets. They estimate the next likely word. They shape output through alignment processes designed to reduce harm. They can appear coherent, reflective, even wise.
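
To make that mechanism concrete, here is a minimal sketch of next-word selection. The vocabulary, the probabilities, and the next_token function are invented for illustration; no real model works from a four-word table, but every real model works from the same kind of arithmetic.

```python
import random

def next_token(context: str) -> str:
    """Return one plausible next word, sampled from a toy distribution."""
    # A real model conditions on `context` and derives probabilities from
    # billions of learned parameters; this toy ignores the context and
    # hard-codes four made-up numbers purely for illustration.
    candidates = {"throne": 0.41, "crowd": 0.27, "machine": 0.19, "tailor": 0.13}
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

print("The emperor addressed the", next_token("The emperor addressed the"))
```

Nothing in that loop knows what an emperor is. It only tracks which words tend to follow which.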

But they do not possess intent. They do not possess foresight. They do not possess comprehension.

Their safety features, what their proponents call “guardrails”, are not switches. They are probabilistic pressures inside a vast statistical landscape. Adjusting them alters behavior in ways that are rarely clean or contained.

We’re dealing with complex systems that manipulate language in ways we’re only starting to understand. Like children who love their stuffed animals, we project meaning and intelligence onto systems that do not operate the way we think they do.

And yet, across military command, public safety policy, and educational leadership, decisions are being made as though fluency equals intelligence, and as though intelligence equals control.

Consider three recent flashpoints.

1. Military Authority and the Illusion of Control

Anthropic, the company behind the Claude models, has resisted pressure from the United States Department of Defense to remove certain usage constraints on its systems. The Pentagon’s position is straightforward: if a model is legally acquired, it should be available for any lawful military purpose. Corporate restrictions should not limit sovereign power.

At first glance, this sounds like a policy dispute over ethics.

It is not. It is a dispute about what AI systems are.

The implicit assumption behind the Pentagon’s demand is that these systems are controllable instruments — that their boundaries can be cleanly adjusted to fit operational needs. Remove a restriction here. Permit a category there. Maintain stability everywhere else.

But alignment in large language models is distributed and statistical. Relaxing constraints in one domain can reshape behavior in others. There is no simple dial labeled “lethal operations” that can be turned without side effects.
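
A toy calculation shows why. The behaviour labels and scores below are invented, and real alignment involves vastly more than four numbers, but the coupling is the same in kind: the model's possible behaviours share one normalized probability distribution, so lowering the weight on one raises the weight on all the others.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

behaviours = ["refuse", "comply", "deflect", "escalate"]
logits = [2.0, 1.0, 0.5, 0.1]   # invented scores, for illustration only

before = softmax(logits)
logits[0] -= 1.5                # "remove the restriction": weaken refusal
after = softmax(logits)

for name, b, a in zip(behaviours, before, after):
    print(f"{name:>9}: {b:.2f} -> {a:.2f}")
# Every other behaviour becomes more likely, not only the one the operator
# intended to enable. There is no isolated switch.
```

The real landscape has billions of dimensions rather than four, which makes the side effects harder to see, not easier to avoid.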

To treat these systems as obedient, modular tools reveals a shallow technical model of how they function.

Legal authority is being conflated with technical mastery.

Control is being assumed where understanding is incomplete.

The emperor speaks of command. The machine remains probabilistic.

2. Public Safety and the Myth of Foresight

Here in Canada, the Tumbler Ridge tragedy forced a different confrontation.

After a deadly school shooting, reporting revealed that OpenAI had previously flagged and banned, for violent misuse, a ChatGPT account linked to the perpetrator. The company did not notify law enforcement at the time, judging the content insufficient to meet its threshold for an imminent, credible threat.

In the aftermath, Canadian ministers expressed alarm and demanded stronger escalation mechanisms. The implication circulating in public discourse was clear: the system had signals. It could have known. It should have warned.

This narrative quietly assumes that AI systems possess meaningful predictive clarity about human intent.

They do not.

Content moderation systems operate on probabilistic risk signals. They detect patterns associated with harm. They cannot reliably distinguish fantasy from imminent violence, curiosity from capability, rhetoric from execution. False positives carry enormous civil liberties consequences. False negatives are inevitable in high-volume systems.
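
The arithmetic of base rates makes the point concrete. Every figure below is an assumption chosen for illustration, not a statistic from OpenAI or anyone else.

```python
# All of these figures are assumptions, chosen only to show the structure.
accounts_reviewed = 1_000_000
true_threat_rate = 1 / 100_000      # assume 1 in 100,000 accounts is a real threat
sensitivity = 0.95                  # assume 95% of real threats get flagged
false_positive_rate = 0.01         # assume 1% of harmless accounts get flagged

real_threats = accounts_reviewed * true_threat_rate
flagged_real = real_threats * sensitivity
flagged_harmless = (accounts_reviewed - real_threats) * false_positive_rate

total_flags = flagged_real + flagged_harmless
precision = flagged_real / total_flags

print(f"Accounts flagged:        {total_flags:,.0f}")
print(f"Real threats among them: {flagged_real:.0f} ({precision:.1%})")
# Roughly ten thousand flags for about ten genuine threats. Forwarding every
# flag to police is not foresight; it is mass false accusation.
```

Better detectors change the numbers, not the structure: when genuine threats are vanishingly rare, even a good filter's flags are overwhelmingly false.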

When political leaders imply that earlier reporting by the company might have prevented the tragedy, without acknowledging these limits, they reinforce a mythology of machine foresight.

The machine is being treated as a latent oracle.

Institutional outrage amplifies that belief.

And the public absorbs a distorted understanding of what AI can actually see.

3. Education and the Confusion of Fluency with Competence

Meanwhile, in an AI-powered private school, students are taught through systems that generate personalized explanations, assessments, and feedback. Administrators describe the environment as optimized, adaptive, efficient.

The model answers questions fluidly. It explains concepts in multiple ways. It appears attentive to individual learning paths.

But fluency is not pedagogy.

These systems can hallucinate. They can fabricate citations. They can embed subtle bias. They can reinforce shallow comprehension by rewarding pattern mimicry rather than critical reasoning.

To place children inside environments mediated primarily by probabilistic language engines requires deep literacy about those engines’ limits.

Instead, the narrative of personalization and innovation dominates.

Students become participants in a live experiment.

Authority presents the system as intelligent. Is obedience the desired outcome?

False Confidence as Governance Risk

Ignorance hesitates.

False confidence legislates.

Across these three arenas, a common distortion appears:

– The military overestimates controllability.
– Public officials overestimate predictive capacity.
– Educators overestimate pedagogical competence, rewarding compliance over critical thinking.

In each case, leaders speak with certainty about systems whose epistemic structure they do not fully grasp.

This is not a fringe problem, nor is it about conspiracy or hype cycles.

It is about competence.

Authority historically rests on the assumption that those in power understand the tools they wield. Industrial governance required industrial literacy. Nuclear deterrence required theoretical literacy. Financial oversight required mathematical literacy.

AI governance requires probabilistic literacy.

Not the ability to code.
Not the repetition of vendor talking points.
But fluency in core properties:

That outputs are statistical approximations.
That alignment is fragile and dynamic.
That adversarial manipulation is constant.
That hallucination is structural, not exceptional.
That capability claims often exceed empirical stability.

Without that literacy, power becomes theatrical.

Policy rests on mischaracterization.
Procurement rests on assumption.
Public discourse rests on exaggeration.

And each confident statement by a minister, a general, or a head of school launders misunderstanding through the prestige of office.

Institutional speech is now a vector of distortion.

Literacy as the Prerequisite of Power

This is the deeper vulnerability now emerging.

AI systems are inherently imperfect, probabilistic, and susceptible to manipulation.

Modern leadership structures reward decisiveness, narrative clarity, and projection of control.

When these two realities collide, misperception compounds.

The machine’s fluency conceals uncertainty.
Authority’s fluency conceals illiteracy.

For a time, both can coexist. The spectacle holds.

But as AI becomes embedded in warfare, crisis response, and childhood education, the cost of epistemic shallowness rises.

The emperor is not naked because the machine failed.

He is naked because the machine revealed the thinness of his understanding.

AI is not destabilizing society because it is too powerful.

It is destabilizing authority because it exposes how little power understands the systems it now depends on.

And in that exposure lies the real crisis:

Literacy is no longer assumed as a prerequisite for command.

Until it is, fluency will continue to masquerade as competence — and the machine will keep speaking more clearly than the throne.
