I want to speak to the manager of your bot

The proliferating field of AI is full of mythology, or if you prefer, malarkey.

That’s not to say there isn’t substance and genuine innovation in AI; there certainly is. However, much like crypto, that substance often lies beneath a heavy layer of hype and hoopla that can be difficult to wade through and decipher.

To make matters worse, AI is now making major strides towards automatically generating content, whether words or images, and it’s not entirely clear how this content will be used.

Perhaps an easy assumption is that automatically generated content will be used to promote automation, primarily for an audience of automated consumers!?

Yet let’s not discount the value of bullshit. Sometimes passable text is good enough, especially if it will be skimmed rather than read.

Similarly, we’ve long recognized that politics and entertainment are largely based on bullshit and do not require a whole lot of substance to be successful.

Here’s what I got ChatGPT to write regarding the role of mythology in contemporary politics:

Through the use of myth, politicians can craft narratives that help people make sense of complex issues and events. By drawing on the power of story and symbol, politicians can create compelling visions of the future and inspire people to take action.
In addition, myth can be used to build a sense of community and shared identity among supporters. By creating a common set of beliefs and values, politicians can rally people around a shared cause and create a sense of solidarity and unity.
Furthermore, myth can be used to delegitimize and demonize political opponents. By portraying them as enemies of the people or threats to society, politicians can undermine their credibility and make it easier to justify their own actions.

Not bad given the minimal effort. What if, instead of a world led by poets and philosophers, we’re heading towards one built on superficial appearance and bullshitting the bullshitters?

Perhaps ChatGPT and the technologies that underlie it are less about persuasive writing and more about superb bullshitting. A bullshitter plays with the truth for bad reasons — to get away with something. Initial response to ChatGPT assumes as much: that it is a tool to help people contrive student essays, or news writing, or whatever else.

I find this explanation quite plausible, as futurism is a field dominated by speculative bullshit.

This is largely why I resisted calling myself a futurist for so long: I didn’t want to be associated with the quackery and snake-oil salesmen that dominate the field.

Yet if we treat predictions and visions of the future as entertainment, if not fluff, it becomes easier to regard these efforts as playful thought provocations rather than prophecies.

For example, take these predictions around education:

These are not novel predictions, nor are they entirely based on AI or ChatGPT. Rather, they reflect a return to a humanist conception of education that seeks to foster critical thinking and agency rather than regurgitation and obedience.

Which raises the question: will these automated tools help us return to a more human-centric style of communicating, or take us further away?

The cliché that AI will take our jobs is both lazy and unnecessary. Unless, of course, we imagine a world where machines communicate directly with machines, removing humans from the loop entirely?

This may be the ultimate danger of automating mythology: by allowing machines to bullshit on our behalf, we not only normalize myths over accuracy but also abdicate our ability to distinguish between what is real and what is an illusion.

While ChatGPT will clearly have a profound impact on our use (and misuse) of language, I suspect that impact will not be what most currently predict. Rather, like any Pandora’s box, its effects will both eclipse and defy our expectations, in good ways and bad.