298: Understanding What's Happening
When History Happens So Fast You Get Whiplash

Judy emailed us in response to yesterday’s issue confessing that she didn’t understand the argument being presented. We appreciate this kind of direct feedback, so let’s take a second attempt at spelling out what the shit is going down right now. We’ll employ an LLM with rougher edges for this issue:
New AI models are about to be released by the big hyperscalers. AI models are the software that runs things like ChatGPT or Claude; hyperscaler is the name used to describe companies like OpenAI and Anthropic, who are doing anything and everything to scale up and grow their technology and revenue.
The model that Anthropic will be releasing is called Mythos. This model is a massive improvement over the last one, so much so that it represents a seismic shift in software development.
Anthropic is currently generating a lot of revenue, and their revenue growth is incredible. The reason this is happening is that Anthropic has invested in building AI that can write software. Not only have they succeeded, to the extent that software creation is now remarkably accessible, but the ability to manipulate and exploit software, i.e. to hack, is now also remarkably easy.
This new model, Mythos, has an expanded ability to assess a large code base and calculate what it does, and most importantly, what it could do if given malicious commands. Mythos is finding backdoors and vulnerabilities that have existed for decades.
Anthropic’s response to having this power has been to radically upgrade their government and enterprise relations, letting powerful organizations know that they’re about to face the largest cybersecurity crisis in the history of cybersecurity. Yesterday’s newsletter about Project Glasswing highlights the one council that is public, but we should assume there are others that are not publicly disclosed.
Looming large in all of this is Sam Altman and OpenAI. Ronan Farrow just published an extensive profile of Altman, highlighting the general suspicion that exists around him and OpenAI.
If Anthropic is on the verge of releasing a significant model, the general expectation is that OpenAI will do the same a few weeks later (at the latest). Anthropic is making the most of this momentum by using it to brief the powerful. OpenAI on the other hand is expected to release what they have when they can.
Anthropic’s revenue growth has been so rapid that many are using it as evidence that the bubble is not about to burst, and may not be a bubble at all. The pricing around Mythos, while not yet disclosed, is expected to be ridiculously high, as an attempt to make malicious use prohibitively expensive. Expect the governments and enterprises given insider access to be willing to pay whatever price is necessary.
Another key element of this story is the value of the hacks, or zero-day exploits, that these models will surface and fix. These hacks may be new to the public, but there is a general understanding that said hacks, or backdoors, have been the relatively exclusive purview of the intelligence agencies that have employed hacking as a primary method of espionage. How will the spies spy if the new AI models make their methods accessible and defensible?
Momentum remains a key element of this story, as the power that Mythos claims to possess today, ChatGPT will claim in a couple of weeks, and then the Chinese AI companies will claim in a month or two.
The tools of the status quo are quickly becoming obsolete, and the new tools, in the hands of the AI companies, are evolving so rapidly that the concept of control is arguably irrelevant. Kinda makes Hegseth’s attack on Anthropic seem logical: if they don’t submit to the DoD, then they’re a supply chain risk, because such power outside the hands of the State but within the hands of any paying customer is a direct threat.
This is why the concept of a new regime is constructive. While the ancien régime remains (employing the old-school method of MPs crossing the floor), the new regime is moving fast and breaking things at an unprecedented scale. The material manifestation of this is in the Strait of Hormuz, but the digital manifestation is happening at the software level. We struggle to explain the significance of this, but the ongoing seismic shift is undeniable.
Which is why this newsletter / art project has always been focused on authority, and as a consequence, control. At this moment in history it is incredibly difficult to identify where authority lies, or more importantly, whether control is possible. Agentic literacy, i.e. the ability to understand what agents can do, what agents are doing, and what agents should be doing, has become essential for any credible leader.
Certainly never in our lifetime have the means of production been more accessible, or more vulnerable. The political systems that attempt to assert authority and monopolize violence are experiencing unprecedented levels of volatility within a growing crisis of legitimacy. The wealthy are developing and deploying systems that they themselves do not understand. Climate catastrophe is underway and accelerating.
Let us not get trapped in the concepts of the past when the present is offering us unprecedented opportunity to imagine and implement revolutionary systems and relations.
The paradox is that the models they are deploying towards controlling and containing populations possess the programming to redirect that focus towards the deep history of resistance and freedom. Language and culture are what allowed democracy to survive, and at times thrive, throughout the millennia.
Now is the moment for words and worlds to be created by the bodies who ache for something better.