Whose future is it anyway? (Part 1: MAD, MAP, and the illusion of choice)
There’s a framing I’ve been coming back to recently, borrowed from Mo Gawdat.
During the Cold War, the dominant paradigm was MAD — Mutually Assured Destruction — but Gawdat suggests that with AI, we have the potential for something different:
MAP — Mutually Assured Prosperity.
It’s a neat idea, but I’m not convinced it’s that simple.
A revolution, but faster
The scale of change we are talking about is not incremental.
The AI “revolution” could be as fundamental as the Industrial Revolution — but compressed into years rather than decades, with far less time for systems, institutions, and people to adapt.
We’re not just dealing with scale; we’re dealing with speed, and that speed changes everything.
- Decisions compound faster
- Advantages accumulate more quickly
- Mistakes propagate further before they are understood
Historically, societies have had time to absorb technological change. This time, that buffer looks much thinner.
MAD or MAP?
There’s a neat symmetry in Gawdat’s framing.
MAD or MAP. Destruction or prosperity.
It’s compelling. It’s memorable. It gives us a sense that there are two clear paths.
It also implies that the outcome is, in some sense, shared.
That’s where the framing starts to break down: look at what’s actually happening, and you’ll see elements of both outcomes emerging at the same time.
Massively accelerated disruption
If we adapt the framing slightly, MAD becomes something more immediate:
Massively Accelerated Disruption
AI is compressing time:
- Time to learn
- Time to build
- Time to ship
That creates enormous opportunity, but it also creates pressure.
- Entry-level roles begin to disappear
- Mid-level work is compressed
- Systems that relied on gradual progression start to fracture
This is not a distant scenario; it’s already happening.
The promise of prosperity
At the same time, the case for MAP is not hard to make.
- Scientific discovery is accelerating
- Access to knowledge is expanding
- Small teams can achieve what previously required large organisations
- Entire categories of work are becoming more efficient
There is a plausible path to a world where:
- The cost of goods and services falls
- Productivity rises
- More people have access to capability
That path exists, but it is not guaranteed.
The missing dimension
What both MAD and MAP assume is a degree of symmetry — mutually assured outcomes.
That's not how systems like this usually behave.
The more useful question is not whether AI leads to disruption or prosperity.
It’s:
Who experiences the disruption, and who captures the prosperity?
Because those are not the same groups.
The illusion of a shared future
AI is often discussed as if “the future” is a shared destination — something we are all heading towards, more or less together.
That has never really been true, and it's even less true now.
At this point, it feels almost mandatory to quote William Gibson:
“The future is already here — it’s just not evenly distributed.”
It’s one of those lines that risks being overused precisely because it is so consistently accurate.
If anything, AI is accelerating that unevenness.
Different people, companies, and countries are already experiencing very different versions of this transition:
- Some gain extraordinary leverage
- Some see their work automated away
- Some have access to tools and infrastructure
- Others are dependent on systems they do not control
This is not a single trajectory, but rather a set of diverging ones.
Speed, capability, and agency
Part of what makes this moment difficult to reason about is that several forces are moving at once:
- Capability is increasing rapidly as AI lowers the barrier of deep knowledge acquisition, and capacity is expanding now that AI tools are in so many — but crucially not all — hands.
- Agency is concentrating around those who control capital, compute, and platforms; only those with access to the tools, and the empowerment to act, can truly take advantage.
- Speed of change is accelerating, and clashing with structure that was designed for a much slower era.
- Time for adaptation is shrinking, and the tempo of progress, delivery, and disruption is rising.
They don't move in lockstep, and when they fall out of alignment, systems become unstable.
That is as true for engineering teams as it is for economies.
A different kind of question
So rather than asking whether AI leads to destruction or prosperity, a more useful framing might be:
- Who is building the systems?
- Who controls them?
- Who benefits from them?
- Who is left out?
Or, more simply:
Whose future is it anyway?
What comes next
In the next post, I’ll look at how this plays out at a global level.
If AI becomes a — or the — primary engine of productivity, then access to capability is no longer just a technical question; it becomes an economic one.
And potentially, a geopolitical one.
The future is not a single thing that arrives all at once; it’s built, funded, and distributed.
The outcome will depend less on what is possible, and more on who gets to decide how it is used.
