Essays · Articles · Long-form

Writing

Eight years of thinking in public — from a 2017 warning before the transformer existed to the full civilisational framework.
Anchor essay
Position paper  ·  2025
The Literacy We Don't Have Yet
Every civilisational shift in how information moves has produced a new literacy. We are living through the biggest such shift in five hundred years. We have not yet produced the literacy to match it.
Vincent Murphy  ·  Hyperludic Ltd  ·  2025

In 2016, Geoffrey Hinton — one of the founding figures of modern AI — published a diagram ranking what computers could and could not do. Under the heading "nowhere near solved" he listed: understanding a story and answering questions about it; writing interesting stories; interpreting a work of art; human-level general intelligence.

Every item on that list is now either solved or actively contested. The diagram is less than a decade old.

This is not a story about technology moving fast. It is a story about a civilisational shift arriving without the cultural infrastructure to receive it — without the habits, practices, and social norms that every previous shift of this magnitude eventually produced. We have the press. We are still waiting for literacy.

· · ·

Four accelerants, four literacies

There is a category of invention — I call these Hyperludic accelerants, or Cognologies — that does not merely add a new tool to human life. It changes the substrate of thought itself by transforming the nature of information. Language made information coherent. Writing made information permanent. Print made information prevalent. AI makes information malleable — generative, interactive, abundant, and soft. The page has become clay.

Each shift produced disruption, anxiety, and eventually a new literacy. Reading and writing were not natural. They were trained. They required institutions, curricula, and the social agreement that this was a thing worth learning.

AI is the fourth accelerant. It does not merely make information faster or cheaper. It makes information malleable. And we have no literacy for it yet.

· · ·

What passes for literacy is not enough

There is no shortage of "AI literacy" programmes. Most of them teach prompting. Some teach safety. A few teach ethics. They are well-intentioned and largely insufficient — not because the content is wrong, but because the frame is wrong. Teaching someone to prompt an AI model is a little like commissioning a scriptorium to produce a manuscript explaining the best uses for the printing press. It can be done. It is not nothing. But it is not the thing.

· · ·

Ludicity: the discipline for the new medium

Ludicity — from ludus, the Latin for play and game — is the name I give to this new literacy. It is the capacity to explore, exploit, and explain AI systems through playful rigour: treating information not as sacred and fixed but as malleable and abundant, and pairing that creative freedom with the discipline of verification. The mantra: Probe the space. Shape the flow. Verify the claim.

· · ·

Why this is urgent

The printing press did not simply add books to the world. It rewired the distribution of authority, the pace of idea propagation, and the social conditions of knowledge. Within a century of Gutenberg, the institutions that had managed information for a millennium were under fundamental challenge. We are in the equivalent of the 1460s. The press exists. The indulgences are already being printed. The Reformation has not yet arrived, but its preconditions are accumulating.

The question is not whether AI will change institutions, norms, and the distribution of authority. It will. The question is whether we build the literacy in time to shape how.

Print took roughly a century to produce the institutions and practices that made it governable and generative. We do not have a century. The question of how people relate to abundant, malleable, generative information is being decided now — in schools, in organisations, in policy, in culture. The literacy that answers it already has a name.


Prologue: The Moment the Channel Breaks

Working text · 2026

There is a moment you will recognise. It arrives without warning — a demonstration, a headline, a conversation — and something in you shifts. Not panic exactly. Not excitement exactly. Something older and less nameable: the sensation of standing at the edge of a territory you cannot map, feeling the ground of your assumptions give way beneath you.

That is the Woah.

It has happened before. Not to you, but to the species. It happened when the first cities rose from the floodplains of Mesopotamia and human beings found themselves embedded in a social complexity no individual mind could hold. It happened when the printing press began producing books faster than theology could process them, and a thousand years of managed truth suddenly became ungovernable. Each time, the cognitive system broke. Each time, something new was built from the wreckage.

We are in one of those moments now. And this book is an attempt to help you stand in it without flinching.

All writing

Eight years.
One consistent argument.

The 500-Year Loop: DeepMind, Cambridge, and the New "Right to Print"

When Google DeepMind appointed a Cambridge philosopher to lead its consciousness work, most read it as tech getting serious about ethics. A structural reading says otherwise: Church scriptoria → Star Chamber licensees → Oxbridge press culture → BBC paternal authority → AI ethics appointments. Same gatekeeping function, new institutional costume. Who gets to hold the pen?

Co-written with Claude · Published on LinkedIn
Read →
The Oxford Union: An Even Wider Debate

The Oxford Union is a print-era institution born of scarcity, structured around linearity, optimised for a world where debate was rare and gatekept. In an algorithmic age of abundance, where discourse is networked and mass-participatory, it risks becoming an anachronism. The first published piece to use the word Cognology.

Published on LinkedIn
Read →
Why so much of AI & Business isn't remotely plain sailing

Businesses are expected to become expert navigators of AI while simultaneously running their day-to-day operations — an absurd demand. The case for a dedicated AI navigator: someone who actively relishes being pilot and guide through wholly uncharted waters, so you don't have to learn to tell your spinnaker from your mizzen topsail.

Published on LinkedIn · July 31, 2024
Read →
Moats are great, Asteroid Fields are Better

Warren Buffett's moat — the static competitive barrier — is a Printocene strategy for a world that no longer exists. In the AI era, the superior approach is the asteroid field: constantly shifting, unpredictable, diverse, agile. A moving target beats a fixed wall when the landscape itself is moving.

Published on LinkedIn · July 24, 2024
Read →
The Essence of Why AI is such a Very Big Thing

Starting from Richard Rumelt's question "What exactly is going on here?" and Claude Shannon's definition of information as surprise, this piece arrives at the decisive insight: every General Purpose Technology in history extracts surprise from a specific domain. AI extracts surprise itself — domain-agnostically. That is the essence of why it is such a very big thing.

Published on LinkedIn · July 30, 2024
Read →
Facing the Imminent Problem of AI & Robots

Written in January 2017 — six months before Vaswani et al. published Attention Is All You Need, the paper that gave the world the transformer architecture. The argument: something has fundamentally shifted in the nature of technological disruption; the answer lies in doubling down on the one thing machines cannot replicate — humans helping humans. Republished May 2025.

Written January 2017 · Republished May 2025
Archive