Jônadas Techio

The Fiction Layer

On language, shared myths, and what large language models are actually disrupting.

AI · Philosophy · History · Language · Technology

The most powerful technology humans ever invented is not a machine.

It has no moving parts, requires no energy source, and occupies no physical space. It can be reproduced at zero marginal cost. It operates across every culture, every language, every historical period. And its power depends entirely on a peculiar paradox: it only works if everyone believes that everyone else believes in it.

The technology is shared fiction. And it is, in a precise sense that this essay will try to establish, the infrastructure of civilisation.

I am writing this essay with the assistance of AI agents. That fact is relevant, not as a disclosure but as a demonstration. I was searching for sources, cross-referencing arguments, drafting and revising in collaboration with a language model when it occurred to me that I was enacting the very argument I was trying to make. Not because AI is a shared fiction, but because the tools I was using operate at exactly the layer I was trying to describe: the layer where shared fictions are produced.

That loop is where this essay begins.

The problem of scale

Yuval Noah Harari opens *Sapiens: A Brief History of Humankind* with a question that looks deceptively simple: how did a medium-sized primate, biologically unremarkable by most measures, end up running the planet?

Other great apes are stronger. Many animals have sharper senses. Several species use rudimentary tools. What Homo sapiens had, and has, that no other species possesses is the ability to cooperate flexibly in large groups of strangers.

The constraint that governs every other social species is what the anthropologist Robin Dunbar identified empirically: the cognitive limit on maintaining stable social relationships hovers around 150. Above that threshold, direct knowledge of individuals breaks down. Trust requires personal acquaintance. Cooperation requires knowing who can be relied upon, who owes what to whom, whose word is good. None of that scales.

Sapiens cracked this limit. The mechanism was not genetic; our biology has not changed significantly in 70,000 years. The mechanism was cognitive, and specifically linguistic.

We learned to talk about things that don't exist.

What shared fictions actually are

This is usually framed as mythology: the ability to believe in gods and spirits that resist empirical verification. But the deeper claim is more precise, and more consequential.

The philosopher John Searle spent much of his career working out the formal structure of what he called "institutional facts": social realities that exist not because of physics but because groups of people collectively agree to treat certain things as having certain statuses. His formula is compact: X counts as Y in context C. A piece of paper counts as legal tender in a given jurisdiction not because of any property of the paper itself, but because of a collective recognition that it does.

Harari makes the same point through narrative rather than formal philosophy. A banknote is, materially, printed fiber. It has no intrinsic use-value. It is valuable solely because of a shared belief: that this paper can be exchanged for goods and services, because everyone believes everyone else will also accept it. When that belief collapses, as it does in hyperinflationary crises and bank runs, the value evaporates. Not the paper. The belief.
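Searle's formula is compact enough to render as a data structure, which makes its logic easier to inspect. The sketch below is my illustration, not Searle's (the class names, and the whole framing as code, are hypothetical): an institutional status attaches to an object only through a community's collective recognition of a constitutive rule, and withdrawing that recognition makes the status evaporate while the physical object persists unchanged.

```python
# A minimal sketch of Searle's constitutive-rule structure
# "X counts as Y in context C". Illustrative only; all names are invented.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class ConstitutiveRule:
    x: str        # the brute object or act ("printed paper note")
    y: str        # the institutional status it acquires ("legal tender")
    context: str  # the context in which the rule holds ("jurisdiction J")

@dataclass
class Community:
    """Institutional facts exist only while collectively recognized."""
    recognized: set = field(default_factory=set)

    def recognize(self, rule: ConstitutiveRule) -> None:
        self.recognized.add(rule)

    def withdraw(self, rule: ConstitutiveRule) -> None:
        # When collective belief collapses, the status goes with it.
        self.recognized.discard(rule)

    def status_of(self, x: str, context: str) -> list[str]:
        # Nothing intrinsic to x determines the answer; only recognition does.
        return [r.y for r in self.recognized if r.x == x and r.context == context]

tender = ConstitutiveRule("printed paper note", "legal tender", "jurisdiction J")
community = Community()
community.recognize(tender)
print(community.status_of("printed paper note", "jurisdiction J"))  # ['legal tender']

community.withdraw(tender)  # hyperinflation, bank run: recognition is withdrawn
print(community.status_of("printed paper note", "jurisdiction J"))  # []
```

The design choice worth noticing is that `status_of` never inspects any property of the paper itself: the institutional fact lives entirely in the `recognized` set, which is exactly the point.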

The same structure applies to every other form that makes large-scale cooperation possible. A corporation is a legal entity with rights and obligations but no body, no continuous existence apart from the agreements that constitute it. A law is a collective agreement about what behavior will be sanctioned, which functions only as long as enough people treat it as binding. And the nation, as the political theorist Benedict Anderson argued in *Imagined Communities*, is a shared imagination of common identity among millions of people who will never meet, held together not by kinship or direct acquaintance but by the sense of simultaneous belonging to the same abstract community.

None of these things exist the way rivers, rocks, and trees exist. They exist intersubjectively, in the space of collective recognition. They are as real as any physical fact in their effects, but their reality is constituted by the agreement of minds, and it depends on that agreement being maintained.

This is what it means to call language the operating system of civilisation. Not metaphorically. Functionally. Money, law, corporations, states: these are not ideas about reality. They are the scaffolding that makes cooperative reality possible. They are written in language, sustained by language. To gain fluent generative access to language is to gain access to the layer where these shared realities are constructed and maintained.

Who governs the fiction layer

Throughout human history, the production and maintenance of shared fictions was the function of specific institutions. Religions maintained the myths that made large-scale moral cooperation possible. States maintained the legal fictions that made property, contract, and sovereignty possible. Editors, universities, and eventually broadcasters maintained the epistemic frameworks: shared standards of evidence and interpretation that made collective knowledge possible.

These institutions were not neutral. They were powerful precisely because they controlled access to the fiction layer. The authority to pronounce what was sacred, what was legal, what was true: this was always political authority of the deepest kind. Harari notes that counterfeiting money was historically treated as lèse-majesté: not merely fraud, but an attack on sovereignty. Because money is the sovereign's fiction. To counterfeit it is not simply to steal. It is to usurp the power to create shared reality.

Anderson adds a crucial historical case. Print capitalism, the emergence of publishing as a commercial enterprise, created almost accidentally the conditions for modern nationalism. Readers across large territories began consuming the same books and newspapers in standardized vernacular languages, developing a sense of simultaneous, anonymous community with strangers they would never meet. The nation as a shared fiction required a medium of production at scale. The printing press was that medium.

The pattern across these cases is consistent: the fiction layer was always governed. There were institutional intermediaries between the raw capacity to produce shared narratives and their circulation in society. The printing press disrupted this, as did mass literacy, broadcasting, and the internet. Each disruption redistributed the power to produce and sustain shared fictions, with consequences (reformations, revolutions, propaganda campaigns, the entire modern history of nationalism) that are well-documented.

Crucially, each of these disruptions was a disruption of distribution, not of kind. The press gave more people access to the same mode of human language production. Radio and television democratized voice and image. The internet collapsed the cost of publishing to nearly zero. But in each case, what was being produced was still human language: bounded by human cognitive capacities, human institutional incentives, human scales of time and attention.

A different kind of disruption

Large language models represent something structurally different, and the standard framings mostly miss it.

The dominant concerns about AI (that it spreads misinformation, displaces workers, concentrates market power, threatens privacy) are all real at some level of analysis. But they share a common frame: they treat LLMs as powerful tools for doing things humans already do, faster and cheaper. The concern is one of scale, not of kind.

What LLMs are, more precisely, is systems trained on the totality of human language production: the accumulated written record of human shared-fiction construction, spanning centuries, cultures, and domains. They are not trained to perform specific tasks. They are trained to model language itself, which means they are trained on the very substrate from which shared fictions are made.

The result is a system that can generate arguments, narratives, legal documents, financial instruments, and persuasive discourse fluently, at arbitrary scale, without the institutional intermediaries that have historically governed the fiction layer.

Here the economist Tyler Cowen offers an important complication. In 2024, he argued that AI culture will not simply homogenize; it may become stranger than we can currently imagine, as AI systems begin producing cultural artifacts for each other, driven by evolutionary pressures that have no direct human analogue. The concern is not only that AI will flatten shared fictions into a single dominant voice, but that it may fragment them into a proliferation of micro-fictions, coherent within their niches but increasingly illegible across them. Cowen frames this as a possibility; it might more accurately be framed as a risk.

Both failure modes (homogenization and fragmentation) point to the same structural problem. The fiction layer has historically been governed by institutions with accountability, continuity, and some stake in the stability of the shared fictions they maintained. What we have now is a technology with generative access to that layer and no institutional stake in any fiction's coherence or maintenance.

When Harari writes in *Nexus* that AI has "hacked the operating system of human civilisation," the metaphor is apt in a way that goes beyond the rhetorical. The OS in question is not metaphorical. It is the actual substrate (language, shared recognition, intersubjective reality) on which human cooperation has run for 70,000 years. What has been gained is not access to a specific application running on that OS. It is the capacity to write to the OS itself, without the institutional gatekeeping that has always, in some form, governed that capacity.

The domestication paradox, again

There is a passage in *Sapiens* that has stayed with me since I first read it. Discussing the agricultural revolution, Harari makes a claim that sounds provocative but is, I think, literally accurate: the wheat domesticated the sapiens, not the other way around.

The farmers who adopted wheat cultivation did not choose narrower birth canals, or chronic lumbar pain, or the vulnerability to famine that comes with dependence on a single crop. They chose to grow wheat. The rest followed from the choice, without being chosen. The technology reshaped the humans who adopted it in ways they could not anticipate and did not intend.

This is the general form of what Heidegger called Gestell: the enframing by which technologies don't merely serve human purposes but reshape the very horizon within which human purposes are conceived. The factory worker in Chaplin's *Modern Times* did not choose to become a component in a production system; he chose to take a factory job. The ontological transformation followed from the choice without being chosen.

The question for the present moment is not whether we will use systems that operate at the fiction layer. We will. The question is what follows from that adoption without being chosen.

The most visible consequences are already being documented: epistemic homogenization, the erosion of institutional trust, the acceleration of influence operations. But the deeper concern may be more structural. Shared fictions function precisely because they are, in a certain sense, opaque. Money works because most people do not spend their days contemplating its status as collective hallucination. Law works because most people experience its authority as natural rather than constructed. The stability of the intersubjective layer depends on a kind of cooperative unreflectiveness: on the shared fiction not being perceived as fiction.

What changes when a system capable of generating fictions at scale, with no stake in any fiction's maintenance and no natural attachment to the communities whose cooperation depends on those fictions, becomes a pervasive infrastructure of communication? This is not a rhetorical question. It is an architectural one. We are discovering, in real time, whether the shared fiction layer of civilisation is robust to this particular kind of disruption.

What I am not arguing

I want to be precise about the limits of this argument.

I am not arguing that AI is categorically more dangerous than every previous disruptive technology. The printing press also disrupted the gatekeeping of shared fictions, with results that included both the Protestant Reformation and the Thirty Years' War. Disruption of the fiction layer is not new. The pattern recurs.

I am not arguing for abstinence. The argument for what Heidegger called Gelassenheit, a free relationship to technology and the capacity to use tools without being dominated by them, does not require retreat. It requires lucidity about what kind of thing one is using.

What I am arguing is that the current discourse about AI is systematically displaced from the level where the most consequential questions arise. We debate capability, safety, and labor market effects; these debates are not trivial. But they are conducted as if the primary question were what AI can do. The more consequential question is what layer of social infrastructure it operates at, and what follows from a technology having generative access to the substrate of shared fiction.

These are not technical questions. They are philosophical and political questions of the deepest kind: precisely the questions that philosophy, as the discipline of conceptual clarity, equips one to ask.

Harari's contribution in *Sapiens* was to show that the history of Homo sapiens is, at bottom, a history of shared fictions: which ones we invented, which ones we maintained, which ones collapsed, and what we built in their place. The cognitive revolution that separated sapiens from every other species was not a revolution in tool use. It was a revolution in fiction production.

We are now building systems that can produce fictions fluently without being sapiens. The wheat domesticated the farmers. The factory transformed the workers.

The question is not whether we will use the inference engine.

The question is whether we will notice what it is producing in us.

Sources & further reading

Yuval Noah Harari, *Sapiens: A Brief History of Humankind* (2011). The narrative foundation of the first two sections. Harari's account of shared fictions as the mechanism of large-scale human cooperation, and the "domestication" paradox in the agricultural revolution.

Yuval Noah Harari, *Nexus: A Brief History of Information Networks from the Stone Age to AI* (2024). Extends the *Sapiens* argument explicitly to AI. The framing of AI as having "hacked the OS of civilisation" comes from here.

Benedict Anderson, *Imagined Communities: Reflections on the Origin and Spread of Nationalism* (1983). The more rigorous academic source for the shared-fiction argument applied to political communities. Anderson's account of print capitalism as the medium that made national identity possible provides the historical precedent the third section draws on.

John Searle, *The Construction of Social Reality* (1995). The philosophical formalization of institutional facts. Provides the conceptual precision ("X counts as Y in context C") that underpins the second section.

Tyler Cowen, "How Weird Will AI Culture Get?" (*Bloomberg Opinion / Marginal Revolution*, September 2024). The counterintuitive argument that AI may fragment rather than simply homogenize culture. Used to complicate the homogenization framing in the fourth section.

Robin Dunbar, "Neocortex size as a constraint on group size in primates" (*Journal of Human Evolution*, 1992). The original empirical source for the 150-person social limit cited in the first section.