Writing

Ideas about AI, augmentation, and the infrastructure gap.

The Replacement Story Is Too Small

Most public conversation about AI imagines a simple substitution. AI takes the job. AI writes the thing. AI answers the question. In this story, the future is automated, and humans are either freed or discarded depending on who’s telling it.

But human society is not a pile of tasks. It is roles, trust, rituals, responsibility, taste, care, memory, institutions, and weird accumulated habits that exist because humans are social and mortal and inconsistent.

So the more realistic future is not disappearance. It is mutation.

AI does not replace life. It adds a new social layer to it, and existing human roles mutate to serve that layer. The future is probably full of hybrid roles and hybrid teams: in workplaces, in households, and in how we think about identity itself. The question is not what AI will do instead of us. It is what new forms of care, judgment, maintenance, consent, and meaning become necessary when intelligence is ambient.


We Have Done This Before

Every major cognitive technology triggered the same fear: that it would replace something essential about being human. The pattern that actually played out was messier and more interesting than simple replacement.

Photography did not kill painting, but it disrupted the economics of commercial portraiture. What survived and flourished was the part that was never about accurate representation: expression, abstraction, interpretation. Computing machines did not kill mathematics. They eliminated routine arithmetic labor. What remained was structure, proof, abstraction. Spreadsheets did not kill accounting. They displaced bookkeepers. What survived was analytical judgment.

The pattern is consistent but not painless. Automation removes volume from mechanical work. It forces professions to mutate. It makes judgment more visible and more valued, for the people and institutions that survive the transition.


Why This Pattern Holds

There is a structural reason that human roles mutate rather than disappear, and it has nothing to do with nostalgia or protectionism.

Decisions that affect human lives require someone with stakes in the outcome.

An agent can process every data point in a medical record. It cannot weigh that data against the fact that you just lost your mother and you are barely sleeping and right now is not the time to add another medication to your life. That is not a data problem. It is a values problem. Values require a perspective: a lived experience, a position in the world, something to lose.

This is not a limitation of current technology that will be solved with more compute. It is a requirement of legitimacy. A decision that no person stands behind has no moral weight, regardless of its auditability.


The Infrastructure Gap

The connectivity layer is being built aggressively. The Linux Foundation ecosystem is rapidly standardizing agent infrastructure. The Agentic AI Foundation formed around Anthropic’s Model Context Protocol, Block’s goose, and OpenAI’s AGENTS.md, while Google’s Agent2Agent protocol is establishing agent-to-agent communication as a separate project.

But connectivity is not cognition. The ability to pass messages is not the same as the ability to think together. There is a fundamental gap between agents connecting and agents sharing understanding. Between the nervous system and the judgment layer.

Building a shared-memory system for human-agent teams (I have been working on one called Jurati) surfaces governance questions that no current infrastructure answers. What should be remembered? What should be forgotten? Who authorized this knowledge? What is contested versus settled?
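To make those governance questions concrete, here is a minimal sketch of what a single shared-memory record might need to carry in order to answer them. This is a hypothetical data model for illustration, not Jurati's actual design; the field names and statuses are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Status(Enum):
    """Is this knowledge still disputed, or has the team accepted it?"""
    CONTESTED = "contested"
    SETTLED = "settled"

@dataclass
class MemoryEntry:
    content: str                       # what should be remembered
    author: str                        # who contributed this knowledge
    authorized_by: str                 # who approved it entering shared memory
    status: Status = Status.CONTESTED  # contested vs. settled
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    expires_at: Optional[datetime] = None  # what should be forgotten, and when

    def is_expired(self, now: Optional[datetime] = None) -> bool:
        """An entry past its expiry should be forgotten by the team."""
        if self.expires_at is None:
            return False
        return (now or datetime.now(timezone.utc)) >= self.expires_at
```

The point of the sketch is that every governance question becomes a required field: a record with no `authorized_by` cannot exist, and "forgetting" is a first-class property rather than an afterthought.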

If the shared cognition layer gets built as a proprietary product by a few companies, the way social media captured social graphs before open alternatives could form, those companies will own the infrastructure of how groups think. If it gets built as an open protocol, the way HTTP and email were, then it becomes public infrastructure that anyone can build on and no one controls. That window is open now, but it will not stay open long.


The Access Question

If augmentation is the winning strategy, then the access question becomes urgent. Not just morally, but economically. Every previous cognitive technology created a divide. If agent augmentation follows that pattern (sophisticated agents for the wealthy, crude ones for everyone else) the resulting inequality would be qualitatively different from anything we have seen. Not a gap in comfort. A gap in thinking capacity.

What actually solves this is sufficiency. A floor high enough that no one’s potential is artificially capped by access. Every person currently cut off from good education, mentorship, medical guidance, legal help, or financial advice represents unrealized capability. A billion people with adequate cognitive partnership produce more innovation, more economic activity, and more problem-solving than a million people with premium access. Raising the floor raises everything.


What Kind of Future

The future will still be life. Messy, relational, negotiated. But now some of the relationships will involve systems that remember, suggest, act, and shape the room. The future may not ask us to become less human. It may ask us to become responsible for more kinds of minds, more kinds of memory, and more kinds of collaboration.

We have absorbed major cognitive transitions before: writing, printing, electricity, the internet. Each time, we invented new professions, new norms, new institutions. Each time, we did it too slowly and unevenly, and people suffered in the gap. This time, we can see it coming. That is genuinely new. What we do with that visibility is a choice, and probably the most consequential one of the next decade.