The story of AI has been about machines understanding us. But something else is happening: we are changing how we write.
For years, the dominant narrative about artificial intelligence has moved in one direction. Models are learning to understand us. They parse our grammar, infer our intentions, tolerate our ambiguities. The trajectory seems clear: machines are becoming fluent in human.
But this framing, however satisfying, captures only half of what is actually occurring. A quieter transformation is unfolding in the other direction, one that has received remarkably little attention. Humans are learning to speak machine, not by studying programming languages, but by subtly, almost unconsciously, reshaping the way we write.
Watch how people work with large language models today, and you will notice something striking. Marketers draft briefs in Markdown rather than flowing prose. Product managers organize their thoughts into bullet hierarchies with explicit constraints. Designers describe user flows step by step, as if writing stage directions. Engineers explain complex systems in plain language, but with a structure that would feel foreign in ordinary conversation.
Across disciplines, a convergence is taking place. People are gravitating toward clear sectioning, explicit intent, enumerated constraints, concrete examples over abstract descriptions, and careful separation of what they want from how they want it done. These are not programming languages. But they are not quite natural language either. They occupy a new space in between: a kind of structured vernacular optimized for probabilistic interpretation.
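What this looks like in practice is easiest to show than to describe. A marketer's brief written in this structured vernacular might read something like the sketch below; the headings, constraints, and wording are invented for illustration, not a standard anyone has agreed on:

```
## Intent
Announce the beta launch of our scheduling feature to existing customers.

## Constraints
- Tone: plain and confident, no superlatives
- Length: under 150 words
- Audience: current users on the free plan
- Do not mention pricing changes

## Example of the voice we want
"You asked for fewer back-and-forth emails. Starting today, you can share a booking link instead."

## Output
An email subject line and body, as plain text.
```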
This is not a future in which everyone learns Python. What is emerging instead is something more subtle: a set of lightweight conventions embedded in everyday writing. These conventions are human-readable and easy to produce, informal enough to remain flexible yet formal enough to serve as guardrails when a model interprets them. The result is a new kind of literacy: language that remains natural to us while being structured enough for machines to execute with consistency.
We are not, in other words, formalizing language. We are conditioning it.
This shift is often mistaken for "prompt engineering." It is not. Prompting optimizes individual interactions: a better question here, a cleverer framing there. Conventions optimize systems of interaction. They are not tricks for extracting better outputs; they are shared patterns that make reliable collaboration possible in the first place. This is not a tooling trend. It is a literacy shift: a new human capability emerging in response to a new kind of machine.
Language as Interface
Traditionally, software demanded rigid inputs. If you wanted predictable behavior from a computer, you had to speak its language: strict syntax, explicit types, carefully structured schemas. Humans adapted themselves to machines because machines could not adapt to us.
Large language models invert this relationship. Instead of writing code that machines execute, we write intent that machines interpret. The model becomes a kind of compiler, translating soft, structured language into whatever output the moment requires: a user interface, an API call, a SQL query, a block of code, a decision. The stability of the output no longer depends solely on the sophistication of the model. It depends on how well the input encodes intent, constraints, and structure. The guardrails are no longer in the system alone. They live in the language itself.
Here is the conceptual leap: we are beginning to write language that functions as a portable runtime specification. Text that is not bound to a specific interface, framework, or medium, yet carries enough structure to be faithfully translated across all of them. A single source of intent, capable of multiple executions. The surface syntax may evolve (Markdown today, something else tomorrow), but the underlying conventions persist.
But isn't this just a return to formal language, like programming? Or perhaps: isn't this simply good writing by another name?
Neither characterization quite captures what is happening. Programming languages are rigid by design; they sacrifice flexibility for precision, and they require explicit translation into machine-executable form. The conventions emerging around LLMs are different. They remain natural language, readable and writable by anyone, yet they carry structural cues that constrain interpretation without eliminating ambiguity entirely. They are soft where programming is hard.
And while these patterns do share something with good writing, the aims diverge. Traditional clarity serves human comprehension. LLM-native conventions serve machine interpretation. The overlap is real, but partial. You can write beautifully and still produce text that a model struggles to execute reliably. You can write plainly, even awkwardly, and achieve remarkable consistency if your structure is sound. The craft is related to good writing, but it is not reducible to it.
Why This Matters
Most conversations about artificial intelligence fixate on model capability: the next benchmark, the next breakthrough, the next leap in reasoning or generation. But this co-evolution between humans and machines suggests something deeper. Reliability will increasingly come from how we write, not just which model we use. The most effective users of these systems are not necessarily those with the cleverest prompts but those with the clearest internal representations of what they want. The future of human-AI collaboration may depend less on raw intelligence than on better interfaces—and language is the first and most fundamental interface of all.
This is not about control, not really. It is about alignment through expression.
The shift is already visible in the tools people are building. Consider Claude's "Skills": they are not functions or APIs in the traditional sense, but named, reusable intentions written in natural language. Their power lies not in formal definition but in consistent phrasing. Users are not programming Claude; they are conditioning it through convention. Cursor's "Commands" follow the same pattern: neither scripts nor macros, but repeatable linguistic patterns that produce reliable behavior. A command is not executed so much as interpreted; yet it remains stable enough to function as an interface.
Even more revealing is the emergence of files like agents.md. These documents do not configure software in any traditional sense. They describe roles, constraints, and expectations in plain language, and yet they function as runtime environments for autonomous systems. The guardrails are enforced not by syntax but by narrative structure.
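To make that concrete, here is a sketch of what such a file might contain; the role, rules, and directory names below are hypothetical, chosen only to show the shape of the convention rather than any particular project's file:

```
# Agent guidelines

## Role
You are a release assistant for this repository. You prepare changelogs and
draft release notes; you do not merge or publish anything.

## Constraints
- Never modify files outside the /docs directory
- Ask before running any command that touches the network
- Cite the pull request for every change you summarize

## Expectations
- Prefer short, declarative sentences
- When uncertain, stop and ask rather than guess
```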
None of these tools introduce new programming languages. They introduce shared ways of writing. Each represents an independent convergence on the same insight: reliability in human-machine systems increasingly emerges from how humans express intent, not from building stricter machines.
The next breakthrough, I suspect, will not be a new model or a new programming language. It will be the emergence of shared, informal standards for what we might call LLM-native writing: patterns that will seem obvious in hindsight yet radically expand what can be executed reliably from text alone.
We have spent years teaching machines to understand us. Now, quietly, we are learning to write so that understanding becomes inevitable.