Traffic to individual help articles is down (along with traffic to just about all pages). Quietly, consistently, over time. Meanwhile, usage of AI assistants trained on all this content has increased substantially. People are getting their questions answered, but they’re never reading the docs.

Yet… The docs are working.


The First Reader

The foundational skill of technical writing—the thing that separates it from documentation-by-accident—is the ability to imagine a specific person at a specific moment of confusion. Not just “users” in aggregate. A real person—we’ll call her Sarah: It’s 11pm, she’s trying to configure something before a deadline, and it’s not working. You write for her. You think about what she already knows, where she’s likely to get stuck, what she most needs to hear first. The structure, the word choice, the examples—all of it is an act of imagining a real person’s experience and trying to improve it.

[Image: a woman at a laptop late at night, focused on a glowing screen in a dimly lit room.]
Sarah, hypothetically, at 11pm.

That’s the whole craft. Everything else—the style guides, the information architecture, the structured content strategy—is either downstream of the ability to hold the reader in your mind or meant to support the outcome.

And here we are in 2026, where I’m increasingly writing for an unliving intermediary rather than a very-much-alive end-user.


The llms.txt… movement? (I don’t know, that’s probably giving it too much authority)… is a proposed standard for providing structured, machine-readable versions of documentation so LLMs can parse them more effectively. It is, at its core, a reasonable response to a real problem. In many workflows, AI agents are the first reader of documentation. They ingest it, synthesize it, and serve back a version of it to the human downstream. If the structure of your docs makes it hard for the model to reason about your product, the human gets a worse answer. So, understandably, you’d want to optimize for that process.
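For anyone who hasn’t looked at the proposal: an llms.txt file is just plain markdown served at a site’s root, with a title, a short summary, and curated lists of links to markdown versions of key pages. A minimal sketch might look something like this (the product name and URLs are invented for illustration):

```markdown
# ExampleApp

> ExampleApp is a scheduling tool. These docs cover setup,
> configuration, and the REST API.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): install and first run
- [Configuration](https://example.com/docs/config.md): all settings, with defaults

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

The idea is that a model can fetch this one file and get a clean, low-noise map of your documentation instead of scraping navigation chrome and marketing pages.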

I don’t have a fundamental objection to that. It follows logically from how the tools are being used.

But I do wonder about the order of things. The machine reads first, then serves the human. That’s a considerably different relationship with my audience than the one I have had thus far in my career.


Writing for an Intermediary

When you write for an LLM, you’re writing for something that will extract what it needs and discard the rest. The careful paragraph I labored over—the one that anticipated a reader’s skepticism and got ahead of it—is, to the model, just tokens. Maybe relevant, maybe not, depending on the query. You structure content. You add metadata. You think about what a model needs in order to reason correctly. These are legitimate and important craft decisions. But they’re not the same decisions we used to make for humans.

Good documentation still produces better AI responses. The human is still at the end of the chain, still receiving an answer that is, in some meaningful way, shaped by what I wrote. The craft still matters.

But it matters at a remove now. The human isn’t reading the thing I wrote. They’re reading the machine’s interpretation of the thing I wrote.

I’ve spent almost fifteen years trying to close the distance between a writer and a confused reader. Now there’s a machine between us.


I don’t think this is all bad, necessarily. Some people will end up with better answers, with fewer clicks, this way—the model doesn’t panic, doesn’t skim, can synthesize from twenty sources without losing the thread. For the user—the true audience—maybe this is better.

But there’s a little something lost in this, as we all adapt our way past it: technical writing was always a form of care. The act of imagining a specific person struggling with something, and making something that helped—that was the point. Not the deliverable. Not the content asset. The person.

If that person is increasingly going to encounter my work through a layer of LLM-mediated synthesis, I don’t want to forget that the user is still the point—still the audience. Practices sometimes get preserved long after the reason for them disappears, but this one is worth holding onto: our first priority is still the end-user, even if LLMs are now our first readers.

We just have to remember that we are still writing for Sarah. The question is whether Sarah will ever read this, or whether the LLM she uses to find her answer will decide what I wrote matters.