There’s a certain kind of person who complains that AI tools are “basically useless.” Every response is generic. Nothing is tailored to what you actually need. The outputs look fine but are somehow always slightly wrong, like a draft written by someone who’d been briefed in a hallway and was trying to look confident about it.

They aren’t wrong to point out that the outputs are bad… but they are oblivious to why.


There’s a version of this complaint I see on LinkedIn, usually from people with some real organizational authority behind them. They’ve been handed enterprise subscriptions, attended the internal AI rollout, watched a demo where someone prompted a model into producing something impressive—and then sat down to try it themselves and gotten mush.

The conclusion most of them reach is that AI is overhyped, or that it works for certain narrow use cases but not theirs, or that the technology isn’t quite there yet. These are comforting conclusions. They put the problem somewhere external.

Here’s the less comfortable read: the quality of what AI produces for you is roughly a function of the quality of how you communicate. Not your vocabulary. Not your technical sophistication. Your ability to say, clearly and specifically, what you want and why.

If that skill is underdeveloped, the AI will show you.


The Questions You’re Not Asking

When someone tells me AI doesn’t work for them, I usually have three questions.

Are you using it for things it’s actually good at? Is your desired outcome clear to anyone reading your prompt—or just to you? And are you treating the back-and-forth like a process, or are you expecting a finished product on the first try?

Here’s what I find interesting about those questions: they’re exactly the questions a manager might ask themselves about why a direct report is struggling.

Is this person in the right role for what we’re asking? Have I made the goal clear, or did I assume they’d just figure it out? Am I giving them room to ask for more information, or am I walking away and waiting to be disappointed?

If you’ve spent any time managing people, that pattern should be familiar. Unclear direction produces mediocre output. That’s not some new insight about AI—it’s just a basic tenet of communication.


The Mirror Problem

[Image: the reflection from a car’s side mirror, showing the road behind.]
AI outputs are a reflection. What does it say if you don’t like what you see?

What makes AI different from a direct report—and maybe a little more concerning—is that it won’t tell you when you’ve been unclear. It’ll just give you something. It will do its absolute best to produce a confident-looking response to your underspecified request, and that response will be approximately as useful as the request deserved.

Your direct reports have probably been doing a version of this for years. Not out of malice, but out of the same reasonable professional instinct that keeps them from walking into their manager’s office and saying “I genuinely have no idea what you want me to build.” They interpret. They fill in the gaps. They make a judgment call about what you probably meant, they do that thing, they bring it back, and you say it’s not quite right, and neither of you is sure why.

AI compresses that feedback loop into seconds. Which makes the communication failure more obvious.


Where Technical Writers Have Been Quietly Living

This might sound a little self-serving, but I think it’s demonstrably true: people who’ve worked in technical writing are, as a group, well-suited for this moment. (Ironically so, given that so many technical writing positions are being cut.)

Technical writing is, at its core, the discipline of communicating with specificity. You spend years learning to ask: what does this person actually need to know? What’s the outcome they’re trying to achieve? What assumptions am I making that they don’t share? How do I say this in a way that produces the behavior I want rather than the behavior I’m accidentally implying?

Those aren’t documentation skills. They’re thinking and learning skills—meta-cognition skills—that happen to produce good documentation. And they are exactly the skills that give LLMs the right kind of context for creating the outputs you are actually looking for.

When I write a prompt, I’m not guessing at what the model needs. I’ve spent a long time learning to anticipate the gap between what I think I said and what a reader—or a model—will actually receive. I know to specify the audience. I know to describe the desired output, not just the subject. I know to give context about why I need a thing, because that changes what the thing should look like.

Those habits didn’t come from prompting courses or AI literacy workshops. They came from years of writing instructions for human beings who couldn’t ask follow-up questions. “Specificity is the soul of narrative,” one might say.


The Opportunity in the Bad News

So here’s the uncomfortable part, addressed directly to the people I mentioned at the top: if AI isn’t working for you, the gap probably isn’t in the technology. It’s in your habits for communicating intent, habits that, until now, have been masked by the patience and professional grace of the people you work with, and, if you’re in management, maybe by their fear of blowback.

The good news is that fixing it isn’t complicated. It just requires doing the same thing for AI that you should be doing for your team: decide what the finished product should actually look like before you ask anyone to make it. Give context, not just a request. Be specific enough that someone without your mental model can do the right thing. And stay engaged through the process instead of dropping a vague ask into the void and hoping something useful comes out.

These aren’t AI skills. They’re management skills. Communication skills. The kind technical writers develop as standard operating procedure.

The side effect—better AI results—is almost beside the point.


Years of working in tech with people who aren’t trained to think in specificity, or to pare content down to only what matters, have meant that I often leave a meeting thinking: “I have no idea what they want me to do.” That experience forced me to learn how to extract useful information from someone and turn it into instructions someone else could follow.

I couldn’t have known that skill would matter so much, for everyone, right now.

So if you’re wondering where to start with getting better results from your AI prompts, talk to your local technical communication expert. They can probably help you improve that, and help you improve how you communicate with your human coworkers, too.

Need clearer communication—human or AI?

I help teams write with more specificity and intention. Take a look at what I offer.

See my services