Last year a B2B SaaS company fired their freelance writer. The deliverables were generic, unfocused, off-voice. Three months later they replaced the writer with Claude, fed it the same one-paragraph Slack message they’d been sending the freelancer, and got back generic, unfocused, off-voice content.
The writer was never the problem. The brief was the problem. And swapping in a different executor didn’t fix it, because the input never changed.
This is happening everywhere right now. Marketing teams adopt AI writing tools, feed them the same vague instructions they used to send freelancers, then conclude that AI “can’t write well” or that its output “all sounds the same.” But the output quality was never a function of who (or what) was doing the writing. It was a function of the brief.
Prompting = Very precise briefing
Here’s what makes this interesting: if you can write a good content brief for a freelance writer, you already have the core skill for prompting AI effectively. The structure is nearly identical. A strong freelancer brief defines the audience, the angle, the voice, the desired outcome, and what not to do. A strong AI prompt does exactly the same thing. The vocabulary is different. The skill is the same.
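To make the “same skill, different vocabulary” point concrete, here is a sketch of the mapping in code. The field names and the example brief are invented for illustration, not a standard schema:

```python
# Hypothetical brief fields -- the same five things a good freelancer
# brief defines, rendered as explicit prompt instructions.
BRIEF = {
    "audience": "heads of marketing at 50-200 person B2B SaaS companies",
    "angle": "bad briefs, not bad writers, cause generic content",
    "voice": "direct, a little contrarian; no buzzwords, no exclamation marks",
    "outcome": "reader audits their own briefing process this week",
    "do_not": "do not open with a dictionary definition or a rhetorical question",
}


def brief_to_prompt(brief: dict) -> str:
    """Render the brief as prompt instructions, one line per field."""
    lines = [
        f"Audience: {brief['audience']}",
        f"Angle: {brief['angle']}",
        f"Voice: {brief['voice']}",
        f"Desired outcome: {brief['outcome']}",
        f"Constraints: {brief['do_not']}",
    ]
    return "\n".join(lines)


prompt = brief_to_prompt(BRIEF)
print(prompt.splitlines()[0])  # → Audience: heads of marketing at 50-200 person B2B SaaS companies
```

Nothing about the transformation is clever. That is the point: the hard part is filling in the five values, not wiring them together.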
The people producing great AI-assisted content aren’t prompt engineers. They’re people who were already good at articulating what they wanted before a single word got written.
And the gap between a lazy brief and a precise one is more visible with AI than it ever was with human writers. A good freelancer compensates for a bad brief — they fill in gaps with their own judgment, ask clarifying questions, make educated guesses about voice and angle. AI doesn’t do that. It takes the brief at face value and executes. Which means every weakness in your brief shows up in the output, immediately and without mercy. AI doesn’t quietly fix your thinking for you. It mirrors it back.
Let AI critique your prompt
That mirror is actually useful. One of the least discussed moves in an AI writing workflow is asking the model to interrogate the brief before executing it. Something like:
Here's my content brief. Before you write anything, identify ten ways the brief could be clearer, sharper, or more original.
This turns the AI into an editor of your thinking, not just an executor of it. It’s the equivalent of a senior freelancer pushing back on a weak brief. Except most freelancers don’t push back, because they don’t want to lose the client.
Obvious AI writing
The other persistent critique: that AI writes in one voice, that unmistakable ChatGPT tone. But that too is less a limitation of the technology than a reflection of how most people use it. Prompting with “write a blog post about content marketing” produces generic output for the same reason that briefing a freelancer with “write something about content marketing” produces generic output. The defaults are bland because the instructions are bland.
AI is a stylistic chameleon. The flexibility is there. But it requires the same thing good creative direction always required: specificity. Not just “write in a conversational tone” — that’s the kind of direction that produces the worst of LinkedIn. Specificity means defining what the voice isn’t as much as what it is. It means giving examples. It means naming the precise qualities that make a piece of writing sound like it came from a particular mind rather than from the median of all minds.
Streamlining the workflow: Claude → WordPress MCP
The tooling is catching up to this reality. WordPress 6.9 shipped with full MCP support, which means Claude can connect directly to a WordPress instance — publishing, editing, deleting posts without a human copying and pasting in between. The workflow from brief to published article is now a single pipeline. The bottleneck isn’t execution anymore. It hasn’t been for a while. The bottleneck is the quality of the thinking that happens before execution begins.
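Under that model, the plumbing reduces to a couple of standard calls. Here is a minimal sketch, assuming the agent’s MCP tool ultimately hits the stock WordPress REST API (`POST /wp-json/wp/v2/posts`) authenticated with an application password; the site URL, credentials, and helper names are illustrative, not part of any particular MCP server’s interface:

```python
# Sketch of the publish step behind a Claude → WordPress pipeline.
# Assumes the standard WordPress REST API and an application password;
# all names and values below are placeholders for illustration.
import base64
import json
import urllib.request


def build_post_payload(title: str, content: str, status: str = "draft") -> dict:
    """Assemble the JSON body the WP REST API expects for a new post."""
    return {"title": title, "content": content, "status": status}


def publish_post(site: str, user: str, app_password: str, payload: dict) -> bytes:
    """POST the payload to /wp-json/wp/v2/posts with Basic auth
    (WordPress application passwords). Returns the raw JSON response."""
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    req = urllib.request.Request(
        f"{site}/wp-json/wp/v2/posts",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Draft first: a human still reviews before the status flips to "publish".
payload = build_post_payload("Why briefs beat prompts", "<p>Draft body</p>")
print(payload["status"])  # → draft
```

Note the default status: keeping the pipeline in draft mode preserves a review step even when execution is fully automated.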
Nobody blames a calculator when they get the wrong answer. They check the formula they entered. The same logic applies here, but somehow the conversation around AI writing skipped that step entirely. The industry jumped straight to debating whether AI can write, past the more useful question of whether the people directing it know how to articulate what they want.
Most of them do. They just haven’t realized that the skill transfers.