
Your Best Content Already Exists. It’s Just Trapped.

Every growth engagement starts the same way. The brief says “we need more content.” The roadmap is full of net-new pages, net-new blog posts, net-new landing pages. The assumption is always that the content gap is a production problem.

Then you look at what the company already has.

Help documentation with detailed explanations of every feature, written by people who actually built the product. A developer portal with tutorials that walk through real implementation scenarios. Sales enablement decks that articulate positioning better than anything on the marketing site. Confluence pages where engineers documented architectural decisions that double as thought leadership. PDF whitepapers gated behind forms that nobody fills out anymore, full of original research that never made it into an indexable format.

The content exists. It’s just scattered across subdomains, buried in PDFs, orphaned behind logins, or sitting in internal tools that no search engine has ever touched.

The production trap

Most B2B SaaS companies dramatically overinvest in content creation relative to content surfacing. The content team is measured on output. The editorial calendar is organized around publication cadence. The entire system is optimized for making new things, not for finding and formalizing what’s already there.

This has always been a missed opportunity. But the rise of LLMs makes it a more urgent one.

Large language models don’t just crawl your marketing blog. They pull from help docs, developer documentation, community forums, and any publicly accessible page associated with your domain. The information that shapes how an LLM describes your product, explains your category, or compares you to competitors is just as likely to come from a support article as from a carefully crafted landing page.

If your best thinking lives in places you’ve never optimized, you’re letting the LLM piece together your narrative from scraps.

Context engineering, not content creation

Kevin Indig uses the term “context engineering” to describe this shift, and it’s the right frame. The work isn’t to generate more content. It’s to take the context that already exists inside the company and make it legible to the systems that now determine how your brand is understood.

That means something different from a content audit in the traditional sense. A traditional audit asks: what pages do we have, what keywords do they target, what’s the gap? Context engineering asks a broader question: where does institutional knowledge live, and how much of it is accessible to the outside world?

The answer, for most companies, is: very little.

Sales teams have objection-handling documents that articulate competitive differentiation better than anything on the website. Product teams have specs and decision logs that demonstrate deep expertise. Customer success teams have created resources that answer the exact questions prospects are searching for. None of it was created with SEO in mind, but most of it is more substantive than what the content team is producing from scratch on a two-week cycle.

Why this matters more now

In traditional search, the penalty for having great content in the wrong place was a ranking problem. The page existed but didn’t perform because it lacked internal links, lived on a subdomain with low authority, or was locked behind authentication.

With LLMs, the penalty is different. It’s a narrative problem. When an LLM constructs an answer about your product or your category, it synthesizes from whatever sources it can access. If your most authoritative, most detailed content is invisible, the LLM builds its understanding from thinner sources. Or worse, from what competitors and third parties have published about you.

This is why Gianluca Fiorelli describes LLMs as a reputation management surface. The model’s understanding of your brand is shaped by whatever context it can reach. If your own first-party expertise is locked in Confluence or gated behind a PDF, you’ve ceded that narrative to whoever else has written about you.

The Ahrefs experiment made this concrete: they created a completely fictional company and fed LLMs information about it. The models incorporated it without question. If fabricated context can shape LLM outputs, imagine what happens when your real, substantive context is simply absent from the model’s reach.

The practical shift

The work here isn’t glamorous. It looks like this:

Take the help center content that explains complex features and turn it into indexable, well-structured pages on your primary domain. Extract the core arguments from sales decks and formalize them as public-facing content. Ungate the whitepapers that stopped generating leads two years ago. Look at internal documentation and ask what’s here that demonstrates genuine expertise, without exposing anything sensitive.
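Before moving anything, it helps to know where the blockers are. The sketch below is a minimal illustration of that audit step, not a prescribed workflow: given a few URLs where knowledge currently lives (the example.com addresses and the `requests` dependency are assumptions for illustration), it flags the obstacles named above: robots.txt blocks, noindex directives, login walls, and PDF-only formats.

```python
# A minimal sketch, not production tooling: check a handful of URLs for the
# common reasons a crawler (or an LLM's retrieval layer) can't reach them.
import re
import urllib.robotparser
from urllib.parse import urlparse

import requests

# Hypothetical URLs standing in for a help doc and a gated whitepaper.
URLS = [
    "https://help.example.com/articles/complex-feature",
    "https://www.example.com/resources/old-whitepaper.pdf",
]


def audit(url: str) -> list[str]:
    issues = []
    parsed = urlparse(url)

    # 1. Is the path disallowed for crawlers in robots.txt?
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    robots.read()
    if not robots.can_fetch("*", url):
        issues.append("disallowed by robots.txt")

    # 2. Is the page publicly reachable, and in an indexable format?
    resp = requests.get(url, allow_redirects=True, timeout=10)
    if resp.status_code in (401, 403) or "login" in resp.url.lower():
        issues.append("sits behind authentication")  # crude heuristic
    content_type = resp.headers.get("Content-Type", "")
    if "pdf" in content_type:
        issues.append("served as a PDF rather than an HTML page")

    # 3. Does a header or meta tag tell search engines not to index it?
    if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        issues.append("X-Robots-Tag noindex header")
    if "html" in content_type and re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]*noindex', resp.text, re.I
    ):
        issues.append("meta robots noindex tag")

    return issues


for url in URLS:
    problems = audit(url)
    print(f"{url}: {'; '.join(problems) or 'reachable and indexable'}")
```

Run against a sample of help-center, developer-portal, and resource-library URLs, something like this gives a quick map of which knowledge is already reachable and which is trapped behind a format or an access control.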

Kevin Indig puts it bluntly: companies will spend years bringing this treasure chest from the bottom of the ocean to the surface. He’s right that the timeline is long. But he’s also right that the companies who start now are building a compounding advantage. Every piece of internal knowledge that gets formalized and made accessible strengthens the context layer that both search engines and LLMs use to understand who you are and what you know.

The gap between what most companies know and what they’ve made visible to the outside world is enormous. Closing that gap will do more for organic growth than any editorial calendar.

If you want to dig deeper into this, check out this conversation between Kevin Indig and Gianluca Fiorelli.
