For the past two decades, organizations have pursued a pretty singular vision for knowledge management: centralization. Followed by its close cousin, aggregation. Now, AI might be forcing a reckoning, and the cultural shift needed might be more seismic than the technical one.
The vision for centralization was elegant: create a single source of truth. Document every process. Establish taxonomies. Apply governance. Control the flow of information. This approach avoided duplication, inconsistency, and stale information, all the things that make your average content person shudder. But it came at a cost: it omitted everything that wasn't formal enough, structured enough, or sanctioned enough to make it into the official system.
Today, LLMs might be making it viable to consider an alternate vision. Not because the old approach was wrong, but because what was omitted might actually be valuable: the informal, messy, unstructured knowledge living in Slack messages, support tickets, community forums, and internal discussions. And there are signals in the market that companies are finally realizing this too.
The Different Eras of Knowledge Management
Era 1: Centralization
When I was working as a technical writer in the mid-2000s, our team had a dedicated editor. Some large enterprises had entire editorial departments. The editor's job was to review every single piece of technical product content that our team wrote before it went public, to ensure it matched both the company branding and the style guide (which was based on the Chicago Manual of Style). Later the company moved to DITA, a structured authoring standard, which further added to the editorial burden; not only did every piece of content have to follow the same grammatical rules and style, now it had to be written in exactly the same pattern everywhere.
This wasn't unusual. This was standard practice. Organizations invested significant resources in gatekeeping content because consistency and control were the primary value proposition. The logic was simple: if we can standardize, we can scale. If we control the message, we maintain brand integrity and try to ensure nothing gets published that shouldn’t.
“Phew, good thing that article with the inconsistent bullet styles didn’t get out the door.”
That is unfair, but I couldn't resist. There is a compelling reason why companies pursued this strategy. Organizations built knowledge management systems around this core assumption: consolidation is good. Create wikis. Write documentation. Establish single sources of truth. Version control them. Gate them behind guardrails, writing standards, and editorial processes.
The benefits were real. Before centralization, processes were inconsistent and information was scattered. Centralization brought order. And, crucially, everyone knew who owned which content and which system. A hierarchy of content ownership was established, and it cemented this order.

But this level of gatekeeping had a cost. It required dedicated resources. It typically involved dedicated software and a CMS, often customized, which added complexity. It slowed everything down. And it created a culture where "getting approval" became very important, sometimes more important than getting the information to the end user in a timely manner.
But it also brought omission. Anything too informal, too specific, too controversial, or too difficult to articulate didn't make it in, unless the writer was willing to go the extra mile, burrow into systems to find answers, and argue for inclusion. There were passionate writers who did exactly that. But more often than not, the answers were not visible to them; they were stuck in a customer support conversation where the team discovered a workaround for an edge case, or a Slack thread where an engineer explained why a technical decision was made. So most writers relied on whatever answers they were given during content review.
The omitted knowledge was no less valuable than the official documentation. It reflected reality: constraints, context, nuance. But the centralization model never surfaced it.
Era 2: Aggregation
As organizations accumulated more tools, the question shifted: how do we bring multiple sources together? Knowledge management systems tried to aggregate, and this was a smart solution. Pull from Jira. Pull from Confluence. Pull from GitHub. Pull from ticketing systems. Combine them, then filter and tidy them up.
I recall working at a company where the team manually sifted through hundreds of Jira tickets and combined them into monthly release notes, based on a list of tickets emailed to them by the engineering team. It took hours each month. When I suggested we just aggregate them from the Jira boards and publish, I was stared at, aghast. What if a ticket got in that wasn't supposed to? Let's just diff the original list against the published one, I suggested. They tried it. So simple, and such a time-saver.
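That diff check is a few lines of code. Here's a minimal sketch, with the function name and ticket IDs invented for illustration:

```python
# Hypothetical sketch: verify that auto-aggregated release notes
# contain exactly the tickets engineering approved, nothing more,
# nothing less. Ticket IDs are made up for illustration.

def diff_ticket_lists(approved, published):
    """Return (missing, unexpected) ticket IDs as sorted lists."""
    approved, published = set(approved), set(published)
    missing = approved - published      # approved but never published
    unexpected = published - approved   # published without approval
    return sorted(missing), sorted(unexpected)

approved = ["PROJ-101", "PROJ-102", "PROJ-105"]
published = ["PROJ-101", "PROJ-102", "PROJ-107"]

missing, unexpected = diff_ticket_lists(approved, published)
print(missing)     # tickets that fell out of the pipeline
print(unexpected)  # tickets that slipped in
```

The point isn't the code; it's that the safety check people feared losing was trivially automatable.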
But aggregation still requires curation: filtering, standardizing, rewriting. So, yes, we expand the value proposition of the content and add efficiencies to the process, but the gatekeeping around publishing stays the same.

Era 3: Decentralization (Soon?)
LLMs may change the equation entirely. They can process unstructured content at scale, synthesize across conflicting sources, and extract context from messy conversations. They can recognize that a Slack thread contains as much knowledge as a polished document, just in a different form.
This makes informal content economically viable to use. Not because LLMs prefer unstructured data, but because the cost of processing it is significantly cheaper than it would be if humans had to do it.
More importantly: companies no longer have to choose. They don't have to decide what's valuable enough to document. And they no longer need to worry about mismatched bullet points or other style inconsistencies.
They can expose everything to the LLM: official docs, Slack archives, support transcripts, community forum discussions, GitHub issue threads, and let it synthesize meaning at the point of consumption.
That doesn’t mean there aren't guardrails. They are just applied differently.
This model introduces a different kind of quality risk. Instead of worrying about whether the right content got published, organizations will need to worry about whether the LLM synthesized correctly across sources. The guardrails shift from editorial review before publication to evaluation of synthesis after the fact.
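To make that shift concrete, here is a deliberately naive sketch of post-synthesis evaluation. A real pipeline would use an LLM judge or an entailment model; this toy function (all names and sentences invented) just flags answer sentences that share no meaningful word overlap with any retrieved source:

```python
# Toy post-hoc grounding check: instead of reviewing content before
# publication, we inspect the synthesized answer after the fact and
# flag sentences with no apparent support in the retrieved sources.
# This is illustrative only; production evals are far more rigorous.

def flag_ungrounded(answer_sentences, sources, min_overlap=2):
    """Return answer sentences sharing fewer than min_overlap words
    with every source."""
    flagged = []
    source_words = [set(s.lower().split()) for s in sources]
    for sentence in answer_sentences:
        words = set(sentence.lower().split())
        if not any(len(words & sw) >= min_overlap for sw in source_words):
            flagged.append(sentence)
    return flagged

answer = ["Set the timeout flag to 30 seconds.",
          "The feature ships next quarter."]
sources = ["A user reported that setting the timeout flag to 30 seconds fixed it."]
print(flag_ungrounded(answer, sources))  # flags the unsupported sentence
```

Crude as it is, it illustrates where the quality gate now sits: downstream of synthesis, not upstream of publication.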
Perhaps the user doesn't get a pre-packaged answer from official docs. They get a synthesized answer drawn from dozens of sources: official guidance, how users actually solve the problem, and what tradeoffs exist in the real world.
How users were getting their information pre-LLMs
To understand what's at stake, it’s useful to consider what was already happening pre-LLMs. Take this example: A user has a problem with your SaaS product. They need to do X, but with constraint Y. Where do they go?
Official documentation (has information about X, but doesn't address their constraint)
Stack Overflow (someone asked this 3 years ago, answers are partially irrelevant)
GitHub issues (half-answered discussions, some marked as "working as designed")
Community Discord (threads with 4 different workarounds for 4 different versions)
Reddit (someone posted a solution that worked for them but mentions "probably not the right way")
A blog post from 2021 (Google result, partially relevant)
Support ticket via search (doesn't directly help but shows the company is aware of it)
As users, we synthesize across all of these sources and try to figure out the best way forward. And the aggregate picture, albeit messy, contradictory, and context-dependent, is often more useful than any "official" answer might be.
The rise in popularity of forums in this context is particularly important. Forums showed us something that official documentation couldn't: users didn't just want answers — they wanted context, real-world experience, and the ability to explore their specific situation. With Reddit, Stack Overflow, or community Discord, you could ask a clarifying question, describe your specific constraint, get a tailored response, debate the tradeoffs, see how others in your situation handled the problem and learn not just the what but the why.
Official documentation's fundamental limitation was that it was a static broadcast: here's what we decided you should know right now. If your situation didn't fit the documented scenario, you were stuck.
LLMs pick up part of that thread, but not all of it. They can deliver on context and real-world breadth at a scale forums never could — synthesizing across hundreds of conversations, tickets, and threads to surface the kind of practical, experience-based knowledge that forums made valuable. But the interactivity is different. Instead of a back-and-forth with other humans who've lived the problem, the user gets a conversation with a system that's absorbed all those human conversations. That's not the same thing — but for most use cases, it may be sufficient. And unlike a forum thread from three years ago, the synthesis is happening in real time, against current sources.
Market Evidence: The Extractive Knowledge Movement
It's useful to note that the market is already signaling that organizations recognize this problem: valuable knowledge is trapped in informal channels. There are plenty of companies building products to unlock this value.
Glean connects to more than 100 enterprise apps like Slack, Confluence, Jira, Salesforce, Zendesk, and uses AI to synthesize knowledge trapped across all of them. Tettra builds AI-powered knowledge bases directly from Slack conversations, using its bot to answer questions from knowledge that would otherwise stay buried in threads. Limitless (formerly Rewind AI) captures context from meetings and conversations, preserving the kind of institutional knowledge that typically evaporates the moment a call ends.
Meanwhile, the major enterprise vendors are pivoting hard. Slack, Confluence, and Jira now all market AI that synthesizes across content. Zendesk, Intercom, and Freshdesk are adding AI that learns from ticket conversations. And help centers are increasingly surfacing community discussions and real customer problems alongside official solutions.
If companies are building and paying for products to extract knowledge from Slack, support tickets, and community discussions, it's because they've realized three things: informal channels contain as much valuable knowledge as official systems, if not more; that knowledge is currently inaccessible without manual synthesis; and the economic value of making it accessible justifies the product investment. In other words, organizations have finally realized that the centralization approach omitted some of the most valuable material.
The Slow Death of Docs for Humans?
So let's go back to this world of decentralized content - how might it pan out? What follows is just a prediction on my part, and I want to be upfront about the uncertainty. But based on the patterns above (the omission problem, the market investment in extractive knowledge, the way users were already synthesizing across informal sources long before LLMs arrived), here's where I think all this might land.
A hybrid model will emerge. Companies will continue managing and publishing structured doc sets, partly out of habit, partly because contractually they'll still be obliged to produce 'product docs' for external products. But alongside those, they'll start building content pipelines designed specifically for LLM consumption, i.e. content that isn't meant for humans at all.

What will be interesting is how that LLM-specific content gets imagined, and practically, how it gets produced and maintained. From a process perspective, the gatekeeping structure of the past will no longer be needed. And that will lead to a situation where contributions will be based on skills, expertise, and domain knowledge, rather than role.
The shift in audience will be gradual, but still decisive. Fewer human users will consume those source docs directly. More AI agents will. And inevitably, management will start asking the obvious question: why are we generating and paying for all these pretty-looking, well-organized, centralized doc sets when no humans are consuming them?
The humans, meanwhile, will be interacting with the AI agents layers away from these documents. Via a chatbot. Audio. UI. Who knows what else.

So organizations will move more and more towards decentralization. You might look at this and think: hang on, isn’t this just another form of centralization? You've replaced human editors with an LLM and called it decentralized? But that misses what's actually changed! In the old model, the system was transparent. Everyone (with permission) could see the pipeline: who wrote it, who reviewed it, what got published and why. You could trace the lineage of any piece of content.
In the new model, the content sources are genuinely decentralized, and the orchestration itself may be distributed across multiple agents and layers. But I think the key difference isn't centralized versus decentralized; it’s transparent versus opaque. However the orchestration works, the logic of how sources get selected, weighted, and synthesized into an answer will be visible to a lot fewer people. The old gatekeeping was visible and human. The new gatekeeping is embedded in the orchestration and largely invisible.
So the shift organizations need to grapple with isn’t only technical. It's also the fact that most people won't be able to see how the sausage gets made anymore. That means that the people who do understand will become pretty valuable, but more on that later.
What would this new content model look like?
First, let's take a stab at what content would be needed in this type of decentralized model. The production process might split into five streams (not all equal in terms of human effort).
Auto-generated content is the area where LLMs will probably shine the most. Prescriptive content like API specs. Schema docs from your actual schemas. CLI reference from your code comments. Release notes pulled from commit logs. Configuration reference built from config files. The stuff generated alongside code.
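As a toy illustration of that stream, here is a sketch that turns commit messages into grouped release notes. It assumes a 'type: summary' commit convention (along the lines of Conventional Commits); the function and messages are invented for illustration:

```python
# Hypothetical sketch of auto-generated content: grouping
# "type: summary" commit messages into release-note sections.
# In practice you'd pull these from `git log` and the section
# map would match your team's conventions.

def release_notes(commits):
    """Group 'type: summary' commit messages into release-note sections."""
    sections = {"feat": [], "fix": [], "docs": []}
    for msg in commits:
        kind, _, summary = msg.partition(": ")
        if kind in sections and summary:
            sections[kind].append(summary)
    lines = []
    for kind, title in [("feat", "Features"), ("fix", "Fixes"), ("docs", "Docs")]:
        if sections[kind]:
            lines.append(f"## {title}")
            lines.extend(f"- {s}" for s in sections[kind])
    return "\n".join(lines)

commits = ["feat: add CSV export", "fix: handle empty schema", "chore: bump deps"]
print(release_notes(commits))
```

The mechanical grouping is the easy part; the organizational judgment about what matters in a release is what the "curated by humans" stream below adds on top.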
Unstructured content is high-value material that was intentionally created inside the organization but, in the old model, would have been deemed too unpolished to publish: internal demo videos, recorded walkthroughs, rough technical write-ups, audio explanations. The content is valid and correct — it just doesn't look the way the old publishing pipeline expected. Now, that doesn't matter. The LLM can draw on it regardless of format. In the future, some of this content may shift into the auto-generated space as tooling matures.
Naturally created content is different - it's knowledge that nobody set out to produce. It's a byproduct of people doing their jobs. Slack conversations. Support tickets. GitHub discussions. People writing blog posts about whatever they want to write about. Someone solves a problem, they mention it in Slack. A customer hits an issue, support captures it. That's all knowledge, just not in your 'official knowledge system.'
Curated by humans is where content professionals add the most value. Not by gatekeeping, but by designing the prompts that guide how the LLM thinks about synthesizing all this. By selectively enhancing auto-generated content with organizational context. By tagging things with the intelligence only humans have. By evaluating whether the synthesis is actually working.
Still needs human writing is the final stream: content like real-life use cases, white papers, and tutorials, because someone needs to think through 'how do people actually solve this problem?' The 'why'. Decision docs. Constraint and tradeoff explainers: what you give up choosing option A versus B. And troubleshooting guides, though those might eventually auto-generate from all your support patterns and Slack debugging sessions.
As you can see, the result is a distributed system. Machines handle what's mechanical and prescriptive. People create knowledge naturally, without being asked. Humans add organizational intelligence through prompts, orchestration, architecture, evals. LLMs stitch it together.
More importantly, how would this new model function?
The uncomfortable truth is that it's likely much of the current content workload is going to disappear. In the centralized model, organizations built teams of technical writers, editors, content managers, and knowledge engineers. That structure made sense at the time; it solved real problems of consistency and control. But in a decentralized model, you simply need fewer people.
What remains is different work, and arguably work that is harder, because it involves an entirely new skillset for most. Someone needs to design prompts that shape how the orchestration layers think. Someone needs to evaluate whether the synthesis is actually working: whether the answer that gets surfaced from a Slack thread, a support ticket, and an API doc is trustworthy. And someone needs to do that while operating inside a system that's far less transparent than the old editorial pipeline. You can't just trace the publishing workflow anymore. You have to interrogate an opaque process and make judgments about outputs you can't fully reverse-engineer.
So the remaining roles require deeper judgment, not less, but there will likely be fewer of them. Maybe one person does all three functions; maybe two or three split the work. The volume is orders of magnitude smaller: auto-generation handles reference docs, and the LLM can draw on naturally created content and unpolished internal material, which eliminates much of the formal, structured writing.
The catch, though, is that other members of the team and the wider organization, developers, QA, business analysts and so on, will also be vying for this work. So the lines delineating roles and responsibilities are going to blur. In this type of environment, the people who really knuckle down and figure out how the sausage is made will probably fare better.
Why this Model will be Resisted (for Now)
So, for people who've built their careers in content and knowledge management, this isn't just a technology change — it's a threat to their livelihood. And that's a legitimate concern. Organizations spent two decades building these roles, establishing these career paths, and now those paths are potentially narrowing dramatically. The people in these roles didn't invent gatekeeping; organizations created role structures that depended on it. Of course they'll defend those roles. It would be strange if they didn't.
This is why I expect resistance to this type of decentralization model to be fierce. It's not cynical or irrational; it's people responding naturally to economic uncertainty about their future. And until organizations acknowledge this directly, until they have honest conversations about what happens to content teams, and to roles and responsibilities in product and technical teams, in a decentralized world, the cultural barriers will remain stronger than any technical barriers.
But cultural resistance is only one dimension of the problem. There are practical questions that any organization attempting this transition will have to face. How do you build trust in a system where the synthesis logic is opaque? Who is accountable when an LLM-generated answer is wrong? And how do you even detect that it's wrong? How do you convince subject matter experts to contribute knowledge informally when they've spent a decade being told that only sanctioned, reviewed content matters? And how do you manage the transition for the people whose roles are disappearing, not in the abstract, but concretely, in your organization, with your team?
Those are the questions I'll take up in Part 2.
