Nobody Knows Where Anything Is Documented
By Norbert Wlodarczyk
Your documentation isn’t missing. It’s scattered across seven tools in three formats, maintained by nobody, and trusted by even fewer.
The 2pm question
It’s 2pm on a Tuesday. An engineer needs to understand why the payment service uses a specific retry strategy. She knows this was discussed somewhere. She starts looking.
Confluence has a “Payment Service Architecture” page last updated 14 months ago. It mentions retries but not the current strategy. Google Docs has a design doc from the original implementation, shared with a team that’s since been reorganized. Slack has a thread from six months ago where someone asked this exact question and got a three-paragraph answer from a staff engineer - but Slack search returns 200 results for “payment retry” and she gives up after the first page. The GitHub README has a code comment that says “see design doc” with a broken link.
Forty minutes later, she messages the staff engineer directly. He answers in four minutes.
This isn’t a documentation problem. It’s a documentation distribution problem. The knowledge exists. It’s just not findable by someone who doesn’t already know where it is.
How documentation fragments
Nobody decides to scatter their documentation across seven platforms. It happens organically, one reasonable decision at a time.
The company starts with Confluence because that’s what enterprise teams use. Then product management adopts Notion because it’s faster for specs. Engineering starts putting technical decisions in GitHub PRs because that’s where the code review happens. Architecture diagrams land in Google Docs because Confluence’s drawing tools are painful. Quick answers accumulate in Slack because that’s where the questions are asked.
Within two years, a 100-person engineering org has knowledge distributed across five or six platforms, each with its own search, its own permissions, its own organizational structure. A 2023 study by Coveo found that 73% of employees said they don’t know where to find information they need, and 36% of workers spend more than an hour a day just searching for documents.
The fragmentation itself creates a secondary problem: nobody trusts any single source. When engineers learn that the Confluence page might be outdated, they stop checking Confluence. When the Google Doc might have been superseded by a Slack thread, they stop trusting the Google Doc. Eventually, the fastest path to a reliable answer is asking a person. And then you’re back to tribal knowledge dynamics.
Why search doesn’t save you
The natural response to fragmentation is “we’ll just improve search.” Enterprise search tools like Glean, Elastic Workplace Search, and others promise to index everything across your platforms. Some of them do a decent job of finding documents that contain specific keywords.
But search solves the wrong problem. The issue isn’t that engineers can’t find documents. It’s that they can’t trust the documents they find.
Consider what happens when a search returns three results for “API rate limiting policy”:
- A Confluence page from 2024 that says the limit is 1,000 requests per minute
- A Slack message from six months later saying the limit was raised to 5,000 for enterprise clients
- A GitHub PR description from last month that references a “new rate limiting approach” without specifying numbers
Which one is current? Search can’t tell you. It can find documents. It can’t tell you which document supersedes which, whether a decision was reversed, or whether the context that made a doc accurate has since changed. This is the decision supersession problem - and flat search is structurally incapable of solving it.
What engineers actually need isn’t better keyword matching. It’s a system that understands the relationships between pieces of knowledge: this document was updated by that Slack decision, this architecture was replaced by that RFC, this runbook depends on that configuration. Without those relationships, every search result is a guess.
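To make the supersession problem concrete, here's a minimal sketch of an index that records which document replaced which, so a query can resolve to the current answer instead of returning all three rate-limiting results. All names here are hypothetical, not from any real tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KnowledgeRecord:
    """One piece of knowledge, wherever it physically lives."""
    doc_id: str
    source: str   # e.g. "confluence", "slack", "github"
    summary: str
    superseded_by: Optional[str] = None  # doc_id of the record that replaced this one

class SupersessionIndex:
    """Flat search finds documents; this resolves which one is current."""

    def __init__(self):
        self.records: dict[str, KnowledgeRecord] = {}

    def add(self, record: KnowledgeRecord) -> None:
        self.records[record.doc_id] = record

    def mark_superseded(self, old_id: str, new_id: str) -> None:
        self.records[old_id].superseded_by = new_id

    def resolve_current(self, doc_id: str) -> KnowledgeRecord:
        """Follow the supersession chain to its end."""
        record = self.records[doc_id]
        seen = {doc_id}
        while record.superseded_by:
            if record.superseded_by in seen:
                raise ValueError("supersession cycle detected")
            seen.add(record.superseded_by)
            record = self.records[record.superseded_by]
        return record

# The rate-limiting example from above:
index = SupersessionIndex()
index.add(KnowledgeRecord("conf-42", "confluence", "Rate limit is 1,000 req/min"))
index.add(KnowledgeRecord("slack-99", "slack", "Limit raised to 5,000 for enterprise"))
index.mark_superseded("conf-42", "slack-99")

current = index.resolve_current("conf-42")
print(current.summary)  # the Slack decision, not the stale Confluence page
```

The point isn't the data structure, which is trivial. It's that the supersession edge has to be recorded somewhere; no amount of keyword matching can infer it after the fact.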
The three symptoms that confirm the problem
You don’t need an audit to know if documentation distribution is hurting your team. Three symptoms are diagnostic:
Symptom 1: “Which one is the real one?” If anyone on your team has ever asked this about a document, you have a versioning problem that your tools can’t solve. Multiple copies of the same knowledge, in different states of decay, across different platforms. Nobody maintains all of them. Often nobody maintains any of them.
Symptom 2: New hires take weeks to find the documentation layer. Ask your most recent hire: how long did it take you to figure out where things are documented? Not to read the docs. Just to find them. If the answer is more than two days, your documentation isn’t organized around discoverability. It’s organized around the convenience of whoever wrote it. The onboarding cost of this is enormous.
Symptom 3: Senior engineers answer questions that docs should answer. Track how many Slack questions your senior engineers field in a week that could have been answered by existing documentation - if the asker knew it existed. Research shows each interruption costs 23 minutes of recovery time. If your staff engineer handles five of these daily, that’s nearly two hours of deep work destroyed. Not because the docs are bad. Because nobody can find them.
What actually fixes this
The conventional wisdom is consolidation: pick one platform, migrate everything, enforce compliance. This fails almost every time. Engineers chose different tools for real reasons. Forcing all technical decisions into Confluence doesn’t make Confluence better at hosting technical decisions. It just creates resentment and shadow documentation.
The approach that works is a knowledge layer that sits above your existing tools.
Index, don’t migrate. The knowledge in Slack, Confluence, Notion, Google Docs, and GitHub doesn’t need to move. It needs to be addressable from a single place. An index that spans platforms and understands which documents relate to which systems, which decisions replaced which earlier decisions, and which sources are current.
Model relationships, not just content. A document about the payment service’s retry strategy is useful. A document about the retry strategy that links to the team that owns it, the incident that prompted the change, the RFC that proposed it, and the three services affected by it is dramatically more useful. Flat document stores can’t represent these connections. Connected, chunked knowledge structures can.
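A sketch of what "connected" means in practice: documents as nodes, typed relationships as edges. The node names below are hypothetical, standing in for the payment-retry example:

```python
from collections import defaultdict
from typing import Optional

class KnowledgeGraph:
    """A flat store answers "what matches 'retry'?". A graph can also
    answer "what is connected to the retry doc, and how?"."""

    def __init__(self):
        # node -> list of (relationship, other node)
        self.edges: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def link(self, src: str, relation: str, dst: str) -> None:
        self.edges[src].append((relation, dst))

    def neighbors(self, node: str, relation: Optional[str] = None) -> list[str]:
        return [dst for rel, dst in self.edges[node]
                if relation is None or rel == relation]

g = KnowledgeGraph()
g.link("doc:retry-strategy", "owned_by", "team:payments")
g.link("doc:retry-strategy", "prompted_by", "incident:2024-11-outage")
g.link("doc:retry-strategy", "proposed_in", "rfc:017")
for svc in ("checkout", "billing", "refunds"):
    g.link("doc:retry-strategy", "affects", f"service:{svc}")

print(g.neighbors("doc:retry-strategy", "affects"))
# ['service:checkout', 'service:billing', 'service:refunds']
```

With typed edges, the engineer from the opening story doesn't just find a document about retries; she finds the incident that motivated it and the owning team to ask if it's still current.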
Assign ownership, not authorship. Most docs decay because nobody is responsible for keeping them current. The person who wrote the doc moved teams. The team reorganized. The system was refactored. Assigning a document owner - someone accountable for its accuracy, not just its creation - changes the incentive. The owner doesn’t have to write every update. They just need to flag when the doc no longer reflects reality.
Make staleness visible. A document that was last updated 18 months ago isn’t necessarily wrong. But it should carry a visible signal that it hasn’t been reviewed. A simple “last verified” date, distinct from “last modified,” gives readers a trust signal. Some teams automate this with a bot that flags any doc not reviewed in the last 90 days. The mechanism matters less than the principle: staleness should be obvious, not something you discover after acting on outdated information.
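The review-window check is simple enough to sketch in a few lines. This is one possible shape, assuming a stored "last verified" date and a 90-day window, not any particular tool's implementation:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)

def staleness_flag(last_verified: date, today: date,
                   interval: timedelta = REVIEW_INTERVAL) -> str:
    """Trust signal based on 'last verified', not 'last modified'."""
    age = today - last_verified
    if age <= interval:
        return f"verified {age.days} days ago"
    return f"STALE: not reviewed in {age.days} days"

today = date(2025, 6, 1)
print(staleness_flag(date(2025, 4, 15), today))  # inside the 90-day window
print(staleness_flag(date(2023, 12, 1), today))  # flagged as stale
```

A bot that runs this over every doc and posts the STALE list to the owning team's channel is an afternoon of work, and it converts silent decay into a visible queue.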
The core problem
Documentation tools are built for writing. They’re not built for finding, connecting, or maintaining knowledge over time. Every platform optimizes for the author’s workflow: easy to create, easy to format, easy to share. None of them optimize for the reader’s workflow: easy to find, easy to trust, easy to see what connects to what.
This is the gap we built Nexalink to close - a graph-based knowledge management system that connects your existing documentation into a navigable, queryable structure without requiring migration. But whether or not you use a tool like ours, the diagnosis is the same: if your engineers can't find the documentation, it doesn't matter how good the documentation is. Start by asking your newest team member where they go when they don't know the answer. Their response will map the actual shape of your knowledge fragmentation.