Tool Sprawl Is Not a Knowledge Strategy
By Norbert Wlodarczyk
Your team added Confluence, then Notion, then a wiki in GitHub, then a Slack bot that indexes everything. Finding information has never been harder.
The integration paradox
A VP of Engineering at a Series C company told me their stack includes 14 tools that touch documentation or knowledge in some form. Confluence for architecture docs. Notion for product specs. Google Docs for RFCs. Slack for real-time decisions. Linear for project context. GitHub for code-level docs. Loom for walkthroughs. A shared drive for PDFs from compliance. And six more niche tools adopted by individual teams to solve local pain.
When I asked how engineers find information, he paused. “They ask someone.”
Fourteen tools. Millions in annual licensing. And the dominant retrieval mechanism is still tapping a senior engineer on the shoulder.
This is tool sprawl - the accumulation of platforms that each solve a narrow documentation need while collectively making knowledge harder to find. Every new tool adds a silo. Every silo adds a search surface. Every search surface adds cognitive overhead. And at some point, the cost of navigating the tooling exceeds the cost of just asking a person.
How tool sprawl happens
Nobody sets out to fragment their knowledge base. Each tool adoption is individually rational.
Teams optimize locally. The design team picks Notion because it handles embedded media better than Confluence. The platform team uses GitHub wikis because the docs live next to the code. The data team adopts a separate knowledge base because their workflows don’t fit the engineering wiki structure. Each decision makes sense in isolation. In aggregate, knowledge is now distributed across platforms with no shared index, no common taxonomy, and no way to trace relationships across boundaries.
Integrations create the illusion of connection. A 2023 report from Productiv found that the average enterprise uses 371 SaaS applications, up 32% from 2021. Many of these offer integrations - Slack notifications from Jira, Confluence cards in Notion, Google Drive links in Linear. These integrations create the feeling that tools are connected. But piping a notification from one tool to another isn’t knowledge integration. It’s alert forwarding. The information still lives in the source system, governed by the source system’s search, permissions, and structure.
Nobody owns the full picture. Individual teams own their tools. IT manages licenses. But nobody is accountable for the meta-question: can an engineer who doesn’t already know where something is documented find it within five minutes? That role - knowledge architect, information strategist, whatever you call it - doesn’t exist in most engineering orgs. So the problem grows without a feedback loop.
The compounding cost of one more tool
Each additional tool imposes three costs that rarely appear in the procurement decision.
Search fragmentation. An engineer looking for the rate-limiting policy now has to search Confluence, Notion, Slack, and GitHub. Each platform has different search syntax, different ranking algorithms, different ways of handling permissions. Research from McKinsey shows that employees spend 19.7% of their work week searching for and gathering information. In a tool-sprawl environment, that number goes up because the search isn’t one search - it’s four or five sequential searches across platforms, each returning partial results.
Trust collapse. When the same topic appears in three tools with three different answers, engineers stop trusting any of them. This is the documentation distribution problem at its worst. A wiki page says the timeout is 30 seconds. A Slack thread from last month says it was raised to 60. A comment in the codebase says “see wiki for current value.” Which is right? The engineer can’t tell without reading the code - which defeats the purpose of having documentation at all.
Context loss between platforms. Knowledge doesn’t exist in a vacuum. A decision about API versioning connects to the incident that prompted it, the RFC that proposed it, the team that owns the service, and the three downstream consumers that need to adapt. In a single system, these connections can be modeled. Across five tools, they can’t. The RFC is in Google Docs. The incident is in PagerDuty. The team ownership is in a spreadsheet. The downstream impact is in a Slack thread. No tool holds the full picture, and no integration reconstructs it.
Why “better search” isn’t the answer
The predictable response to tool sprawl is a meta-search layer. Products like Glean, Guru, and Elastic Workplace Search promise to index across all your platforms and surface relevant results from a single query.
These tools help. They genuinely reduce the time spent switching between search interfaces. But they solve surface-level findability without addressing the structural problem.
Search returns documents, not knowledge. When an engineer searches for “authentication flow,” a cross-platform search tool returns a list of documents that contain those keywords. It doesn’t tell you which document is current. It doesn’t know that the Notion doc was superseded by a Slack decision two weeks later. It can’t show you that the auth flow depends on three other services whose configurations changed last quarter. You get results. You don’t get understanding.
Search can’t model relationships. The value of knowledge isn’t just in individual documents - it’s in how they connect. The deployment runbook is useful. The deployment runbook linked to the service dependency map, the last three incident postmortems, and the config change that altered step four is dramatically more useful. Flat search across multiple tools returns a list. It doesn’t return a graph. And most engineering knowledge is better represented as a graph than a list.
Search doesn’t solve the freshness problem. A document from 18 months ago that ranks first in search results isn’t necessarily wrong. But without a signal of freshness - when it was last verified, whether it’s been superseded, who owns its accuracy - the engineer either trusts it blindly or ignores it entirely. Both outcomes are bad.
What actually works
The solution isn’t adding another tool. It’s changing how you think about knowledge architecture.
Audit your current tool overlap. Map which platforms hold which types of knowledge. You’ll likely find that 40-60% of your documentation surface is redundant - the same topics covered in different tools at different levels of accuracy and freshness. The overlap isn’t productive redundancy. It’s confusion.
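One way to quantify that overlap, assuming you can export a rough per-tool topic inventory (the tool names and topics below are hypothetical):

```python
from collections import Counter

# Hypothetical inventory: which topics each tool currently documents.
inventory = {
    "confluence": {"auth flow", "rate limiting", "deploy runbook"},
    "notion":     {"auth flow", "product specs", "rate limiting"},
    "github":     {"deploy runbook", "service configs"},
}

def overlap_ratio(inventory: dict[str, set[str]]) -> float:
    """Fraction of distinct topics documented in more than one tool."""
    counts = Counter(topic for topics in inventory.values() for topic in topics)
    duplicated = sum(1 for c in counts.values() if c > 1)
    return duplicated / len(counts) if counts else 0.0
```

In this toy inventory, three of five topics live in two tools, so the overlap ratio is 0.6. A real audit is messier, but even a crude topic list per tool makes the redundancy visible and arguable.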
Define canonical sources. For each domain of knowledge (service architecture, operational runbooks, product specs, team processes), designate one canonical location. Not one tool for everything - that ship has sailed. But one source of truth per knowledge domain, with a clear expectation that the canonical source is maintained and everything else is a reference.
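A canonical-source policy can be as simple as a checked-in registry that fails loudly when a domain has no declared owner. The domains and locations here are illustrative, not prescriptive:

```python
# Hypothetical registry: one source of truth per knowledge domain.
CANONICAL = {
    "service-architecture": "confluence/spaces/ARCH",
    "operational-runbooks": "github://platform/runbooks",
    "product-specs":        "notion/Product",
    "team-processes":       "confluence/spaces/TEAM",
}

def canonical_source(domain: str) -> str:
    """Return the single source of truth for a knowledge domain,
    or fail loudly so the gap gets fixed rather than papered over."""
    try:
        return CANONICAL[domain]
    except KeyError:
        raise KeyError(
            f"No canonical source declared for domain '{domain}'. "
            "Declare one before documenting."
        ) from None
```

The value is less in the lookup than in the failure mode: an undeclared domain becomes a visible error instead of a fifth copy in a fifth tool.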
Connect, don’t consolidate. Migration projects fail. They take months, create resentment, and produce a brief period of order followed by a slow return to sprawl. The better approach is an index layer that models relationships across your existing tools without requiring anyone to move their documents. The knowledge stays where it was created. But the connections between pieces - this RFC replaced that architecture doc, this runbook depends on that config, this decision was made by that team - become explicit and navigable.
Measure findability, not coverage. Most teams measure documentation health by volume: how many pages in the wiki, how many runbooks updated this quarter. The metric that matters is findability: can an engineer who doesn’t know the answer find it in under five minutes without asking a person? Run this test monthly with recent hires. Their success rate is the truest measure of whether your knowledge infrastructure works.
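The monthly test reduces to a simple pass rate. The trial fields below are one possible way to record each attempt, not a standard:

```python
def findability_score(trials: list[dict]) -> float:
    """Share of lookup trials where an engineer found the answer unaided
    within the five-minute budget."""
    passed = sum(
        1 for t in trials
        if t["found"] and not t["asked_a_person"] and t["minutes"] <= 5
    )
    return passed / len(trials) if trials else 0.0

# Hypothetical records from one monthly run with recent hires.
trials = [
    {"question": "rate-limit policy", "found": True,  "asked_a_person": False, "minutes": 3},
    {"question": "deploy rollback",   "found": True,  "asked_a_person": True,  "minutes": 2},
    {"question": "auth timeout",      "found": False, "asked_a_person": False, "minutes": 5},
]
```

Note that asking a person counts as a failure even when the answer was found quickly: the metric measures the infrastructure, not the helpfulness of colleagues.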
The real problem
Tool sprawl is a symptom. The disease is treating documentation as a writing problem instead of a retrieval problem. Every platform optimizes for the author: easy to create, easy to format, easy to share. None of them optimize for the reader who arrives six months later, doesn’t know which tool to search, and can’t tell which of three conflicting results is current.
Adding another tool to fix the problem that tools created is a loop that ends with 14 platforms and engineers still asking each other the same questions. Breaking the loop requires modeling knowledge as a connected structure - where relationships, freshness, and provenance are first-class properties, not afterthoughts.
This is the problem we built Nexalink to address: a graph-based knowledge management system that spans your existing tools, models the connections between documents, decisions, and systems, and gives engineers a way to navigate knowledge without memorizing where everything lives. But regardless of tooling, the diagnostic is simple: count how many places an engineer would need to search to answer a cross-cutting question. If the number is more than one, tool sprawl is already costing you more than the tools are worth.