
Why Do My Engineers Keep Asking the Same Questions?

By Norbert Wlodarczyk

If the same question shows up in Slack more than once, you don’t have a people problem. You have a routing problem.

[Figure: Bar chart of questions answered via Slack DM vs. documented over six months, with repeat volume growing]

The Slack repeat loop

Search your team’s Slack right now for “where is” or “how do I” or “does anyone know.” Scroll through the results. You’ll notice something uncomfortable: the same questions, asked by different people, weeks or months apart. Sometimes the previous answer is right there in the thread. Nobody found it.

This is the pattern we call knowledge routing friction - the gap between where an answer exists and where someone looks for it. Every time an engineer types a question into Slack that was already answered, two things have failed: the original answer wasn’t stored where future askers would look, and the organization has no mechanism to route the question to the answer instead of to a person.

A 2022 survey by Wakefield Research found that knowledge workers spend an average of 3.6 hours per day searching for information. In a 60-person engineering org, that’s 216 hours daily spent looking for things instead of building them.

Why the same questions keep surfacing

The obvious explanation is that people don’t search before asking. That’s occasionally true, but it misses the structural reasons this happens.

Slack is a river, not a lake. Messages flow past. Even if someone answered the exact question three months ago, Slack’s search is optimized for recency, not relevance. The answer is buried under thousands of messages. The engineer searching for “deployment process staging” gets 47 results, scans the first five, and gives up. Asking feels faster. It usually is.

Answers rot. The response from March might have been accurate in March. But the deployment process changed in May. The config was updated in July. Nobody went back to annotate the old thread. So even when someone finds the historical answer, they can’t trust it. They ask again to verify. This is rational behavior in an environment where contradicting documents are the norm, not the exception.

Context collapses in text. A Slack answer strips away the reasoning. “Just set the feature flag to true” doesn’t explain when to set it, why it defaults to false, or what breaks if you set it in production instead of staging. The next person who hits a similar-but-not-identical situation can’t adapt the answer because the context wasn’t preserved. So they ask what looks like the same question but is actually a different question wearing the same words.

The “who knows” index is invisible. In a 100-person org, a new engineer has no idea that the person who answered this question last time sits on a different team. They don’t know which Slack channel covers this domain. They don’t know whether the answer lives in Confluence, Notion, or a GitHub README. So they broadcast to the channel they’re in and wait.

The compounding cost

Each repeated question seems trivial in isolation. Five minutes to ask, five minutes to answer. But the costs compound in ways that don’t show up in any dashboard.

The answerer pays the highest price. Research by Gloria Mark at UC Irvine shows that a single interruption costs an average of 23 minutes to recover from. Your senior engineer who answers three repeat questions a day isn’t losing 15 minutes. She’s losing over an hour of deep work. Multiply that across a quarter and you’re looking at a significant chunk of your most experienced people’s output - evaporated.

Repeat questions signal trust erosion. When engineers stop trusting the docs and default to asking humans, the documentation gets even less traffic. Less traffic means fewer corrections. Fewer corrections mean more drift. More drift means less trust. This is the same reinforcing loop that drives tribal knowledge concentration - and it accelerates over time.

New hires absorb the culture of asking. An engineer who joins a team where “just ask in Slack” is the norm will never develop the habit of searching docs first. The behavior propagates. Within two quarters, your entire team’s default knowledge retrieval mechanism is “interrupt someone.” The docs could be perfect and nobody would know.

A study from the McKinsey Global Institute estimates that employees spend 19.7% of their work week searching for and gathering information. In a team of 80 engineers at a $200K fully-loaded cost, that’s roughly $3.2M annually spent on information retrieval. Not all of that is waste. But a significant portion is people searching for answers that already exist somewhere in the organization.
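The math behind that estimate is simple enough to sketch. All inputs below are the illustrative figures quoted above, not measurements from any real organization:

```python
# Back-of-the-envelope cost of information retrieval, using the
# figures quoted in the paragraph above. All inputs are assumptions.
engineers = 80
fully_loaded_cost = 200_000   # annual cost per engineer, USD
search_fraction = 0.197       # share of the work week spent searching

annual_search_cost = engineers * fully_loaded_cost * search_fraction
print(f"${annual_search_cost:,.0f}")  # → $3,152,000
```

Swap in your own headcount and cost figures; even at half the search fraction, the number is rarely comfortable.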

The system fix

The instinct is to tell people to search before asking. That doesn’t work. You’re asking humans to change behavior in an environment that rewards the old behavior. Instead, change the environment.

Build answer surfaces where questions happen. If your engineers ask questions in Slack, the answers need to be findable in Slack. That might mean a bot that suggests existing docs when a question pattern is detected. It might mean a channel convention where every answered question gets a threaded summary tagged with searchable keywords. The goal is to make finding the old answer at least as easy as asking again.
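To make the bot idea concrete, here is a minimal sketch of the matching step: suggest existing docs whose keywords overlap with an incoming question. The doc index, titles, and overlap threshold are all hypothetical; a real implementation would plug into Slack's events API and a real search index.

```python
# Minimal sketch of an "answer surface" bot: when a new question
# arrives, suggest existing docs whose keywords overlap with it.
# DOC_INDEX and the threshold are illustrative assumptions.
import re

DOC_INDEX = {
    "Staging deployment process": {"deploy", "deployment", "staging", "release"},
    "Feature flag guide": {"feature", "flag", "toggle", "rollout"},
}

def tokenize(text: str) -> set[str]:
    """Lowercase a message and split it into word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def suggest_docs(question: str, min_overlap: int = 2) -> list[str]:
    """Return doc titles sharing at least min_overlap keywords with the question."""
    words = tokenize(question)
    return [title for title, keywords in DOC_INDEX.items()
            if len(words & keywords) >= min_overlap]

print(suggest_docs("how do I deploy to staging?"))  # → ['Staging deployment process']
```

Even a crude matcher like this changes the default: the old answer surfaces in the thread before a human has to.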

Apply the “second-time” rule. If a question gets asked once, a Slack answer is fine. If it gets asked a second time, the answer becomes a document. Not a wiki page that requires navigating to Confluence and finding the right space. A document that lives where people actually look. This simple filter catches the repeat patterns without creating documentation overhead for genuinely one-off questions.

Make knowledge relationships explicit. The reason engineers can’t find answers isn’t usually that the answers don’t exist. It’s that the answers aren’t connected to the questions in a way that’s navigable. The deployment process doc exists, but it doesn’t link to the feature flag doc, which doesn’t link to the service ownership doc, which doesn’t link to the incident that changed the process last month. When knowledge is stored as isolated documents, every search is a shot in the dark. When it’s stored as a network of connected concepts, one question leads naturally to related context.

Track the questions, not just the answers. Every repeat question is a data point. It tells you exactly where your knowledge infrastructure has a gap. If five different people ask about the staging deployment process in the same quarter, that’s not five communication failures. That’s one system failure with five symptoms. The companies that break the repeat loop are the ones that treat questions as diagnostics, not nuisances.
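Treating questions as diagnostics can start as simply as tallying near-duplicates from a channel export. The sketch below uses a deliberately crude normalization (keyword sets minus stopwords); the message list and stopword list are made up for illustration:

```python
# Sketch: find repeat questions in a channel export by tallying
# crude keyword signatures. Messages and stopwords are illustrative.
from collections import Counter
import re

STOPWORDS = {"how", "do", "i", "the", "is", "a", "to",
             "what", "where", "does", "anyone", "know"}

def signature(question: str) -> frozenset[str]:
    """Reduce a question to its content keywords, ignoring stopwords."""
    words = re.findall(r"[a-z]+", question.lower())
    return frozenset(w for w in words if w not in STOPWORDS)

questions = [
    "How do I deploy to staging?",
    "Where does the feature flag config live?",
    "how do i deploy to staging",
]

counts = Counter(signature(q) for q in questions)
repeats = [sig for sig, n in counts.items() if n > 1]
print(repeats)  # → [frozenset({'deploy', 'staging'})]
```

Every signature that shows up more than once is a gap in your knowledge infrastructure with a name attached.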

The difference between noise and signal

A healthy Slack channel has questions. That’s fine. New situations arise, edge cases appear, and some knowledge genuinely needs real-time human judgment.

But if you can find the same question asked three times in the last six months, with three different people answering it - sometimes with slightly different information - that's not healthy communication. That's an organization using its most expensive resource (senior engineer attention) as a search engine.

The fix isn’t better documentation habits. It’s restructuring how knowledge flows through your organization so that the answer finds the asker, not the other way around. That’s what we built NexaLink to do - connect knowledge into a queryable graph so the repeat loop breaks at the system level. But regardless of tooling, start by searching your Slack for your team’s most common questions. The pattern will tell you exactly where to focus.

Ready to get started?

Stop losing knowledge. Start connecting it.

See how NexaLink turns scattered documents into a structured, navigable knowledge graph your whole team can trust.