The discovery you skip is the rework you pay for.
The argument for compressing discovery is always reasonable. You already have the support tickets. You've spoken to customers before. The backlog is full. The competitor shipped something similar last quarter. What exactly is a discovery sprint going to tell you that you don't already know?
It's a coherent position. It is also, almost without exception, wrong — and the teams that find out tend to find out expensively.
What "we already know" is built on
The confidence that makes discovery feel unnecessary usually rests on one of three things: anecdote, assumption, or analogy.
Anecdote: customers keep asking for a particular feature, so the team builds it. But asking for something and needing it are different. What are users actually trying to accomplish? What are they doing instead right now, and what does that workaround tell you about the real shape of the problem?
Assumption: the team has been in this market long enough that they know their users. Maybe they did, eighteen months ago. Markets shift. Expectations shift. A confident model of your user base that hasn't been tested recently is not knowledge — it's a hypothesis with good PR.
Analogy: a competitor shipped something similar, which validates the direction. Possibly. But you don't know what they learned in discovery, what trade-offs they made, or whether their implementation is actually working. You're copying a surface without access to the reasoning beneath it.
None of this means don't build. It means know what you're building and why — which requires knowing what you don't know, not assuming you know enough.
The cost sits downstream
In most large organisations, discovery is treated as a cost centre: time and resource spent before anything visible exists. The implicit logic is that building is progress and discovery is delay.
The actual cost calculation runs in the other direction.
When discovery is compressed and the team builds in the wrong direction — solving a problem they didn't fully understand, using an approach that made sense given what they knew but not given what they could have known — the rework is structural. It's not a round of amends. It's a re-scope, a rebuild, a quarter of delivery debt that lands on a backlog already under pressure.
The pattern is consistent enough across product teams that it stops feeling like bad luck and starts looking like a structural outcome of how decisions are made. Discovery that gets cut in the name of velocity tends to produce exactly the kind of delay it was meant to avoid — just later, when the cost is higher and the context has been lost.
The question is never whether you'll pay for the gaps in your understanding. It's whether you pay before the build or after it.
Discovery as risk management, not research
The reframe that tends to work in corporate environments — and this matters, because the framing shapes how the conversation goes — is to stop positioning discovery as a design practice and start positioning it as a risk management function.
Those are not the same conversation. "We need time for user research" competes with delivery pressure and usually loses. "Three of the five assumptions underpinning this build are untested, and two of them carry significant rework risk if they're wrong" is a different kind of claim. It's operational. It's the language that senior stakeholders in commercial, engineering, and product functions understand and take seriously.
The Discovery Risk Radar makes this concrete. Before any significant build commitment, it maps the team's current state of knowledge across four dimensions — desirability, viability, feasibility, and usability — and identifies which assumptions are load-bearing and which are genuinely validated. Not as an academic exercise, but as a triage tool. Which gaps are cheap to test? Which would be expensive to get wrong? Where is the team building on solid ground, and where are they building on belief?
This isn't a ten-week research programme. In well-scoped problem spaces it can be done in a week. The point isn't the duration — it's the discipline of asking the question before committing the resource.
The assumptions that sink projects are the ones nobody named
Most build decisions that end in significant rework have one thing in common: a critical assumption that everyone on the team held, that nobody wrote down, that turned out to be wrong.
Not an assumption that was tested and failed. An assumption that was never acknowledged as an assumption at all. It sat underneath the plan as if it were a fact — right up until the moment the work revealed it wasn't.
Making assumptions visible is not the same as doubting the direction. It's the practice of honest planning. It creates a record of what the team believed at the point of decision, which matters for learning as much as it matters for risk management. Teams that do this consistently get better at distinguishing between well-grounded confidence and confident-sounding uncertainty.
That distinction is harder to maintain than it sounds, especially in environments where moving with conviction is culturally valued and raising uncertainty can read as hesitation.
What this asks of a Lead Designer
You are probably not in a position to mandate discovery on every initiative. The planning cycle, the budget rhythm, and the stakeholder dynamics are not always in your favour.
What you can do is make the assumptions visible — even if you can't slow the build down. A short written document, shared with the team at the start of a sprint, that names what the plan is resting on and where the risk sits. Not a slide. A document. Something that creates accountability and can be referenced when the rework conversation happens.
Build the track record. When discovery surfaces something the team didn't know — and it will — make sure the learning is recorded and attributed. The pattern of "discovery found this, which avoided that" is the argument that earns you more runway next time. Make it visible.
And pick your battles. Not every initiative carries the same risk profile. A well-understood problem space with strong prior evidence is different from a novel surface in a new market segment. Use the Radar to triage, not to create discovery for its own sake.
The goal is not discovery as ritual. It's honest planning — knowing, with reasonable confidence, what you're building and why, before you commit the resources to build it.
That confidence is not free. But it is considerably cheaper than the alternative.
————————————————————
The Discovery Risk Radar is the Signature Framework in Playbook 02: Product Discovery — part of the Culwick Studio Playbook Series.