
25 April 2026 · 7 min read

The 5 sources of PM evidence — and how often each one lies to you

TL;DR

Every PM evidence source has a failure mode. Sales calls overweight loud customers. Support tickets overweight angry ones. Analytics shows what users did, not why. Surveys measure stated preferences, not revealed ones. Cohort data is only as good as your definitions. Triangulation is the only defence.

Why one source isn't enough

Every product decision in the wild is built on at least one of five evidence sources.

Each source captures a different angle of truth.

Each one also has a specific failure mode that, if you don't know it, will lead you confidently to the wrong conclusion.

Below: the five sources, what each one is good for, and the specific way each one will mislead you.

1. Sales calls

What it captures: what prospects and customers say about your product when a sales rep is on the line.

What it lies about: customers perform on sales calls. They emphasise pain to justify the meeting. They downplay friction they've already worked around.

Specific failure mode: sales-call-driven decisions overweight features that close deals but don't drive retention.

How to read it honestly: pair every sales-call insight with a behavioural metric from the same customer post-onboarding.

2. Support tickets

What it captures: what customers complain about when something hurts enough to file a ticket.

What it lies about: tickets are self-selected. Happy customers don't write them. Customers who've already churned definitely don't write them.

Specific failure mode: ticket-driven decisions optimise for the loudest 5% of users.

How to read it honestly: weight by customer segment. A ticket from an active enterprise account is not the same signal as a ticket from a free-tier user.
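Segment weighting can be as simple as a score per ticket. A minimal sketch, assuming illustrative segment names and weights (none of these values come from the post):

```python
# Hypothetical sketch: weight each ticket by the segment it came from,
# so the loudest free-tier users don't dominate the signal.
SEGMENT_WEIGHTS = {"enterprise": 5.0, "team": 2.0, "free": 0.5}  # assumed weights

tickets = [
    {"issue": "export fails", "segment": "enterprise"},
    {"issue": "export fails", "segment": "free"},
    {"issue": "dark mode", "segment": "free"},
    {"issue": "dark mode", "segment": "free"},
    {"issue": "dark mode", "segment": "free"},
]

scores: dict[str, float] = {}
for t in tickets:
    scores[t["issue"]] = scores.get(t["issue"], 0.0) + SEGMENT_WEIGHTS[t["segment"]]

# Raw counts say "dark mode" (3 tickets vs 2); weighted scores say "export fails".
ranked = sorted(scores.items(), key=lambda kv: -kv[1])
print(ranked)  # [('export fails', 5.5), ('dark mode', 1.5)]
```

The exact weights matter less than the habit: any explicit weighting beats implicitly treating every ticket as equal.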

3. Product analytics

What it captures: what users actually did inside the product.

What it lies about: it shows action, not intent. A user who clicked through your onboarding might be confused, exploring, or compliant — the events look identical.

Specific failure mode: analytics-driven decisions are extremely good at optimising for engagement and extremely bad at distinguishing engagement from value.

How to read it honestly: pair every behavioural signal with a qualitative one. The analytics tells you what; the interview tells you why.

4. Surveys and NPS

What it captures: what customers say they prefer or value when asked directly.

What it lies about: stated preference is famously decoupled from revealed preference. The customer who rates a feature 9/10 may use it once a quarter.

Specific failure mode: survey-driven decisions confuse 'what people say they want' with 'what people will pay for or churn over.'

How to read it honestly: never use survey data as the sole justification for a decision. Use it to generate hypotheses, then validate against behaviour.

5. Behavioural cohorts

What it captures: groups of users segmented by traits or actions, tracked over time.

What it lies about: cohorts are only as good as their definitions, and definitions silently change. The 'active user' cohort from Q3 isn't the same shape as Q1.

Specific failure mode: cohort-driven decisions look rigorous but rest on definition stability that nobody is auditing.

How to read it honestly: version your cohort definitions. Date-stamp every analysis. Re-derive the cohort before re-citing it.
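Versioning a cohort definition just means freezing its criteria alongside a version number and a date. A minimal sketch, assuming a single illustrative criterion (the field names and thresholds are made up for the example):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: a cohort definition frozen with a version and a date,
# so any analysis can cite exactly which definition it used.
@dataclass(frozen=True)
class CohortDefinition:
    name: str
    version: int
    defined_on: date
    min_sessions_per_week: int  # illustrative criterion, not a real product field

    def matches(self, sessions_per_week: int) -> bool:
        return sessions_per_week >= self.min_sessions_per_week

active_v1 = CohortDefinition("active_user", 1, date(2026, 1, 6), min_sessions_per_week=3)
active_v2 = CohortDefinition("active_user", 2, date(2026, 4, 7), min_sessions_per_week=1)

# The same user counts as "active" under v2 but not under v1 -- the silent
# definition drift described above, made visible by versioning.
print(active_v1.matches(2))  # False
print(active_v2.matches(2))  # True
```

Citing "active_user v2, defined 2026-04-07" instead of just "active users" is what makes a re-derived cohort comparable to the original analysis.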

Triangulation is the only defence

Any decision built on one source is one degree away from confidently wrong.

Three sources from three different surfaces is the minimum bar.

When the sources agree, you have a real signal. When they disagree, you have a question worth investigating before you ship.
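The agree/disagree check above can be made mechanical. A minimal sketch, assuming each source has been reduced to a yes/no verdict on one hypothesis (source names and verdicts are illustrative):

```python
# Hypothetical sketch: record what each evidence source says about a hypothesis
# and flag disagreement before shipping.
def triangulate(verdicts: dict[str, bool]) -> str:
    if len(verdicts) < 3:
        return "insufficient: fewer than three sources"
    if all(verdicts.values()):
        return "signal: sources agree"
    if not any(verdicts.values()):
        return "signal: sources agree (negative)"
    disagreeing = [s for s, v in verdicts.items() if not v]
    return f"investigate: {', '.join(disagreeing)} disagree"

print(triangulate({"sales_calls": True, "analytics": True, "cohorts": True}))
# signal: sources agree
print(triangulate({"sales_calls": True, "analytics": False, "cohorts": True}))
# investigate: analytics disagree
```

Reducing a source to a boolean is of course lossy; the point is that the disagreement becomes a named, visible output rather than something discovered after launch.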

Synchronise treats this triangulation as the unit of work. Every insight surfaces what each source said and where they conflict — so the disagreement is visible before the decision, not surfaced after.

Questions

What if I only have one of these five?
Then your decision quality is bounded by that source's failure mode. Adding even a noisy second source is a step change in confidence.
Which source is the most reliable?
None of them, alone. The most reliable single source is behavioural cohort data with versioned definitions, but only when paired with at least one qualitative source to explain the why.


Synchronise is the Cursor for Product Managers — an AI product operating layer that turns customer signal into evidence-backed PRDs, PBIs, briefs, and GTM artefacts.
