
11 April 2026 · 7 min read

The 12 metrics PMs argue about most — and a versioning system that ends the arguments

TL;DR

The 12 metrics every PM team argues about. Three legitimate definitions of each. The argument doesn't end by picking the 'right' one — it ends when the team writes the chosen definition down, dates it, and tracks when it changes. Versioned metrics are what make decisions defensible across cycles.

The argument that never ends

Every cross-functional meeting eventually surfaces it.

'Wait, when you say churn, do you mean logo churn or revenue churn?'

'Is that DAU including the workspace bots or not?'

'How are we counting activation now — the old four-event funnel or the new single-event one?'

These aren't pedantic questions. They're the difference between a decision that holds and one that quietly inverts when someone updates a SQL query nobody told the team about.

The 12 contested metrics

  • DAU — does it include guest users? Bots? Internal accounts?
  • MAU — 28-day rolling, calendar month, or trailing 30 days?
  • Churn — logo, revenue, or net revenue? Annualised or monthly?
  • Retention — cohort-based or rolling? What's the unit of activity?
  • Activation — single key event, multi-event funnel, or time-to-value threshold?
  • NPS — promoter minus detractor, or weighted? Sample frame?
  • CSAT — top-box, top-2-box, or mean? Survey response rate handling?
  • Conversion — first-touch, last-touch, or position-based attribution?
  • LTV — gross or net of cost-to-serve? Discount rate?
  • ARPU — by cohort or all-users? Includes free tier?
  • Payback period — based on gross margin or revenue?
  • Gross margin — fully loaded with infra and support, or just COGS?

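To see how much the window choice alone matters, here is a minimal sketch computing MAU three ways over the same event log. The user IDs and dates are invented for illustration — the point is only that the three definitions return three different numbers.

```python
from datetime import date, timedelta

# Hypothetical event log: (user_id, activity_date). Illustrative data only.
events = [
    ("u1", date(2026, 3, 3)),
    ("u2", date(2026, 3, 13)),
    ("u3", date(2026, 4, 1)),
    ("u1", date(2026, 4, 9)),
]

def mau_rolling(events, as_of, window_days):
    """Distinct users active in the window_days ending on as_of (inclusive)."""
    start = as_of - timedelta(days=window_days - 1)
    return len({u for u, d in events if start <= d <= as_of})

def mau_calendar_month(events, year, month):
    """Distinct users active in a given calendar month."""
    return len({u for u, d in events if d.year == year and d.month == month})

as_of = date(2026, 4, 10)
print(mau_rolling(events, as_of, 28))        # 28-day rolling → 2
print(mau_rolling(events, as_of, 30))        # trailing 30 days → 3
print(mau_calendar_month(events, 2026, 4))   # calendar month → 2
```

Same events, same date, three MAU values — which is exactly why the definition has to be written down rather than assumed.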
Why teams argue

Each metric has at least three legitimate definitions. None is universally correct.

Logo churn is the right number when you're optimising for customer count. Revenue churn is the right number when accounts have very different sizes.
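The divergence is easy to demonstrate. The sketch below computes both churn definitions over the same hypothetical book of business; account names and figures are invented for illustration.

```python
# Hypothetical month-start revenue per account, and the accounts that left.
accounts_start = {"acme": 100_000, "beta": 5_000, "gamma": 4_000, "delta": 3_000}
churned = {"beta", "gamma", "delta"}  # three small accounts churned this month

def logo_churn(start_accounts, churned_ids):
    """Share of customers (logos) lost, regardless of account size."""
    return len(churned_ids) / len(start_accounts)

def revenue_churn(start_accounts, churned_ids):
    """Share of starting revenue lost with those customers."""
    lost = sum(start_accounts[a] for a in churned_ids)
    return lost / sum(start_accounts.values())

print(f"logo churn:    {logo_churn(accounts_start, churned):.0%}")     # 75%
print(f"revenue churn: {revenue_churn(accounts_start, churned):.1%}")  # 10.7%
```

A 75% logo churn and a 10.7% revenue churn describe the same month. Neither number is wrong; they answer different questions.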

First-touch attribution is the right model for top-of-funnel decisions. Last-touch is the right model for sales effectiveness.

The argument isn't really about which definition is correct. It's about which one your team has agreed to use — and whether everyone is using the same one.

The versioning system

Treat metric definitions like code.

Each metric has a definition record. Each record has a version, a date, an owner, and an exact specification — the SQL, the event filters, the cohort logic.

When the definition changes, bump the version.

Decisions cite the version in effect when they were made.

Six months later, when someone re-runs 'churn' and gets a different number, you can immediately see whether the metric drifted or the underlying business did.
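The record-plus-citation scheme above can be sketched as a minimal data model. Everything here is an illustrative assumption — the field names, versions, owners, and specs are invented, and real teams would keep this in a doc or a dbt repo rather than in code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """One versioned definition record: version, date, owner, exact spec."""
    metric: str
    version: int
    effective_date: str  # ISO date this version took effect
    owner: str
    spec: str            # the exact SQL / event filters / cohort logic

@dataclass
class Decision:
    """A decision cites the metric versions in effect when it was made."""
    title: str
    cites: dict  # metric name -> version cited, e.g. {"churn": 1}

definitions = [
    MetricDefinition("churn", 1, "2025-06-01", "head-of-finance",
                     "logo churn: churned_accounts / starting_accounts"),
    MetricDefinition("churn", 2, "2026-01-15", "head-of-finance",
                     "net revenue churn: (lost_mrr - expansion_mrr) / starting_mrr"),
]

def current_version(definitions, metric):
    return max(d.version for d in definitions if d.metric == metric)

def decisions_to_review(decisions, definitions):
    """Flag decisions that cite a version older than the current one."""
    return [d for d in decisions
            if any(ver < current_version(definitions, m)
                   for m, ver in d.cites.items())]

decisions = [Decision("2025 pricing change", {"churn": 1}),
             Decision("Q1 retention push", {"churn": 2})]
print([d.title for d in decisions_to_review(decisions, definitions)])
```

When the churn definition moves from v1 to v2, the pricing decision that cited v1 surfaces for review — the mechanical version of "did the metric drift or did the business?"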

What this looks like in practice

Most teams keep metric definitions in a Notion database or a dbt repo.

Either works. The discipline matters more than the tool.

The minimum viable version control is: a single doc per metric, a date, an owner, and a changelog at the bottom.

Synchronise treats metric definitions as workspace memory — versioned, dated, and linked to the decisions that depended on each version. When a definition changes, the decisions that cited the old definition flag for review.

You don't need the full system to start. You need a doc, a date, and the discipline to update both.

Questions

Who owns metric definitions?
One person per metric. Usually the head of the function most affected — head of product for activation, head of finance for LTV, head of marketing for attribution. Distributed ownership produces drift.

What's the right churn definition?
There isn't one. The right definition is the one your team has written down, dated, and is currently using. Pick the one that matches the decisions you're making — logo churn for customer-count decisions, revenue churn for ARR-impact decisions — and lock it.


Synchronise is the Cursor for Product Managers — an AI product operating layer that turns customer signal into evidence-backed PRDs, PBIs, briefs, and GTM artefacts.
