
Truth Layer


The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

Tags: KPI · UX · Directional · Lagging · Vanity Risk · Misused

Net Promoter Score (NPS)

A survey-based score measuring how likely users are to recommend the product, calculated as the difference between promoters and detractors.

Category: Trust
Measurement class: KPI

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a stack full of only KPIs while ignoring value, governance, or AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Frequency: Quarterly

Evaluation method

NPS = % promoters (9–10) − % detractors (0–6)
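The formula above can be sketched in a few lines of Python. This is a minimal illustration, not from the source: the function name `nps` and the list-of-scores input are assumptions, and real survey pipelines would also handle weighting and response filtering.

```python
def nps(scores):
    """Return NPS: % promoters (9-10) minus % detractors (0-6).

    scores: list of integer survey responses on a 0-10 scale.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)   # 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    # Passives (7-8) count in the denominator but not the numerator.
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
# -> 100 * (5 - 2) / 10 = +30.0
```

Note that passives dilute the score without appearing in the numerator, which is one reason two products with very different response distributions can share the same NPS.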

Signal type

Lagging

What it is best for

Tracking overall sentiment trends over time

What it tells you

A general temperature of user loyalty and perceived value, widely used as an industry benchmark.

What it does not tell you

What drives satisfaction or dissatisfaction. NPS is a symptom, not a diagnosis.

When to use it
  • Tracking overall sentiment trends over time
  • Benchmarking against industry peers
  • As one signal among many, never as the primary KPI
When not to use it
  • As the sole measure of product quality or UX success
  • To diagnose specific problems — it tells you there is a problem, not what it is
  • To compare across very different product categories
How leaders misuse it
  • Treating NPS as the definitive measure of customer satisfaction
  • Celebrating NPS improvements without understanding what changed
  • Comparing NPS across fundamentally different product types
  • Incentivizing teams on NPS, which creates score gaming rather than real improvement
Anti-patterns
  • Survey timing manipulation — surveying only after positive experiences
  • Cherry-picking respondent segments to improve the number
Instrumentation or evaluation guidance

Survey consistently: same trigger point, same frequency. Include an open-text follow-up question so scores can be tied to reasons.

Examples

A SaaS product celebrates an NPS of +42 but CSAT for the billing experience is 2.1/5. The NPS masks a severe pain point because most users rarely interact with billing.

Suggested decisions
  • NPS trend declining over 2+ quarters: conduct qualitative research to understand drivers
  • Do NOT set NPS targets for individual teams — this creates gaming behavior