
Truth Layer

The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

Tags: KPI, UX, Meaningful, Leading

Customer Effort Score (CES)

A survey-based score measuring how much effort a user had to exert to complete a task or interaction.

Category: Trust
Measurement class: KPI

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a stack full of only KPIs while ignoring value, governance, or AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Frequency: Per interaction

Evaluation method

Average score on 1-7 scale (1 = very low effort, 7 = very high effort)
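As an illustrative sketch of the evaluation method above (the `ces_score` helper and the sample responses are hypothetical, not part of any MIF tooling), the score reduces to a mean over valid 1-7 survey responses:

```python
# Hypothetical sketch: averaging CES survey responses on the 1-7 scale
# (1 = very low effort, 7 = very high effort). Names are illustrative.

def ces_score(responses: list[int]) -> float:
    """Return the mean CES for a batch of 1-7 survey responses."""
    valid = [r for r in responses if 1 <= r <= 7]  # drop out-of-range answers
    if not valid:
        raise ValueError("no valid responses")
    return sum(valid) / len(valid)

print(ces_score([2, 3, 1, 4, 2]))  # -> 2.4
```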

Signal type

Leading

What it is best for

Measuring effort reduction after UX improvements

What it tells you

Whether the experience is effortless or burdensome. Low effort is the strongest predictor of loyalty.

What it does not tell you

It does not reveal what caused the effort or how to reduce it.

When to use it
  • Measuring effort reduction after UX improvements
  • Identifying the most effortful parts of the experience
  • Predicting churn — high-effort experiences drive users away
When not to use it
  • For complex tasks where some effort is inherently necessary and expected
How leaders misuse it
  • Averaging CES across all interactions without context
Anti-patterns
  • Reducing effort by removing necessary features or safeguards
AI interpretation risks

Scenario: AI handles complex steps for users

What happens: CES improves dramatically because the AI reduces perceived effort

What it really means: Lower effort scores may reflect AI taking over rather than better design. If the AI fails or is unavailable, effort returns to baseline.

Recommendation: Measure CES with and without AI assistance. If the gap is extreme, the product may be too dependent on AI for basic usability.
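A minimal sketch of that comparison, assuming each response can be tagged with whether AI assistance was active during the task (the segment names and the 2.0-point gap threshold are illustrative choices, not a standard):

```python
# Hypothetical sketch: compare CES for AI-assisted vs. unassisted sessions.
# A large positive gap means tasks feel far harder without the AI,
# i.e. the product may be leaning on AI for basic usability.

def mean(xs: list[int]) -> float:
    return sum(xs) / len(xs)

def ai_dependency_gap(with_ai: list[int], without_ai: list[int]) -> float:
    """Gap in mean effort: unassisted minus assisted (higher = more dependent)."""
    return mean(without_ai) - mean(with_ai)

gap = ai_dependency_gap(with_ai=[2, 2, 3], without_ai=[6, 5, 6])
if gap > 2.0:  # illustrative threshold for "extreme" gap
    print(f"gap {gap:.1f}: product may be too dependent on AI for basic usability")
```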

Companion entries
Instrumentation or evaluation guidance

Ask immediately after task completion: "How easy was it to [specific task]?" Use a 7-point scale.
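The guidance above can be sketched as a post-task prompt builder; everything here (function name, scale constants, sample task) is hypothetical and only mirrors the wording in the text:

```python
# Hypothetical sketch: build the post-task CES question, fired
# immediately after task completion, on the document's 7-point scale.

SCALE_MIN, SCALE_MAX = 1, 7

def ces_prompt(task: str) -> str:
    """Return the survey question for a specific, just-completed task."""
    return (f"How easy was it to {task}? "
            f"({SCALE_MIN} = very low effort, {SCALE_MAX} = very high effort)")

print(ces_prompt("update your billing address"))
```

Keeping the task phrase specific (rather than "complete your task") ties the score to a single interaction, which is what the per-interaction frequency above assumes.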

Examples

A support flow has CES of 5.8. After implementing AI-suggested solutions, CES drops to 2.4. However, resolution quality declines 15%, suggesting users accept AI answers without verifying.

Suggested decisions
  • CES above 5 on critical tasks: treat as a high-priority UX problem
  • CES below 3: strong usability. Maintain and protect this experience.
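The two thresholds above map directly to a small triage rule; this sketch is hypothetical (the cutoffs come from the text, the labels and the middle "monitor" band are illustrative):

```python
# Hypothetical sketch: triage a CES reading using the suggested cutoffs
# (above 5 = problem, below 3 = strong). The in-between band is an assumption.

def triage(ces: float) -> str:
    if ces > 5:
        return "high-priority UX problem"
    if ces < 3:
        return "strong usability: maintain and protect"
    return "monitor"  # neither threshold fired

print(triage(5.8))  # -> high-priority UX problem
print(triage(2.4))  # -> strong usability: maintain and protect
```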