
Truth Layer

The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

Badges: KPI · UX · Meaningful · Lagging

System Usability Scale (SUS)

A standardized 10-question survey that produces a composite usability score from 0 to 100.

Category: Usability
Measurement class: KPI

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a stack full of only KPIs while ignoring value, governance, or AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Frequency: Quarterly or per study

Evaluation method

SUS algorithm: adjust each answer (odd-numbered items: score − 1; even-numbered items: 5 − score), sum the ten adjusted scores, and multiply by 2.5
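The standard SUS scoring rule can be sketched in Python. This is an illustrative helper, not code from the source; the function name and input validation are assumptions.

```python
def sus_score(responses):
    """Compute a SUS score from one respondent's 10 answers (each 1-5).

    Odd-numbered items (positively worded) contribute (answer - 1);
    even-numbered items (negatively worded) contribute (5 - answer).
    The adjusted sum (0-40) is multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 answers")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("each answer must be on a 1-5 scale")
    adjusted = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1, an odd item
        for i, r in enumerate(responses)
    )
    return adjusted * 2.5

# Strong agreement (5) on odd items and strong disagreement (1) on even
# items is the best possible response pattern:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

Note that per-respondent scores are usually averaged across the sample to produce the benchmark figure.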

Signal type

Lagging

What it is best for

Benchmarking perceived usability over time

What it tells you

A reliable benchmark of perceived usability that can be compared across products, industries, and time.

What it does not tell you

It cannot pinpoint specific usability issues; it measures perception, not observed behavior.

When to use it
  • Benchmarking perceived usability over time
  • Comparing before and after a major redesign
  • Cross-product or cross-competitor comparisons

When not to use it
  • To identify specific usability problems: it is a general score, not a diagnostic
  • With fewer than 12 respondents: the score will not be reliable

How leaders misuse it
  • Treating any score above 68 as "good" without checking the industry benchmark
  • Using SUS as the only usability metric without behavioral data
  • Modifying the questionnaire and still calling it SUS

Anti-patterns
  • Administering SUS without a specific task context, making responses vague
Companion entries

Instrumentation or evaluation guidance

Administer immediately after a task session. Use the standard 10-question format without modification.

Examples

A B2B SaaS product scores 54 on SUS. After a navigation redesign, the score rises to 72. The 18-point improvement confirms the redesign addressed perceived usability issues.

Suggested decisions
  • Score below 50: serious usability problems. Conduct task-based testing immediately.
  • Score 50-68: below average. Identify and fix the top friction points.
  • Score above 80: strong usability. Shift focus to engagement and adoption.
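The decision bands above can be expressed as a small lookup. This is a hypothetical helper, not part of the source; note the source leaves scores between 68 and 80 unaddressed, so the fallback message here is an assumption.

```python
def sus_recommendation(score: float) -> str:
    """Map a SUS score (0-100) to the suggested decision band."""
    if score < 50:
        return "Serious usability problems: conduct task-based testing immediately"
    if score <= 68:
        return "Below average: identify and fix the top friction points"
    if score > 80:
        return "Strong usability: shift focus to engagement and adoption"
    # The 68-80 range is not covered by the suggested decisions; this
    # fallback is an assumption, not part of the source.
    return "Between average and strong: monitor and re-test after changes"

print(sus_recommendation(54))  # below-average band, as in the B2B SaaS example
print(sus_recommendation(72))  # falls in the gap the source leaves open
```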