Truth Layer

The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

KPI · UX · Meaningful · Lagging · AI-Sensitive

Task Success Rate

The percentage of users who complete a defined task without critical errors.

Category: Usability
Measurement class: KPI

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a stack full of only KPIs while ignoring value, governance, or AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Frequency: Per release or sprint

Evaluation method

successful_completions / total_attempts × 100
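The formula above can be sketched as a small Python helper (the function name and zero-attempt handling are illustrative, not part of the source definition):

```python
def task_success_rate(successful_completions: int, total_attempts: int) -> float:
    """Percentage of attempts that ended in a defined success state."""
    if total_attempts == 0:
        return 0.0  # avoid division by zero when no attempts were recorded
    return successful_completions / total_attempts * 100

# e.g. 87 successes out of 100 attempts gives 87.0
```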

Signal type

Lagging

What it is best for

Evaluating whether a redesign improved core workflows

What it tells you

Whether people can accomplish what they came to do. A direct signal of whether the core experience works.

What it does not tell you

Why users failed, how much effort the task took, or whether they felt confident during the process.

When to use it
  • Evaluating whether a redesign improved core workflows
  • Comparing usability across product versions
  • Setting usability benchmarks for critical user journeys
When not to use it
  • When tasks are poorly defined or have multiple valid endpoints
  • As a standalone metric without understanding failure reasons
  • For exploratory features where there is no single correct path
How leaders misuse it
  • Celebrating high completion rates on tasks that are too simple to fail
  • Comparing rates across tasks with very different difficulty levels
  • Ignoring partial completions that still delivered value to the user
Anti-patterns
  • Measuring only happy-path completions and excluding edge cases
  • Setting success criteria so loosely that almost everything counts
AI interpretation risks

Scenario: AI assistant completes steps for the user

What happens: Task success rate inflates because the AI did the work

What it really means: High completion may reflect automation, not user capability or understanding

Recommendation: Track AI-assisted vs unassisted success separately. Add a user understanding check after AI-assisted completions.
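The recommendation above can be sketched as a small segmentation helper. This is a minimal sketch: the dict field names (`ai_assisted`, `succeeded`) are illustrative assumptions, not part of the source.

```python
from collections import defaultdict

def success_by_assistance(attempts):
    """Split task success rate by whether the AI assistant did the work.

    `attempts` is an iterable of dicts with boolean 'ai_assisted' and
    'succeeded' keys (field names are illustrative).
    """
    totals, wins = defaultdict(int), defaultdict(int)
    for attempt in attempts:
        key = "ai_assisted" if attempt["ai_assisted"] else "unassisted"
        totals[key] += 1
        wins[key] += 1 if attempt["succeeded"] else 0
    # percentage success rate per segment
    return {key: wins[key] / totals[key] * 100 for key in totals}
```

A large gap between the two segments suggests completion is driven by automation rather than user capability.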

Instrumentation or evaluation guidance

Define clear start and end events for each task. Track partial completions separately from full successes.

Sample events

task_started, task_completed, task_abandoned
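One way to apply the guidance above is to classify each session from its event stream, keeping partial completions separate from full successes. A minimal sketch, assuming only the three sample events:

```python
def classify_session(events):
    """Classify one task session from its ordered event names.

    Uses the three sample events above; 'partial' covers sessions that
    started but were neither completed nor explicitly abandoned.
    """
    if "task_completed" in events:
        return "success"
    if "task_abandoned" in events:
        return "abandoned"
    if "task_started" in events:
        return "partial"
    return "no_attempt"
```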
Examples

An e-commerce checkout has an 87% task success rate. The 13% failure is concentrated at the payment step, suggesting a form usability issue.

A SaaS onboarding wizard shows 62% completion. Segment analysis reveals new users with technical backgrounds complete at 85% while non-technical users complete at 41%.

Suggested decisions
  • If below 80%, investigate failure points and prioritize UX fixes
  • If above 95%, check whether the task definition is too broad
  • Compare across user segments to find who struggles most
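The decision rules above can be encoded directly; the thresholds come from the bullets and should be adjusted to your own benchmarks:

```python
def suggested_decision(rate_pct: float) -> str:
    """Map a task success rate (%) to the suggested follow-up.

    Thresholds (80 and 95) are taken from the bullets above.
    """
    if rate_pct < 80:
        return "investigate failure points and prioritize UX fixes"
    if rate_pct > 95:
        return "check whether the task definition is too broad"
    return "compare across user segments to find who struggles most"
```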