
Truth Layer

The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

KPI · UX · Meaningful · Leading · AI-Sensitive

Time on Task

The time it takes a user to complete a specific task from start to finish.

Category: Usability
Measurement class: KPI

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a stack full of only KPIs while ignoring value, governance, or AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Frequency: Per release or sprint

Evaluation method

task_end_timestamp - task_start_timestamp
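As a minimal sketch of this evaluation method (the record shape and field names here are assumptions, not a prescribed schema), time on task is simply the difference between the two timestamps:

```python
from datetime import datetime

# Hypothetical task record; the timestamp field names mirror the
# evaluation method above but the structure is illustrative only.
task = {
    "task_start_timestamp": datetime(2024, 5, 1, 10, 0, 0),
    "task_end_timestamp": datetime(2024, 5, 1, 10, 2, 30),
}

time_on_task = task["task_end_timestamp"] - task["task_start_timestamp"]
print(time_on_task.total_seconds())  # 150.0 seconds
```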

Signal type

Leading

What it is best for

Measuring efficiency improvements after a redesign

What it tells you

How efficient the experience is. Whether users can accomplish goals without unnecessary friction.

What it does not tell you

Whether users enjoyed the experience, or whether speed came at the cost of comprehension.

When to use it
  • Measuring efficiency improvements after a redesign
  • Comparing alternative flows in A/B tests
  • Identifying bottleneck steps within multi-step processes
When not to use it
  • For content-heavy experiences where longer time may mean deeper engagement
  • When comparing tasks with fundamentally different complexity
  • As a standalone efficiency metric without task success context
How leaders misuse it
  • Assuming faster always means better without checking comprehension
  • Comparing time on task across very different user segments without normalizing
  • Using mean instead of median, letting outliers distort the picture
Anti-patterns
  • Optimizing for speed at the expense of user confidence or understanding
  • Celebrating reduced time when users are actually skipping important steps
AI interpretation risks

Scenario: AI auto-fills form fields or pre-populates answers

What happens: Time on task drops dramatically

What it really means: Reduced time may reflect AI doing the work, not a better user experience. Users may not understand what was submitted.

Recommendation: Separate AI-assisted time from unassisted time. Check whether users review AI-generated content before proceeding.
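One way to act on this recommendation is to segment sessions before summarizing. A minimal sketch, assuming sessions are already tagged with an `ai_assisted` flag (the flag and record shape are hypothetical, not part of the entry):

```python
from statistics import median

# Hypothetical session records; how sessions get the ai_assisted tag
# depends on your instrumentation.
sessions = [
    {"duration_s": 135, "ai_assisted": True},
    {"duration_s": 150, "ai_assisted": True},
    {"duration_s": 255, "ai_assisted": False},
    {"duration_s": 270, "ai_assisted": False},
]

assisted = [s["duration_s"] for s in sessions if s["ai_assisted"]]
unassisted = [s["duration_s"] for s in sessions if not s["ai_assisted"]]

# Report the two medians side by side instead of one blended number.
print(median(assisted), median(unassisted))  # 142.5 262.5
```

Reporting the two populations separately makes it visible when a headline drop in time on task is driven entirely by AI doing the work.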

Companion entries

This entry is stronger when paired with:

Conflicts and tension points

Optimizing this entry alongside the following may create tension:

Instrumentation or evaluation guidance

Use median, not mean — outliers from abandoned sessions skew averages heavily.
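A quick illustration of why the median matters here, using made-up completion times where one abandoned session was left running:

```python
from statistics import mean, median

# Completion times in seconds (hypothetical); the last value is an
# abandoned session that was never closed.
times = [120, 135, 150, 140, 3600]

print(mean(times))    # 829 — dragged up by the single outlier
print(median(times))  # 140 — reflects the typical session
```

One outlier moves the mean by more than a factor of five while the median barely notices, which is exactly the distortion this guidance warns about.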

Sample events

task_started, task_completed
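A minimal sketch of turning these sample events into durations, assuming each event carries a task id and a timestamp (that event shape is an assumption; only the event names come from the entry):

```python
# Pair task_started / task_completed events into per-task durations.
events = [
    {"type": "task_started", "task_id": "t1", "ts": 100.0},
    {"type": "task_started", "task_id": "t2", "ts": 105.0},
    {"type": "task_completed", "task_id": "t1", "ts": 250.0},
    # t2 never completes: an abandoned task yields no duration,
    # which is why abandoned sessions must not leak into averages.
]

starts = {}
durations = []
for e in events:
    if e["type"] == "task_started":
        starts[e["task_id"]] = e["ts"]
    elif e["type"] == "task_completed" and e["task_id"] in starts:
        durations.append(e["ts"] - starts.pop(e["task_id"]))

print(durations)  # [150.0]
```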

Examples

Median time to complete a support ticket dropped from 4m 30s to 2m 15s after introducing AI-suggested responses. However, ticket reopen rate increased 20%, suggesting users accepted AI responses without review.

Suggested decisions
  • If time increases after a release, check for added friction or confusion
  • If time drops sharply with AI features, verify user understanding is maintained