

KPI · UX · Directional · Lagging

Support Ticket Volume

The number of support tickets or help requests submitted by users in a given period.

Category: Trust
Measurement class: KPI


Frequency: Weekly

Evaluation method

total_tickets_in_period
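The evaluation method is a straight count of tickets created inside the reporting window. A minimal sketch in Python, assuming hypothetical ticket records that carry only a `created` date:

```python
from datetime import date

# Hypothetical ticket records; only a creation date is assumed.
tickets = [
    {"created": date(2024, 3, 4)},
    {"created": date(2024, 3, 6)},
    {"created": date(2024, 3, 14)},
]

def total_tickets_in_period(tickets, start, end):
    """Count tickets created within the half-open window [start, end)."""
    return sum(1 for t in tickets if start <= t["created"] < end)

# With a weekly frequency, the window is one week.
weekly = total_tickets_in_period(tickets, date(2024, 3, 4), date(2024, 3, 11))
```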

Signal type

lagging

What it is best for

Identifying features that cause the most confusion

What it tells you

Where users are struggling enough to ask for help. A lagging indicator of UX friction.

What it does not tell you

It does not capture users who struggled but gave up without contacting support.

When to use it
  • Identifying features that cause the most confusion
  • Measuring impact of UX improvements on support burden
  • Estimating the cost of UX debt
When not to use it
  • As a standalone quality metric — it only captures the subset of users who ask for help
How leaders misuse it
  • Treating ticket reduction as evidence of UX improvement when it may reflect user resignation
Anti-patterns
  • Making support harder to reach to reduce ticket volume
AI interpretation risks

Scenario: AI chatbot handles support queries before they become tickets

What happens: Ticket volume drops

What it really means: Fewer tickets may mean AI resolved issues — or it may mean users got frustrated with the chatbot and gave up.

Recommendation: Track chatbot resolution rate and escalation rate. If chatbot containment is high but CSAT is low, users may be poorly served.
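The recommendation above can be sketched as a small health check. This is an illustrative Python sketch, not a prescribed implementation; the session shape (an `escalated` flag plus an optional 1–5 `csat` rating) and the 0.8 containment / 3.0 CSAT thresholds are assumptions:

```python
def chatbot_health(sessions):
    """Summarize containment, escalation, and CSAT for chatbot sessions.

    Each session is a dict with an 'escalated' bool and an optional
    'csat' rating (1-5). Shapes and thresholds are illustrative.
    """
    n = len(sessions)
    escalated = sum(s["escalated"] for s in sessions)
    containment = 1 - escalated / n
    rated = [s["csat"] for s in sessions if s.get("csat") is not None]
    csat = sum(rated) / len(rated) if rated else None
    # High containment with low CSAT suggests users are being contained,
    # not served: the exact pattern the entry warns about.
    warning = containment > 0.8 and csat is not None and csat < 3.0
    return {
        "containment": containment,
        "escalation": escalated / n,
        "csat": csat,
        "warning": warning,
    }
```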

Companion entries

This entry is stronger when paired with:

Instrumentation or evaluation guidance

Categorize tickets by feature area and issue type. Track trend, not absolute number.
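One way to follow this guidance, assuming tickets have already been tagged with a hypothetical `feature` area and ISO week:

```python
from collections import Counter

# Hypothetical tickets tagged with feature area and ISO week.
tickets = [
    {"feature": "search", "week": "2024-W10"},
    {"feature": "search", "week": "2024-W11"},
    {"feature": "billing", "week": "2024-W11"},
    {"feature": "search", "week": "2024-W11"},
]

def weekly_trend(tickets, feature):
    """Ticket counts per week for one feature area.

    Returns the series to compare week over week: the trend is the
    signal, not any single absolute number.
    """
    counts = Counter(t["week"] for t in tickets if t["feature"] == feature)
    return dict(sorted(counts.items()))
```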

Examples

After a navigation redesign, support tickets about "can't find [feature]" drop 40%, suggesting the redesign improved discoverability.

Suggested decisions
  • Spike in tickets after a release: investigate for introduced regressions
  • Categorize tickets to find the #1 feature area driving support load
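The first suggested decision can be automated with a simple threshold check. A sketch; the 1.5× baseline multiplier is an illustrative assumption, not part of the entry:

```python
from statistics import mean

def release_spike(baseline_weeks, post_release_week, threshold=1.5):
    """Flag a post-release spike: the week's ticket count exceeds
    threshold x the mean of recent baseline weeks.

    A True result is a prompt to investigate for introduced
    regressions, not proof of one.
    """
    return post_release_week > threshold * mean(baseline_weeks)
```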