Truth Layer

The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

Feature Adoption Rate

Badges: KPI · UX · Directional · Leading · AI-Sensitive

The percentage of active users who use a specific feature within a given time period.

Category: Adoption
Measurement class: KPI
Frequency: Monthly

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a stack full of only KPIs while ignoring value, governance, or AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Evaluation method

users_who_used_feature / total_active_users × 100
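
A minimal sketch of this calculation in Python, assuming event records shaped like the sample events listed further down (the user_id and event field names are illustrative, not a standard schema):

    def feature_adoption_rate(events, feature_event="feature_action_completed"):
        """Percentage of active users who triggered the feature event."""
        active_users = {e["user_id"] for e in events}
        feature_users = {e["user_id"] for e in events if e["event"] == feature_event}
        if not active_users:
            return 0.0
        return len(feature_users) / len(active_users) * 100

    events = [
        {"user_id": "u1", "event": "app_opened"},
        {"user_id": "u1", "event": "feature_action_completed"},
        {"user_id": "u2", "event": "app_opened"},
    ]
    print(feature_adoption_rate(events))  # 50.0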

Signal type

Leading

What it is best for

Evaluating feature launch success

What it tells you

Whether a feature is reaching the users it was designed for: an early, directional signal of feature-level product-market fit.

What it does not tell you

Whether users found the feature valuable or whether it solved their problem.

When to use it
  • Evaluating feature launch success
  • Identifying underutilized features for improvement or deprecation
  • Prioritizing investment across feature areas
When not to use it
  • To compare features with fundamentally different audience sizes
  • For features that are intentionally niche or expert-only
How leaders misuse it
  • Treating low adoption as failure without considering discoverability
  • Comparing adoption rates of core features with optional power-user features
  • Using adoption as a proxy for satisfaction
Anti-patterns
  • Adding aggressive prompts or forced tooltips to inflate adoption numbers
  • Counting accidental clicks as feature usage
AI interpretation risks

Scenario: AI recommendation engine surfaces features automatically

What happens: Feature adoption rate rises because AI pushes users toward features

What it really means: Higher adoption may reflect AI-driven exposure, not user-initiated discovery. Users may not have sought the feature independently.

Recommendation: Track organic adoption separately from AI-prompted adoption. Compare retention across both groups.
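
One way to implement this recommendation, as a sketch: tag each feature event with how the user arrived, then compute the rate per cohort. The source field and its two values are assumptions about your instrumentation, not part of any standard schema.

    def adoption_by_source(events, feature_event="feature_action_completed"):
        """Adoption rate split into organic vs AI-prompted cohorts."""
        active = {e["user_id"] for e in events}
        rates = {}
        for source in ("organic", "ai_recommendation"):
            users = {
                e["user_id"] for e in events
                if e["event"] == feature_event and e.get("source") == source
            }
            rates[source] = len(users) / len(active) * 100 if active else 0.0
        return rates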

Companion entries

This entry is stronger when paired with:

Instrumentation or evaluation guidance

Define "use" clearly — a single click versus meaningful engagement. Track per-feature over consistent time windows.

Sample events

feature_opened, feature_action_completed
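
One hedged way to operationalize "meaningful engagement" rather than a single click: count only users who completed a feature action inside a fixed window, ignoring bare opens. The timestamp field holding a datetime value is an assumption about your event schema.

    from datetime import timedelta

    def meaningful_users(events, window_start, window_days=30):
        """Users who completed a feature action within a fixed window."""
        window_end = window_start + timedelta(days=window_days)
        return {
            e["user_id"] for e in events
            if e["event"] == "feature_action_completed"
            and window_start <= e["timestamp"] < window_end
        }
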
Examples

A design tool launches a new auto-layout feature. After 4 weeks, 23% of MAU have used it. Heatmap analysis shows the entry point is buried in a submenu.

Suggested decisions
  • Below 10%: investigate discoverability. Is the feature easy to find?
  • Between 10% and 40%: check whether the feature matches the right user segment
  • Above 60%: feature is well-adopted. Measure quality of engagement next.
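
These bands translate directly into a small triage helper. This sketch mirrors the thresholds above and leaves the 40% to 60% range, which the list does not address, to judgment:

    def suggested_action(rate):
        """Map an adoption rate (%) onto the decision bands above."""
        if rate < 10:
            return "Investigate discoverability"
        if rate <= 40:
            return "Check fit with the intended user segment"
        if rate > 60:
            return "Well adopted; measure quality of engagement next"
        return "Between bands; apply judgment"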