
Truth Layer

The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

Badges: AI Signal, Directional, Leading, AI-Sensitive

AI Suggestion Acceptance Rate

The percentage of AI-generated suggestions that users accept and apply.

Category: AI Quality
Measurement class: AI Signal

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a measurement stack made up entirely of KPIs while ignoring value, governance, and AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Frequency: Weekly

Evaluation method

accepted_suggestions / total_suggestions_shown × 100
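The evaluation method above maps directly to code. A minimal sketch, with hypothetical counts for illustration:

```python
def acceptance_rate(accepted_suggestions: int, total_suggestions_shown: int) -> float:
    """AI Suggestion Acceptance Rate: accepted / shown x 100."""
    if total_suggestions_shown == 0:
        return 0.0  # avoid division by zero before any suggestions are shown
    return accepted_suggestions / total_suggestions_shown * 100

# Hypothetical counts for illustration
print(acceptance_rate(360, 500))  # → 72.0
```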

Signal type

Leading

What it is best for

Evaluating AI suggestion quality

What it tells you

Whether the AI’s suggestions are relevant, trustworthy, and useful to users.

What it does not tell you

Whether accepted suggestions led to good outcomes, or whether users understood what they accepted.

When to use it
  • Evaluating AI suggestion quality
  • Identifying which types of suggestions users find most valuable
  • Measuring the impact of AI model improvements
When not to use it
  • As a standalone quality metric without checking downstream outcomes
How leaders misuse it
  • Treating high acceptance as proof of quality when users may be accepting defaults without review
  • Optimizing for acceptance rate rather than outcome quality
Anti-patterns
  • Making suggestions hard to dismiss to inflate acceptance rates
AI interpretation risks

Scenario: AI suggestions are the default option

What happens: Acceptance rate is artificially high because users rarely change defaults

What it really means: High acceptance may reflect default bias, not genuine preference or trust.

Recommendation: A/B test with non-default suggestions. Compare deliberate accepts vs passive non-rejection.
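One way to sketch the recommended A/B comparison, using hypothetical per-arm event records, is to compute acceptance separately for users shown the suggestion as the default versus as a non-default option:

```python
# Hypothetical event records: (arm, accepted)
events = [
    ("default", True), ("default", True), ("default", True), ("default", False),
    ("non_default", True), ("non_default", False), ("non_default", False), ("non_default", False),
]

def arm_rate(events, arm):
    """Acceptance rate (percent) within one experiment arm."""
    shown = [accepted for a, accepted in events if a == arm]
    return 100 * sum(shown) / len(shown) if shown else 0.0

default_rate = arm_rate(events, "default")          # 75.0 in this sample
non_default_rate = arm_rate(events, "non_default")  # 25.0 in this sample
# A large gap suggests default bias rather than genuine preference.
print(default_rate - non_default_rate)
```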

Companion entries
Instrumentation or evaluation guidance

Track suggestion type, context, and user segment. Distinguish explicit accepts from passive defaults.

Sample events

suggestion_shown, suggestion_accepted, suggestion_dismissed
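A minimal sketch of rolling these events up, assuming a hypothetical `accept_mode` field on accept events so explicit accepts can be separated from passive defaults:

```python
from collections import Counter

# Hypothetical event stream; `accept_mode` distinguishes deliberate accepts
# from suggestions that were applied only because they were the default.
events = [
    {"name": "suggestion_shown", "type": "email_subject"},
    {"name": "suggestion_accepted", "type": "email_subject", "accept_mode": "explicit"},
    {"name": "suggestion_shown", "type": "email_body"},
    {"name": "suggestion_accepted", "type": "email_body", "accept_mode": "passive_default"},
    {"name": "suggestion_shown", "type": "email_body"},
    {"name": "suggestion_dismissed", "type": "email_body"},
]

counts = Counter(e["name"] for e in events)
explicit = sum(e.get("accept_mode") == "explicit" for e in events)

overall = 100 * counts["suggestion_accepted"] / counts["suggestion_shown"]
explicit_only = 100 * explicit / counts["suggestion_shown"]
print(overall, explicit_only)  # the gap flags passive, default-driven accepts
```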
Examples

An AI email composer shows 72% suggestion acceptance. However, emails that use AI suggestions receive 15% lower reply rates, a sign that the suggestions are convenient but not high quality.

Suggested decisions
  • If acceptance rate is above 80%, check whether users are reviewing suggestions or blindly accepting
  • If acceptance rate is below 30%, investigate suggestion relevance and timing
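The decision thresholds above can be sketched as a simple triage helper; the band boundaries come from the guidance above, and the return strings are illustrative:

```python
def triage(acceptance_rate_pct: float) -> str:
    """Map an acceptance rate to a suggested follow-up action."""
    if acceptance_rate_pct > 80:
        return "check for blind acceptance: are users reviewing suggestions?"
    if acceptance_rate_pct < 30:
        return "investigate suggestion relevance and timing"
    return "within expected range: pair with downstream outcome metrics"

print(triage(85))
print(triage(25))
print(triage(55))
```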