
Truth Layer

The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

Badges: AI Signal · AI · Meaningful · Lagging · AI-Sensitive

AI Task Completion Rate

The percentage of tasks where AI assistance leads to successful completion.

Category: AI Quality
Measurement class: AI Signal

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a stack full of only KPIs while ignoring value, governance, or AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Frequency: Weekly

Evaluation method

ai_assisted_completions / ai_assisted_attempts × 100
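As a minimal sketch of the evaluation method above (assuming the two quantities are tracked as plain integer counters; the function name is illustrative):

```python
def ai_task_completion_rate(ai_assisted_completions, ai_assisted_attempts):
    """Percentage of AI-assisted task attempts that end in successful completion."""
    if ai_assisted_attempts == 0:
        return 0.0  # no attempts recorded yet; avoid division by zero
    return ai_assisted_completions / ai_assisted_attempts * 100

print(ai_task_completion_rate(78, 100))  # 78.0
```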

Signal type

Lagging

What it is best for

Evaluating whether AI features are effective

What it tells you

Whether the AI is actually helping users accomplish their goals.

What it does not tell you

Whether users understood the AI's work or could have succeeded without it.

When to use it
  • Evaluating whether AI features are effective
  • Comparing AI-assisted and unassisted success rates
  • Identifying task types where AI adds genuine value
When not to use it
  • Without comparing to unassisted completion rates — a high AI rate is meaningless if unassisted is equally high
How leaders misuse it
  • Celebrating high AI task completion without checking user understanding
  • Attributing all task success to the AI when users would have succeeded anyway
Anti-patterns
  • Counting tasks as "AI-completed" when the AI merely provided a suggestion the user ignored
AI interpretation risks

Scenario: AI completes tasks that users could have done themselves

What happens: AI completion rate looks high, but it’s solving easy problems

What it really means: The metric inflates the perceived value of AI by counting tasks users would have completed anyway.

Recommendation: Compare AI-assisted completion with unassisted baseline. The difference is the real value.
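The recommended baseline comparison can be sketched as follows (the function name and parameters are illustrative, not part of the source):

```python
def ai_lift_points(ai_completions, ai_attempts,
                   unassisted_completions, unassisted_attempts):
    """Percentage-point gap between AI-assisted and unassisted completion rates.

    A positive gap is the 'real value' the recommendation refers to.
    """
    ai_rate = ai_completions / ai_attempts * 100
    baseline_rate = unassisted_completions / unassisted_attempts * 100
    return ai_rate - baseline_rate

print(ai_lift_points(78, 100, 65, 100))  # 13.0
```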

Instrumentation or evaluation guidance

Define "AI-assisted" clearly. Track what the AI contributed versus what the user did.

Sample events

ai_assist_started, ai_assist_accepted, task_completed_with_ai
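One possible way to derive the metric from the sample events above, assuming each event carries a task identifier (the `(task_id, event_name)` tuple shape is an assumption for illustration):

```python
def completion_rate_from_events(events):
    """Compute AI task completion rate from (task_id, event_name) pairs.

    A task counts as an AI-assisted attempt once 'ai_assist_started' fires,
    and as a completion once 'task_completed_with_ai' fires for the same task.
    """
    attempts, completions = set(), set()
    for task_id, event_name in events:
        if event_name == "ai_assist_started":
            attempts.add(task_id)
        elif event_name == "task_completed_with_ai":
            completions.add(task_id)
    if not attempts:
        return 0.0
    return len(completions & attempts) / len(attempts) * 100

events = [
    ("t1", "ai_assist_started"),
    ("t1", "ai_assist_accepted"),
    ("t1", "task_completed_with_ai"),
    ("t2", "ai_assist_started"),
]
print(completion_rate_from_events(events))  # 50.0
```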
Examples

An AI code assistant achieves 78% task completion rate for debugging tasks. Unassisted debugging completes at 65%. The 13-point lift confirms genuine AI value for this use case.

Suggested decisions
  • If AI completion rate significantly exceeds unassisted rate, the AI is adding real value
  • If the rates are similar, the AI may not be necessary for this task type