
Truth Layer

The Truth Layer is the badge system that tells you how trustworthy, directional, or risky a measure is.

Why it matters: It helps teams separate meaningful signals from vanity, misuse, or AI distortion before they optimize the wrong thing.

Example: A metric can be Meaningful, Leading, or Vanity Risk.

KPI · UX · Directional · Vanity Risk · AI-Sensitive

Average Session Duration

The average time users spend in a single session.

Category: Engagement
Measurement class: KPI

Measurement Class

A measurement class tells you what kind of measure something is, not just what topic it covers.

Why it matters: It stops teams from building a stack full of only KPIs while ignoring value, governance, or AI signals.

Example: Governance Metric and AI Signal are two different measurement classes.

Frequency: Weekly

Evaluation method

sum(session_durations) / total_sessions
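The evaluation method above can be sketched in a few lines (a minimal illustration; the function and variable names are mine, not part of the framework):

```python
def average_session_duration(session_durations):
    """Average time per session: sum of durations divided by session count."""
    if not session_durations:
        return 0.0  # avoid division by zero when there are no sessions
    return sum(session_durations) / len(session_durations)

# durations in seconds for five sessions
print(average_session_duration([120, 300, 45, 600, 90]))  # → 231.0
```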

Signal type

Coincident

What it is best for

Content and media products where longer time indicates consumption

What it tells you

How much time users invest per visit. Can indicate engagement depth for some product types.

What it does not tell you

Whether the time spent was productive, enjoyable, or wasted on confusion.

When to use it
  • Content and media products where longer time indicates consumption
  • Tracking session depth trends over time for the same product
When not to use it
  • For task-oriented products where shorter sessions are better
  • As a primary engagement metric without context about what users did during the session
How leaders misuse it
  • Celebrating longer sessions in a task-oriented product — users may be lost
  • Comparing session duration across fundamentally different product types
  • Using average instead of median, letting outlier sessions distort the picture
Anti-patterns
  • Adding unnecessary steps or delays to extend session time
  • Counting inactive tabs as active sessions
AI interpretation risks

Scenario: AI chatbot or assistant keeps users engaged in conversation

What happens: Session duration increases significantly

What it really means: Users may be spending time talking to the AI rather than accomplishing goals. Longer sessions may reflect dependency or confusion, not productive engagement.

Recommendation: Segment sessions by whether AI was used. Compare task completion rates between AI-heavy and AI-light sessions.
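The recommended segmentation could be sketched as follows (a hypothetical session shape is assumed: `used_ai` and `completed_task` are illustrative field names, not a fixed schema):

```python
def compare_ai_segments(sessions):
    """Split sessions by AI usage and compare task completion rates per segment."""
    segments = {"ai_heavy": [], "ai_light": []}
    for s in sessions:
        key = "ai_heavy" if s["used_ai"] else "ai_light"
        segments[key].append(s)
    rates = {}
    for name, group in segments.items():
        if group:
            # fraction of sessions in this segment that completed the task
            rates[name] = sum(s["completed_task"] for s in group) / len(group)
        else:
            rates[name] = None  # no sessions fell into this segment
    return rates

sessions = [
    {"used_ai": True, "completed_task": False},
    {"used_ai": True, "completed_task": True},
    {"used_ai": False, "completed_task": True},
    {"used_ai": False, "completed_task": True},
]
print(compare_ai_segments(sessions))  # → {'ai_heavy': 0.5, 'ai_light': 1.0}
```

A large gap between the two completion rates is the signal the recommendation is after: long AI-heavy sessions with low completion suggest confusion or dependency rather than engagement.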

Companion entries

This entry is stronger when paired with:

Conflicts and tension points

Optimizing this entry alongside the following may create tension:

Instrumentation or evaluation guidance

Cap session length at a reasonable maximum (30-60 minutes) to exclude forgotten tabs. Use the median rather than the mean so a few outlier sessions do not distort the figure.
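Both guidelines together might look like this (a sketch; the 45-minute cap is one arbitrary choice within the suggested 30-60 minute range):

```python
import statistics

SESSION_CAP_SECONDS = 45 * 60  # cap chosen from the suggested 30-60 minute range

def robust_session_duration(durations_seconds):
    """Cap each session at the maximum, then take the median to resist outliers."""
    capped = [min(d, SESSION_CAP_SECONDS) for d in durations_seconds]
    return statistics.median(capped)

# one forgotten tab (4 hours) no longer dominates the figure
print(robust_session_duration([180, 240, 300, 14400]))  # → 270.0
```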

Sample events

session_start, session_end
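One way to turn those two sample events into durations (the event shape with `session_id`, `type`, and `ts` fields is an assumption for illustration):

```python
def durations_from_events(events):
    """Pair session_start/session_end events by session_id into durations.

    Each event is assumed to look like
    {"session_id": str, "type": "session_start" | "session_end", "ts": float}.
    Sessions missing either endpoint are skipped.
    """
    starts, ends = {}, {}
    for e in events:
        if e["type"] == "session_start":
            starts[e["session_id"]] = e["ts"]
        elif e["type"] == "session_end":
            ends[e["session_id"]] = e["ts"]
    return {
        sid: ends[sid] - starts[sid]
        for sid in starts
        if sid in ends and ends[sid] >= starts[sid]
    }

events = [
    {"session_id": "a", "type": "session_start", "ts": 0.0},
    {"session_id": "a", "type": "session_end", "ts": 120.0},
    {"session_id": "b", "type": "session_start", "ts": 10.0},  # no end event: dropped
]
print(durations_from_events(events))  # → {'a': 120.0}
```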
Examples

A documentation site sees average session duration rise from 3 to 8 minutes after a reorganization. User feedback confirms longer sessions reflect deeper reading, not navigation confusion.

Suggested decisions
  • Rising session duration: check if users are engaging deeper or getting stuck
  • Falling session duration: check if efficiency improved or value decreased