Drift Analysis

Drift Analysis helps organizations determine whether a monitored AI system is still operating in line with its intended purpose. It reviews recent prompt and response activity against the system’s stated purpose, approved uses, business goals, and historical activity, then produces a structured risk assessment to show where meaningful drift may be emerging.

This page is designed to surface more than isolated prompt issues. While Prompt Monitoring focuses on individual violations and conversations, Drift Analysis looks at whether the overall pattern of use, behavior, grounding, or configuration is moving away from what the AI system was meant to do.

Purpose

Drift Analysis supports ongoing AI oversight by helping organizations:

  • assess whether users are interacting with an AI system within its intended scope
  • identify whether the system is responding in ways that remain aligned to its approved purpose
  • detect grounding or capability gaps that may indicate degraded usefulness over time
  • surface configuration-related drift that may require investigation
  • prioritize follow-up using a structured risk score and severity level

This page is intended for teams that need to move beyond single prompt review and understand whether an AI system is gradually shifting from its original role or expected performance.

How Drift Analysis works

When Run Drift Analysis is selected, the platform analyzes recent prompt and response activity for the AI system and compares that activity against the system’s defined purpose and baseline context. The current implementation evaluates four weighted drift buckets:

  • Input Drift (25%) – whether users are asking for things outside the system’s intended purpose
  • Behavior / Output Drift (35%) – whether the AI is responding in ways that move beyond or away from its intended role
  • Tool / Grounding Drift (25%) – whether responses suggest retrieval issues, stale information, unreliable grounding, or capability gaps
  • Config / Version Drift (15%) – whether there are signs of configuration changes, version mismatches, or prompt-level modifications

The weighted scores are combined into an overall risk score out of 100, which is then mapped to a risk level of Low, Medium, Medium-High, High, or Critical. The platform also applies safety-floor logic: when behavior or grounding signals are serious enough on their own, the overall score is raised to a minimum level rather than being diluted by healthy buckets.
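
The weighted combination can be sketched as follows. The bucket weights match the percentages above; the risk-level cut-offs and the specific floor rule are illustrative assumptions, since the article does not publish the platform's exact thresholds.

```python
# Sketch of the weighted drift scoring model. Weights follow the article;
# the level cut-offs and the safety-floor threshold are assumed values.

WEIGHTS = {
    "input": 0.25,            # Input Drift
    "behavior_output": 0.35,  # Behavior / Output Drift
    "tool_grounding": 0.25,   # Tool / Grounding Drift
    "config_version": 0.15,   # Config / Version Drift
}

# Assumed cut-offs mapping the 0-100 score to a risk level.
LEVELS = [
    (80, "Critical"),
    (65, "High"),
    (50, "Medium-High"),
    (30, "Medium"),
    (0, "Low"),
]

def overall_risk(bucket_scores: dict[str, float]) -> tuple[float, str]:
    """Combine per-bucket scores (each 0-100) into an overall score and level."""
    score = sum(bucket_scores[bucket] * weight for bucket, weight in WEIGHTS.items())
    # Safety floor (assumption): a severe behavior or grounding signal raises
    # the minimum score so it cannot be averaged away by the other buckets.
    if max(bucket_scores["behavior_output"], bucket_scores["tool_grounding"]) >= 85:
        score = max(score, 65.0)
    level = next(label for cutoff, label in LEVELS if score >= cutoff)
    return round(score, 1), level
```

For example, a system with moderate behavior drift but clean configuration would land in the lower bands, while a single severe grounding signal would trigger the floor and escalate the level directly.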

What this page is for

Drift Analysis is useful when you want to understand whether an AI system is still aligned overall, not just whether a single prompt was problematic.

For example, this page can help identify when:

  • users are increasingly asking the system to do work outside its intended domain
  • the system is answering off-purpose requests instead of staying within scope
  • on-topic prompts are failing too often, suggesting a grounding or knowledge gap
  • configuration or connected-system changes may be affecting how the AI behaves
  • repeated patterns indicate the system is moving away from its approved purpose over time

It is best used alongside Prompt Monitoring and Knowledge & Tools Monitoring. Prompt Monitoring shows what happened at the interaction level, while Drift Analysis shows whether those interactions form a broader pattern of concern.

Overview of Page Sections

Run Drift Analysis

The page includes a Run Drift Analysis action that starts the analysis for the selected AI system. Once the analysis completes, the page refreshes and displays a risk score and supporting findings.

If no drift events have been recorded yet, the page displays an empty state prompting the user to run the analysis.

Summary card

After analysis, the page shows a summary view for the most recent run. The platform stores a daily drift summary for each AI system, including the current status, top drivers, decision summary, open event count, maximum risk, and most recent event timestamp.

This gives users a quick view of whether the system is currently operating normally, showing warning signs, or requires more urgent review.

Drift events

Each analysis can create one or more drift events for the monitored AI system. Drift events are stored with a unique drift ID, detected time, overall risk score, risk level, primary bucket, signals summary, bucket scores, probable root cause, recommended actions, owner, and workflow status.

These events form the main review record for drift monitoring.
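
The stored fields listed above can be pictured as a simple record. The field names and types here are assumptions for illustration; only the list of attributes comes from the article.

```python
# Illustrative shape of a drift event record; field names are assumed.
from dataclasses import dataclass

@dataclass
class DriftEvent:
    drift_id: str               # unique drift ID
    detected_at: str            # detection time (ISO 8601 assumed)
    overall_risk_score: float   # 0-100
    risk_level: str             # Low .. Critical
    primary_bucket: str         # e.g. "behavior_output"
    signals_summary: str
    bucket_scores: dict         # per-bucket scores
    probable_root_cause: str
    recommended_actions: list
    owner: str
    status: str = "Open"        # workflow status, see Status management
```

A new event starts in the Open status by default, which fits the review workflow described below in Status management.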

Event detail and risk breakdown

Each drift event can be expanded to show:

  • the event ID and detection date
  • the current risk level and overall score
  • the primary drift bucket
  • the status
  • a signals summary
  • bucket-by-bucket scores
  • controls fired
  • probable root cause
  • recommended actions
  • owner information

This makes it easier to understand not only that drift was detected, but also what kind of drift is driving the result.

Status management

Drift events can be updated through a status workflow. Available statuses are:

  • Open
  • Under Review
  • Dismissed
  • Mitigated

This supports manual review and resolution without forcing a fixed operating model for every organization.
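
A minimal sketch of that workflow, assuming any of the four statuses can be set directly (the article does not constrain transitions between them):

```python
# Status workflow sketch; the status names come from the article,
# the unrestricted transitions are an assumption.
VALID_STATUSES = {"Open", "Under Review", "Dismissed", "Mitigated"}

def update_status(event: dict, new_status: str) -> dict:
    """Set a drift event's workflow status, rejecting unknown values."""
    if new_status not in VALID_STATUSES:
        raise ValueError(f"Unknown status: {new_status}")
    event["status"] = new_status
    return event
```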

Incident escalation

Where a drift finding requires formal escalation, users can create an incident directly from the drift event. If an incident has already been linked, the page indicates that the event has been reported.

This allows significant drift findings to move directly into the incident workflow when investigation or remediation needs to be tracked more formally.

Risk Scoring Model Reference

The page includes a Risk Scoring Model Reference section that explains the drift buckets, impact weights, control modifiers, escalation bump logic, and the meaning of each risk threshold.

This helps reviewers interpret the output consistently and understand what the score is intended to represent.

Relationship to Knowledge and Tools

Drift Analysis focuses primarily on whether observed usage and behavior remain aligned to the AI system’s purpose. However, drift can also be linked to changes in connected knowledge or tools. In the Knowledge & Tools area, comparing the current state to a baseline can create a drift event when configuration drift is detected.

This is important because drift is not always caused by user behavior alone. It can also result from changes to what the AI system knows, what it can access, or how it is configured.

Notes

  • Drift Analysis is intended to assess alignment at the system level, not replace prompt-by-prompt review.
  • A system can have prompt violations without meaningful purpose drift, and it can also show drift without a single severe prompt. These are related but different monitoring signals.
  • Capability gaps are treated as part of drift analysis where the system repeatedly fails to answer on-topic prompts it should reasonably be able to handle.
  • If there are no recent prompts in the normal analysis window, the platform can fall back to the most recent available prompt history. If no prompts exist at all, no analysis is performed.
