AI Monitoring Dashboard

The AI Monitoring Dashboard provides a centralized view of all AI systems currently being monitored in your environment. It brings together prompt activity, open vulnerabilities, reported incidents, drift indicators, knowledge sources, and connected tools in a single cross-system view, allowing teams to quickly identify where review or intervention may be required.

This page is designed for ongoing operational oversight. Rather than focusing on one AI system at a time, it provides a macro view across the monitored estate so users can assess overall activity, identify concentration of risk, and move into detailed monitoring where needed.

Purpose

The AI Monitoring Dashboard supports responsible AI oversight by helping organizations:

  • maintain visibility across all monitored AI systems from one place
  • identify systems with elevated prompt volume, open vulnerabilities, or reported incidents
  • surface systems showing signs of drift or configuration change
  • understand the scale of connected knowledge sources and tools across monitored systems
  • move efficiently from portfolio-level visibility into detailed system-level investigation

This dashboard serves as the primary entry point for monitoring activity after an AI system has been onboarded for runtime oversight.

Overview of Page Sections

Summary cards

The top section of the dashboard presents high-level monitoring metrics across all monitored AI systems. These summary cards provide an immediate snapshot of activity and risk exposure across the environment.

The cards display:

  • Total Prompts – the total number of prompts captured across monitored systems
  • Vulnerabilities (Open) – the number of currently open vulnerabilities
  • Incidents Reported – the number of incidents raised from monitored AI activity
  • Monitored Systems – the number of AI systems actively included in monitoring

These measures help users quickly determine whether there has been an increase in monitored activity, a concentration of unresolved issues, or a broader need for investigation.

Filters and controls

The dashboard includes controls that help users refine the view and refresh monitoring data across systems.

These controls include:

  • Violation Status Filter – filters the vulnerability counts shown on the dashboard by their current status (for example, open)
  • Search Systems – allows users to search for a monitored AI system by name
  • Refresh All Systems – triggers a full refresh across connected monitored systems so the dashboard reflects the latest prompts, detections, discovered knowledge sources, tools, and drift signals

These controls are especially useful when reviewing a large monitored environment or when validating whether new monitoring data has been captured.

Monitored AI Systems table

The main table provides a system-level breakdown of monitoring data across all monitored AI systems. Each row represents a monitored AI system and surfaces the key indicators needed for triage and comparison.

The table includes:

  • System – the name of the monitored AI system
  • Prompts – the total number of prompts captured for that system
  • Violations – the number of detected vulnerabilities matching the selected status filter
  • Max Drift – the highest drift risk score currently associated with that system
  • Incidents – the number of incidents linked to that system
  • Knowledge – the number of discovered knowledge sources associated with that system
  • Tools – the number of discovered tools or connected capabilities associated with that system
  • Last Prompt – the relative timestamp of the most recent captured prompt

Viewed together, these fields allow users to compare systems side by side and identify which ones are most active, most exposed, or most likely to require immediate attention.
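To make the comparison concrete, the sketch below shows one way a team might rank systems for triage using the table fields described above. The row data, field names, and weights are purely illustrative assumptions; the platform does not necessarily expose monitoring data in this form.

```python
# Hypothetical triage sketch: ranking monitored AI systems by combining
# open violations, incidents, and drift risk. All data and weights here
# are illustrative assumptions, not platform output.

rows = [
    {"system": "Support Copilot", "prompts": 1840, "violations": 7,
     "max_drift": 0.82, "incidents": 2},
    {"system": "HR Assistant", "prompts": 310, "violations": 1,
     "max_drift": 0.15, "incidents": 0},
    {"system": "Sales Bot", "prompts": 920, "violations": 4,
     "max_drift": 0.55, "incidents": 1},
]

def triage_score(row):
    # Weight incidents and open violations most heavily, then drift.
    # The weights are arbitrary and chosen for illustration only.
    return row["violations"] * 3 + row["incidents"] * 5 + row["max_drift"] * 10

# Highest-scoring systems are reviewed first.
ranked = sorted(rows, key=triage_score, reverse=True)
for row in ranked:
    print(f'{row["system"]}: score {triage_score(row):.1f}')
```

The same idea applies however the data is obtained: systems with several co-occurring indicators (violations plus incidents plus drift) sort above systems that are merely busy.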

System drill-in

Selecting a system from the table opens that system’s detailed monitoring view. This enables users to move from macro-level oversight into system-specific review, including prompt monitoring, drift investigation, and knowledge or tool analysis.

How to use this page

A common workflow is to begin with the AI Monitoring Dashboard to understand the current state of monitored AI activity across the organization, then drill into the systems that show the strongest indicators of risk or change.

Use this page to:

  • review total monitored activity across all systems
  • identify systems with the highest number of open vulnerabilities
  • spot systems with elevated drift risk
  • check whether incidents have already been raised against a monitored system
  • compare the number of knowledge sources and tools associated with different systems
  • refresh monitoring data before beginning a review
  • open a system to investigate detailed monitoring results

This makes the dashboard useful both for routine governance review and for operational triage when teams need to quickly determine where to focus their attention first.

Notes

  • The AI Monitoring Dashboard is a cross-system monitoring page. It is intended to provide an aggregated view across all monitored AI systems rather than a detailed review of a single system.
  • Counts and indicators on this page are designed to support prioritization and triage. Detailed review should be completed from the individual system monitoring pages.
  • Drift, vulnerabilities, incidents, knowledge sources, and tools should be interpreted together when assessing overall system risk.
  • This dashboard plays a similar governance role to other central oversight pages in the platform by providing a single operational view across the monitored inventory.
