RexCommand Release April 2026

Prompt Monitoring

The Prompt Monitoring feature gives organizations a dedicated workspace to review the prompts and responses captured for a monitored AI system, investigate detected violations, and escalate serious issues as incidents. 

Located under each monitored AI system, Prompt Monitoring groups prompts into conversation threads with turn order and color-codes alerts by type (conversation violations, sensitivity label findings, and standard policy violations). A detail panel shows the full prompt and response content and the session context, and lets reviewers reveal a masked user identity when an investigation requires it; each reveal is recorded in an Identity Access Log to maintain an auditable record. Violations can be reviewed, tracked through the Open, Under Review, Dismissed, and Mitigated statuses, and reported as incidents directly from the workflow.

Prompt Monitoring currently supports AI systems in Microsoft 365 Copilot Chat and Copilot Studio, with support for GitHub Copilot and ChatGPT coming soon.

Prompt Monitoring is supported by three integrated capabilities:

  • Guardrails — Define the policies that drive detection across monitored AI systems. Guardrails combine default policies, custom rules, and sensitivity labels, and can be saved as reusable Guardrail Packages applied across multiple AI systems. Administrators can manage tenant-wide policy from the Guardrails → Prompt Policies page, and apply targeted overrides at the AI-system level when needed.
  • Knowledge & Tools Monitoring — Discover the knowledge sources, tools, and connectors attached to a monitored AI system, set an approved configuration as a baseline, and compare future scans against that baseline to surface added, removed, or changed items. Baseline history makes configuration drift visible across the lifecycle of the system.
  • Drift Analysis — Assess whether an AI system is still operating in line with its intended purpose. Each run produces a weighted risk score across four buckets — Input Drift, Behavior/Output Drift, Tool/Grounding Drift, and Config/Version Drift — mapped to a Low through Critical risk level, with drift events that include probable root cause, recommended actions, and the option to escalate as an incident.
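The baseline comparison behind Knowledge & Tools Monitoring is, conceptually, a three-way diff between an approved configuration and a later scan. A minimal sketch, assuming items can be keyed by a stable ID (the data shapes here are illustrative, not the product's actual schema):

```python
def diff_against_baseline(baseline: dict, scan: dict) -> dict:
    """Compare a new scan of knowledge sources, tools, and connectors
    against an approved baseline. Each dict maps an item ID to its
    configuration. Illustrative only, not the product's real data model."""
    added = {k: scan[k] for k in scan.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - scan.keys()}
    changed = {
        k: {"was": baseline[k], "now": scan[k]}
        for k in baseline.keys() & scan.keys()
        if baseline[k] != scan[k]
    }
    return {"added": added, "removed": removed, "changed": changed}

# Example: a knowledge source widened its scope and a new tool appeared.
baseline = {"sharepoint:hr": {"scope": "HR site"}, "tool:search": {"enabled": True}}
scan     = {"sharepoint:hr": {"scope": "All sites"}, "tool:jira": {"enabled": True}}
drift = diff_against_baseline(baseline, scan)
```

Storing each approved snapshot alongside its diffs is what makes configuration drift visible over the system's lifecycle.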
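The weighted drift score described above can be pictured as a weighted sum over the four buckets, mapped onto a Low-to-Critical band. A hypothetical sketch; the weights and thresholds are invented for illustration and are not the product's actual scoring model:

```python
# Hypothetical bucket weights -- illustrative only.
WEIGHTS = {
    "input_drift": 0.25,
    "behavior_output_drift": 0.35,
    "tool_grounding_drift": 0.25,
    "config_version_drift": 0.15,
}

def risk_level(bucket_scores: dict) -> tuple[float, str]:
    """Combine per-bucket scores (0-100) into a weighted total,
    then map the total to a risk level (thresholds are made up)."""
    total = sum(WEIGHTS[b] * bucket_scores[b] for b in WEIGHTS)
    if total >= 80:
        return total, "Critical"
    if total >= 60:
        return total, "High"
    if total >= 30:
        return total, "Medium"
    return total, "Low"

score, level = risk_level({
    "input_drift": 40,
    "behavior_output_drift": 70,
    "tool_grounding_drift": 20,
    "config_version_drift": 10,
})
# score 41.0 -> "Medium"
```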

Together, these capabilities give governance, security, and risk teams a defensible record of AI activity and the controls needed to act on it. Prompt Monitoring is available on the Teams and Enterprise plans.
 


AI Monitoring Dashboard

The AI Monitoring Dashboard provides a centralized, cross-system view of all AI systems currently under runtime oversight (e.g., Copilot Studio). It brings together prompt activity, open vulnerabilities, reported incidents, drift indicators, knowledge sources, and connected tools on a single page. This feature is available on the Teams and Enterprise plans.

Summary cards surface total prompts, open vulnerabilities, reported incidents, and the number of monitored systems at a glance. The Monitored AI Systems table breaks the same signals down by system — including max drift score, knowledge source count, connected tools, and last prompt timestamp — so administrators can compare systems side by side and identify where review or intervention is needed.

Filters, search, and a Refresh All Systems control make it easy to triage a large monitored estate, and selecting any system opens its detailed monitoring view for deeper investigation into prompts, drift, and connected knowledge or tools. 
Helpful link - Import a Copilot Studio Agent – RecordPoint


Unapproved AI Alerting

Unapproved AI Alerting strengthens Shadow AI oversight by continuously reconciling detected AI usage against your approved AI Inventory and surfacing context-rich alerts when unregistered AI systems appear in your environment.

Powered by Defender telemetry, every AI domain visit across the organization is matched against the AI Inventory — anything that doesn't match is flagged as unregistered and scored for risk. Administrators can configure the metrics that appear in digest reports from Notification Center → Settings → User Risk Alerts, including new AI sources discovered, new users accessing AI, total AI usage events, high-risk users detected, unapproved AI domains accessed, and most-used AI services.
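Conceptually, this reconciliation is a set-membership check of observed AI domains against the approved inventory. A minimal sketch, where the domains, visit records, and risk heuristic are all made up for illustration:

```python
# Hypothetical approved inventory -- illustrative domains only.
APPROVED_INVENTORY = {"copilot.microsoft.com", "copilotstudio.microsoft.com"}

def reconcile(observed_visits: list[dict]) -> list[dict]:
    """Flag visits to AI domains that are not in the approved inventory.
    The visit shape and risk heuristic are illustrative, not the
    product's actual logic."""
    alerts = []
    for visit in observed_visits:
        if visit["domain"] not in APPROVED_INVENTORY:
            alerts.append({
                "domain": visit["domain"],
                "user": visit["user"],
                "risk": "high" if visit.get("uploaded_data") else "medium",
            })
    return alerts

visits = [
    {"domain": "copilot.microsoft.com", "user": "a@contoso.com"},
    {"domain": "chat.example-ai.com", "user": "b@contoso.com", "uploaded_data": True},
]
alerts = reconcile(visits)  # only the unregistered domain is flagged
```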

Alerts can be delivered as in-app notifications (Immediate, Daily, Weekly, or Monthly) or as scheduled email rollups, and each digest includes a one-click View User Risk action that opens the User Risk dashboard for deeper investigation. Unapproved AI Alerting is available on Teams and Enterprise plans and requires the Defender Shadow AI connector.

 

 
