User Level Risk Overview

User Risk provides visibility into employee AI usage across your organization. Powered by the Microsoft Defender Shadow AI connector, this view helps you identify employees who may be using unregistered or high-frequency AI services and assess potential governance risks.

By cross-referencing Defender activity with your AI Inventory, User Risk highlights where AI tools are being accessed without formal registration, review, or approval — allowing you to take action early.

This functionality is available on paid plans with the Microsoft Defender integration enabled.

Purpose

User Risk enables you to:

  • Identify employees accessing AI services that are not registered in your AI Inventory

  • Detect high-frequency AI usage that may indicate elevated organizational risk

  • Add new employees into RexCommand if they are not yet registered

  • Take proactive action to govern AI usage before risks escalate

This view ensures AI usage is transparent, traceable, and aligned with your governance framework.

How It Works

User Risk is powered by the Microsoft Defender integration.

AI usage events detected by Defender are securely ingested into RexCommand and automatically cross-referenced against:

  • Your AI Inventory

  • Your monitored AI domain list

This comparison determines whether accessed services are registered, approved, or unregistered, and assigns a corresponding user risk level.
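The classification step above can be sketched as a small lookup against the inventory. This is an illustrative Python sketch only, not RexCommand's actual API: the `classify_domain` function, the inventory dictionary shape, and the status strings are assumptions based on the statuses described in this article.

```python
def classify_domain(domain, inventory):
    """Classify a detected AI domain against the AI Inventory.

    domain: an AI domain detected in Defender activity logs.
    inventory: dict mapping registered domain -> status, e.g.
    "Approved", "Pending", or "Possible Match" (illustrative
    structure; not RexCommand's real data model).
    """
    status = inventory.get(domain)
    if status is None or status == "Possible Match":
        # "Possible Match" entries are treated as unregistered
        # until they are formally registered in the AI Inventory.
        return "Unregistered"
    # Otherwise the domain maps to a registered system, and its
    # approval status ("Approved" or "Pending") carries through.
    return status
```

For example, a domain with no inventory entry and a domain marked "Possible Match" both classify as "Unregistered", which is what drives the user risk levels described below.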

Main List View

The main User Risk page displays a list of employees detected through Microsoft Defender AI activity logs.

For each employee, you will see:

  • Risk Level – High, Medium, or Low based on usage patterns

  • Unregistered AI – The number of AI domains accessed that are not formally registered in your AI Inventory

If a detected employee has not yet been added to RexCommand, you will have the option to add them directly from this view.

Risk levels are automatically calculated based on AI usage activity and registration status.

Risk Levels Explained

Risk scoring is based on how frequently AI services are accessed and whether those services are registered in your AI Inventory.

High Risk

  • Any single AI service accessed more than 6 times, or

  • 5 or more unregistered AI domains accessed

Medium Risk

  • 3 to 4 unregistered AI domains accessed

Low Risk

  • 1 to 2 unregistered AI domains accessed

An AI domain is considered unregistered if it does not have an exact match in your AI Inventory. Services marked as “Possible Match” are still treated as unregistered until formally registered.
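The thresholds above can be expressed as a short scoring function. This is a minimal sketch for illustration, assuming per-domain access counts and a set of exactly matched inventory domains; the function name and inputs are hypothetical, not RexCommand's internal implementation.

```python
def risk_level(domain_access_counts, registered_domains):
    """Assign a user risk level from AI usage activity.

    domain_access_counts: dict mapping AI domain -> number of accesses.
    registered_domains: set of domains with an exact match in the
    AI Inventory ("Possible Match" entries are excluded, since they
    count as unregistered until formally registered).
    """
    unregistered = [d for d in domain_access_counts
                    if d not in registered_domains]

    # High: any single AI service accessed more than 6 times,
    # or 5 or more unregistered AI domains accessed.
    if (any(count > 6 for count in domain_access_counts.values())
            or len(unregistered) >= 5):
        return "High"
    # Medium: 3 to 4 unregistered AI domains accessed.
    if len(unregistered) >= 3:
        return "Medium"
    # Low: 1 to 2 unregistered AI domains accessed.
    if len(unregistered) >= 1:
        return "Low"
    return None  # no unregistered AI usage detected
```

For instance, under these rules a user who accessed a single service 7 times is High risk, while a user with two lightly used unregistered domains is Low risk.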

User Detail View

Selecting an employee opens a detailed breakdown of their AI usage.

From this view, you can:

  • See which AI domains were accessed

  • Review how frequently each service was used

  • Identify whether each domain maps to a registered AI system

  • Filter results by Approval Status to focus on approved, pending, or unregistered services

This detailed visibility allows you to investigate patterns, validate business use cases, and initiate governance actions where necessary.

Notes

  • Available on paid plans with Microsoft Defender enabled.

  • User Risk visibility depends on an active Microsoft Defender connection.

  • Only AI services detected through Defender activity will appear in this view.

  • “Possible Match” inventory entries must be formally registered to be considered governed.

  • Use this page to proactively identify shadow AI usage and strengthen organizational AI oversight.
