Risk Triage


The Risk Triage feature lets organizations quickly assess the potential risks of each AI system in their inventory. It helps administrators determine whether further privacy or risk assessments are necessary based on system usage, data sensitivity, and operational impact.

This functionality is available within each AI System record under Risk Assessment → Risk Triage.

Purpose

Risk Triage provides a streamlined way to evaluate key risk factors for AI systems early in the governance process. By completing a short questionnaire, administrators can determine whether an AI system poses elevated privacy, ethical, or operational risks that require additional review.

This ensures that AI systems handling personal data, supporting critical operations, or deployed in high-impact domains are consistently flagged for oversight.

Key Features

Quick Risk Assessment

The Risk Triage form presents a concise set of indicators designed to surface high-level risk characteristics — such as data sensitivity, user exposure, and business criticality.
Users complete a brief questionnaire within the app to determine whether an AI system processes personal or sensitive data, serves external users, or supports functions essential to business continuity.

Based on responses, the system automatically displays contextual guidance such as:

  • Privacy Risk Assessment Necessary

  • Additional Risk Assessment Necessary

These prompts help ensure that higher-risk systems are escalated for more detailed evaluation.
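The escalation logic above can be pictured as a simple rules check over the questionnaire answers. This is an illustrative sketch only; the field names and the mapping from answers to prompts are assumptions, not the product's actual data model:

```python
# Illustrative sketch of how triage answers might map to guidance
# prompts. Field names ("processes_personal_data", etc.) and the
# escalation rules are hypothetical.

def triage_guidance(answers: dict) -> list[str]:
    """Return the contextual prompts implied by questionnaire answers."""
    prompts = []
    # Personal or sensitive data suggests a privacy-focused follow-up.
    if answers.get("processes_personal_data") or answers.get("processes_sensitive_data"):
        prompts.append("Privacy Risk Assessment Necessary")
    # External exposure or business-critical use suggests a broader review.
    if answers.get("serves_external_users") or answers.get("business_critical"):
        prompts.append("Additional Risk Assessment Necessary")
    return prompts

print(triage_guidance({"processes_personal_data": True,
                       "serves_external_users": True}))
```

In this sketch, a single system can trigger both prompts at once, which matches the idea that the two assessments address distinct concerns rather than levels of one scale.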

AI Use Categories

To capture how AI is being applied, Risk Triage includes an AI Use checklist.
Users can select one or more categories that describe the system’s purpose or function — spanning domains such as biometric analysis, employment or eligibility decisions, critical infrastructure, healthcare, law enforcement, or generative media.

The full list includes:

  • Emotion recognition

  • Biometric data processing

  • Employment decisions

  • Customer eligibility or access

  • Credit or insurance underwriting

  • Justice, law enforcement, or immigration use

  • Manipulative or deceptive UX

  • Synthetic media without labels

  • Critical infrastructure control

  • Government automated decisions

Selecting applicable categories helps classify AI systems into regulated or higher-risk domains, supporting consistent governance and compliance alignment.
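One way to think about this classification step is that selecting any listed category places the system in a regulated or higher-risk domain. The category strings below mirror the checklist above, but the classification rule itself is an assumption for illustration, not the product's documented behavior:

```python
# Illustrative sketch: treating any selected AI Use category as a
# signal that the system falls into a regulated or higher-risk domain.
# The decision rule is hypothetical.

AI_USE_CATEGORIES = {
    "Emotion recognition",
    "Biometric data processing",
    "Employment decisions",
    "Customer eligibility or access",
    "Credit or insurance underwriting",
    "Justice, law enforcement, or immigration use",
    "Manipulative or deceptive UX",
    "Synthetic media without labels",
    "Critical infrastructure control",
    "Government automated decisions",
}

def is_higher_risk(selected: set[str]) -> bool:
    """Flag a system for governance review if it touches any listed domain."""
    return bool(selected & AI_USE_CATEGORIES)

print(is_higher_risk({"Biometric data processing"}))
```

A set intersection keeps the rule order-independent: it does not matter which or how many categories are selected, only whether at least one falls in a listed domain.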


Notes

  • Configuration: Administrators can manage which AI use categories appear in the triage checklist under Settings → AI Use Categories. Enabled categories will be available when categorizing AI systems. (All categories are enabled by default.)

  • Results: Once the triage is complete, the system provides immediate visual feedback indicating whether a Privacy Risk Assessment or Additional Risk Assessment is required.

  • Best practice: Complete Risk Triage for each new or updated AI system before approval or deployment to ensure a comprehensive risk baseline across your AI inventory.
