Knowledge & Tools Monitoring provides visibility into the knowledge sources and tools connected to a monitored AI system. It helps organizations understand what an AI system can access, establish an approved baseline, and track changes over time so new or modified connections can be reviewed in context.
This page is intended to support ongoing monitoring after onboarding. Rather than examining prompt content, it focuses on the supporting configuration around the AI system, particularly the external knowledge sources, tools, connectors, and related capabilities that may affect how the system behaves.
Purpose
Knowledge & Tools Monitoring helps organizations:
- discover the knowledge sources connected to a monitored AI system
- identify the tools, connectors, and capabilities available to that system
- establish a baseline for the approved configuration
- compare the current state against a previous baseline
- detect changes over time that may affect how the AI system responds or what it can access
- support governance review where connected sources or capabilities expand beyond the expected scope
This page is especially useful when organizations need to understand not only what an AI system is doing, but also what information and capabilities it has available to do it.
Overview of Page Sections
Knowledge Sources
The Knowledge Sources section shows the content sources attached to the AI system. These may include connected files, sites, tables, or other supported knowledge inputs. Users can run discovery to retrieve the current set of connected knowledge sources and review how they differ from a previously approved state.
Tools and Connectors
The Tools and Connectors section shows the operational capabilities connected to the AI system. This may include connectors, plugin actions, topics, custom GPTs, connection references, or other system capabilities depending on the monitored environment. This gives users visibility into what the AI system can do, not just what it knows.
Discover
The Discover action scans the connected AI system and retrieves the currently attached knowledge sources or tools. When changes are found, the page can show differences such as newly added, removed, or changed items.
This allows users to periodically recheck the configuration and confirm whether the system remains aligned to its approved setup.
Set Baseline
The Set Baseline action snapshots the current configuration as a known reference point. This creates the approved state that future discoveries can be compared against.
Setting a baseline is one of the most important actions on this page. Without a baseline, discovery shows only the current state. With a baseline, users can understand how the system has changed over time.
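Conceptually, setting a baseline captures a timestamped snapshot of the discovered configuration and appends it to a history of reference points. The sketch below is illustrative only; the class and function names are assumptions for explanation, not the product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch (not the product's API): a baseline is a
# timestamped, immutable snapshot of the discovered configuration.
@dataclass(frozen=True)
class Baseline:
    captured_at: str
    knowledge_sources: frozenset
    tools: frozenset

# Saved baselines accumulate here, oldest first, so earlier
# reference points remain available for later comparison.
baseline_history: list[Baseline] = []

def set_baseline(knowledge_sources, tools) -> Baseline:
    """Snapshot the current configuration as the approved reference."""
    baseline = Baseline(
        captured_at=datetime.now(timezone.utc).isoformat(),
        knowledge_sources=frozenset(knowledge_sources),
        tools=frozenset(tools),
    )
    baseline_history.append(baseline)
    return baseline

# Usage: after review, save the accepted configuration.
set_baseline({"policies.pdf", "hr-site"}, {"email-connector"})
```

Keeping every snapshot, rather than overwriting a single approved state, is what makes the Baseline History view possible.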
Compare and Baseline History
The Compare control allows users to compare the current state of knowledge sources or tools against a selected historical baseline. The page can then show which items were added, removed, or remained unchanged. A Baseline History view provides access to previously saved baselines and their timestamps.
This supports change monitoring across the full lifecycle of the AI system rather than limiting review to a single point in time.
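The comparison itself can be thought of as a set difference between the current discovered state and the selected baseline. The following is a minimal sketch of that logic under assumed names; it is not the product's implementation.

```python
# Illustrative sketch: diff the current discovered state against a
# selected baseline, yielding the three buckets shown on the page.
def compare(baseline: set, current: set) -> dict:
    return {
        "added": current - baseline,      # new since the baseline
        "removed": baseline - current,    # approved items now missing
        "unchanged": baseline & current,  # still matching the approved state
    }

# Example: one connector was swapped since the baseline was set.
baseline = {"policies.pdf", "hr-site", "email-connector"}
current = {"policies.pdf", "hr-site", "crm-connector"}

diff = compare(baseline, current)
# diff["added"] == {"crm-connector"}
# diff["removed"] == {"email-connector"}
```

In this example the swap would surface as one added and one removed item, prompting a review of whether the new connector was an approved change.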
Why the baseline matters
Knowledge and tool monitoring is most valuable when it is used as a comparison workflow, not only as a discovery workflow.
After an AI system is imported, the first discovered configuration can be reviewed and saved as a baseline. From that point onward, later scans can be compared against that approved reference. This makes it easier to answer questions such as:
- Has the AI system gained access to a new knowledge source?
- Has a connector or tool been added since the last review?
- Has an approved source been removed or replaced?
- Has the overall capability of the AI system changed over time?
This is important because changes to knowledge and tools can directly affect system behavior, prompt outcomes, and drift risk.
How this supports monitoring
Knowledge & Tools Monitoring complements the other AI monitoring pages:
- Prompt Monitoring shows what users asked and how the AI system responded
- Prompt Monitoring Guardrails defines what should be detected as a violation
- Drift Analysis assesses whether the system is still operating in line with its intended purpose
- Knowledge & Tools Monitoring shows whether the system’s connected sources or capabilities have changed in ways that may explain those outcomes
Taken together, these views provide a more complete picture of how the AI system is operating and whether it remains within its expected scope.
Notes
- A baseline should be set once the current knowledge and tool configuration has been reviewed and accepted as the approved state.
- Baselines should be updated after an approved change, so future comparisons remain meaningful.
- Changes to knowledge sources or tools may influence prompt behavior, drift findings, and governance review outcomes.
- This page is intended to support change visibility and configuration oversight, not just one-time discovery.