Data Protection GuardRail
Data Protection is WitnessAI’s GuardRail for data leakage prevention, anonymization, and data control. Its purpose is to safeguard confidential company information from being transmitted to AI models via prompts.
This GuardRail also ensures sensitive data is protected by tokenizing specific data types (e.g., US Social Security Numbers) before they are sent to the model, and then reconstituting them in any responses returned to the user.
This allows a transparent experience for the user, while preventing sensitive data from being sent to any AI Models or Applications that should not receive it.
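Conceptually, the flow resembles the Python sketch below. This is a simplified illustration only, not WitnessAI’s implementation; the SSN pattern, token format, and function names are assumptions chosen for clarity.

```python
# Simplified sketch of tokenize-and-reconstitute (illustration only, not
# WitnessAI's implementation). SSN-shaped strings are replaced with opaque
# tokens before the prompt leaves the user, the mapping is kept locally,
# and the originals are restored in the model's response.
import re
import uuid

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def tokenize(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace each SSN with a unique placeholder and remember the mapping."""
    mapping: dict[str, str] = {}

    def _replace(match: re.Match) -> str:
        token = f"<SSN:{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token

    return SSN_PATTERN.sub(_replace, prompt), mapping

def reconstitute(response: str, mapping: dict[str, str]) -> str:
    """Swap placeholders back to the original values in the model's reply."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

# Example round trip
safe_prompt, mapping = tokenize("Verify employee 123-45-6789 for onboarding.")
print(safe_prompt)                         # SSN replaced with an opaque token
print(reconstitute(safe_prompt, mapping))  # original value restored
```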
When the GuardRail detects potentially sensitive or protected data, it gives administrators the option to Allow, Warn about, or Block the activity, or Route it to a different AI Model, each with a customizable message.
WitnessAI Policies leverage the Data Protection GuardRail to prevent unauthorized data exposure, enforce compliance, and manage data securely within AI interactions.
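As a further illustration, the sketch below shows how a configured action (Allow, Warn, Block, or Route) might translate into the outcome the user experiences. The class, field, and function names are hypothetical and do not represent WitnessAI’s API.

```python
# Hypothetical sketch of applying a configured GuardRail action; the names and
# structures here are illustrative only, not WitnessAI's API.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # "allow" | "warn" | "block" | "route"
    message: str = ""    # customizable message shown to the user
    route_to: str = ""   # alternate AI model used by the "route" action

def apply_decision(decision: Decision) -> dict:
    """Turn a policy decision into the outcome the user sees."""
    if decision.action == "allow":
        return {"send": True, "destination": "original", "notice": None}
    if decision.action == "warn":
        # The prompt still goes through, but the user sees the warning message.
        return {"send": True, "destination": "original", "notice": decision.message}
    if decision.action == "block":
        return {"send": False, "destination": None, "notice": decision.message}
    if decision.action == "route":
        # The prompt is redirected to a model approved for sensitive data.
        return {"send": True, "destination": decision.route_to, "notice": decision.message}
    raise ValueError(f"Unknown action: {decision.action}")

# Example: a Block decision with a custom message
print(apply_decision(Decision(action="block", message="This prompt contains protected data.")))
```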
Use Cases
Anonymizing Sensitive Data
Automatically tokenize and reconstitute sensitive information (e.g., SSNs, employee IDs) during AI interactions to ensure secure handling.
Preventing Data Leakage
Warn or block users from sending prompts containing company-sensitive or confidential information to external AI models.
Routing Sensitive Prompts
Redirect prompts involving protected data to an internal model designed for secure processing.
Using Data Protection Step-by-Step
Adding a Data Protection GuardRail
Configure the Data Protection GuardRail
1. Click the GuardRails tab in the policy editor.
2. Select Data Protection GuardRail from the list of available GuardRails.
3. Enable the GuardRail for the policy.
Define GuardRail Actions
- Choose whether to Anonymize sensitive data before it is sent to the AI Model.
- Specify the Action to take when sensitive data is detected:
- Warn: Alert the user with a customizable message about the potential risk of data exposure.
- Block: Prevent the prompt from being sent to the model and notify the user with a blocking message.
- Route: Redirect the user’s prompt to a specific model or system for handling sensitive data securely.
- Customize the associated message to provide relevant guidance or warnings.
- Example Message: “Be careful what information you submit to external AI Models and Applications. Personally identifiable information and Proprietary data will be blocked to external AI destinations.”
- Add more Actions if desired.
- When finished, click Save (an example configuration sketch follows this list).
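For reference, the settings above can be thought of as structured data along the lines of the sketch below. The keys and values are illustrative assumptions, not WitnessAI’s actual configuration schema.

```python
# Illustrative summary of a Data Protection GuardRail configuration; the keys
# and values are assumptions, not WitnessAI's actual schema.
data_protection_guardrail = {
    "enabled": True,
    "anonymize": True,                       # tokenize sensitive data before sending
    "detections": ["us_ssn", "employee_id"], # example sensitive data types
    "actions": [
        {
            "action": "block",
            "message": (
                "Be careful what information you submit to external AI Models "
                "and Applications. Personally identifiable information and "
                "Proprietary data will be blocked to external AI destinations."
            ),
        },
    ],
}
```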
Test and Save the Policy
Test the policy configuration in a controlled environment to ensure it works as expected.
Once verified, save the policy to activate the Data Protection GuardRail for the assigned User Groups.
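One lightweight way to validate the configuration before rollout is a scripted check along the lines of the sketch below. The detection logic shown is a stand-in regular expression, not WitnessAI’s detector; it only illustrates the kind of positive and negative test cases worth covering.

```python
# Hypothetical pre-rollout check: confirm that a prompt containing a fake SSN
# is flagged while a clean prompt is not. The detector here is a stand-in,
# not WitnessAI's detection engine.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_sensitive_data(prompt: str) -> bool:
    """Stand-in detector: flag prompts containing an SSN-shaped string."""
    return bool(SSN_PATTERN.search(prompt))

test_cases = [
    ("My SSN is 123-45-6789, can you check my benefits?", True),
    ("Summarize the Q3 product roadmap.", False),
]

for prompt, expected in test_cases:
    flagged = contains_sensitive_data(prompt)
    status = "PASS" if flagged == expected else "FAIL"
    print(f"{status}: flagged={flagged} expected={expected} :: {prompt}")
```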
Best Practices
- User Education: Use warning messages to educate users about data security policies.
- Testing: Validate the configuration in test environments before rolling out policies to all users.
- Policy Documentation: Maintain detailed records of policy configurations for compliance audits.