Security Guardrail
A collection of data identifiers that protect against prompt injection attacks, ensuring that prompts are safe for the AI model and will not cause the model to perform inappropriate activities such as deleting critical data, exposing sensitive data, or granting unauthorized access to resources in your environment.