{"pageModel":{"attributes":{"id":"","name":"105476.dita","viewName":"DitaDetail"},"elements":{"ditaContent":{"name":"DITAContent","value":"<article id=\"monitor-ai-threats-and-events\" class=\"topic concept\">\r\n<h1 class=\"title topictitle1\">Monitor AI Threats and Events</h1>\r\n<div class=\"body conbody\">\r\n<p class=\"p\">The AI Defense  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/169233.dita\" title=\"View the log of AI runtime events showing violations of your AI safety and security policies\">Events page</a> and API responses from the AI Defense Inspection API and Inspection SDK provide runtime evaluation results for user prompts and LLM responses in your AI Defense environment. We refer to this monitoring and evaluation service as, \"AI Defense runtime.\"</p>\r\n<p class=\"p\">The  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/169233.dita\" title=\"View the log of AI runtime events showing violations of your AI safety and security policies\">Events page</a> of the AI Defense UI provides detailed event logs capturing AI-related activities such as detected prompts, responses, and  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/105473.dita\" title=\"\">policy rule matches</a>. Advanced filtering options allow you to filter the Events view by time period, application, or event type, enabling targeted monitoring and efficient analysis of AI events.</p>\r\n<section class=\"section\">\r\n<h2 class=\"title sectiontitle\">Runtime enforcement points</h2>\r\n<p class=\"p\">Before you begin monitoring AI threats and events, it's important to understand where your rules and policies are being enforced. It is these enforcement points that generate the events and/or API responses detailing each violation. 
AI runtime protection supports the following types of enforcement points:</p>\r\n<ul class=\"ul\">\r\n<li class=\"li\">\r\n<p class=\"p\">The  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/105487.dita\" title=\"\">AI Defense Gateway</a> is a cloud-based gateway that intercepts prompt and response traffic on the connection. Runtime protection evaluates content based on the policy you applied to the connection. Evaluation results appear in the  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/169233.dita\" title=\"View the log of AI runtime events showing violations of your AI safety and security policies\">Events page</a>.</p>\r\n</li>\r\n<li class=\"li\">\r\n<p class=\"p\"> <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/105488.dita\" title=\"\">Multicloud Defense monitoring</a> intercepts prompt and response traffic in the VPC. Runtime protection evaluates content based on the Multicloud Defense AI Guardrails profile you applied to the VPC. Evaluation results appear in the  <a data-scope=\"external\" target=\"_blank\" href=\"https://securitydocs.cisco.com/docs/mcd/user/97501.dita\" title=\"\">AI Guardrails Logs</a> of Multicloud Defense.</p>\r\n</li>\r\n<li class=\"li\">\r\n<p class=\"p\"> <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/105486.dita\" title=\"\">API-enforced runtime protection</a> is performed, as its name suggests, when you invoke the API endpoint. You send prompt and/or response data to the  <a data-scope=\"external\" target=\"_blank\" href=\"https://developer.cisco.com/docs/ai-defense/\" title=\"\">AI Defense Inspection API endpoint</a> or the  <a data-scope=\"external\" target=\"_blank\" href=\"https://github.com/cisco-ai-defense/ai-defense-python-sdk\" title=\"\">Inspection SDK</a>, and the API returns a response with its evaluation. 
An event is also logged.</p>\r\n</li>\r\n</ul>\r\n</section>\r\n<section id=\"aidef-rt-invoke-api\" class=\"section\">\r\n<h2 class=\"title sectiontitle\">Invoke AI Runtime evaluation via the API</h2>\r\n<div class=\"sectiondiv\">\r\n<strong class=\"ph b\">How API-enforced evaluation works</strong>\r\n<p class=\"p\">You can evaluate prompts and responses by calling the  <a data-scope=\"external\" target=\"_blank\" href=\"https://developer.cisco.com/docs/ai-defense-inspection/introduction/\" title=\"\">AI Defense Inspection API endpoint</a> or the  <a data-scope=\"external\" target=\"_blank\" href=\"https://github.com/cisco-ai-defense/ai-defense-python-sdk\" title=\"\">Inspection SDK</a>. The typical API-enforced runtime protection usage pattern is as follows:</p>\r\n<ol class=\"ol\">\r\n<li class=\"li\">\r\n<p class=\"p\">Your application calls the AI Defense Inspection API or Python SDK ( <a data-scope=\"external\" target=\"_blank\" href=\"https://developer.cisco.com/docs/ai-defense-inspection/inspect-conversations/\" title=\"\">POST /api/v1/inspect/chat</a> in the REST API or <code class=\"ph codeph\">inspect</code> or <code class=\"ph codeph\">inspect_prompt</code> in the Python SDK). Your API call supplies the user prompt and/or model response that you want to evaluate.</p>\r\n</li>\r\n<li class=\"li\">\r\n<p class=\"p\">AI Defense runtime makes its evaluation based on the policy you've applied to the connection, or based on the rules you specified in the API request.</p>\r\n<ul class=\"ul\">\r\n<li class=\"li\">\r\n<p class=\"p\">If the connection has a policy attached to it, then the <code class=\"ph codeph\">enabled_rules</code> parameter is ignored. 
Any violations of the policy will be reported in the API response and will generate an event in the  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/169233.dita\" title=\"View the log of AI runtime events showing violations of your AI safety and security policies\">Event log</a>.</p>\r\n</li>\r\n<li class=\"li\">\r\n<p class=\"p\">If the connection has no policy attached to it, then you can evaluate content's compliance with <strong class=\"ph b\">specific rules</strong>. To do this, specify the rules in the <code class=\"ph codeph\">enabled_rules</code> parameter of the <code class=\"ph codeph\">inspect/chat</code> API call. Violations will be reported in the API response only. <strong class=\"ph b\">Important:</strong> No event will be generated in the event log!</p>\r\n</li>\r\n</ul>\r\n</li>\r\n<li class=\"li\">\r\n<p class=\"p\">Based on AI Defense runtime's evaluation results, your application can return your model's response as-is, block the response, or modify it. It's important to note that in API-enforced usage, AI Defense runtime does not block prompts or responses; that responsibility is left to your application.</p>\r\n</li>\r\n<li class=\"li\">\r\n<p class=\"p\">If you evaluated the content against a policy and a violation was detected, the violation is recorded in the Event log. 
If you evaluated rules only, then the results are provided only in the return payload.</p>\r\n</li>\r\n</ol>\r\n</div>\r\n<div class=\"sectiondiv\">\r\n<strong class=\"ph b\">Prerequisite</strong>\r\n<p class=\"p\">Set up AI Defense runtime as explained in  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/105486.dita\" title=\"\">Set up Runtime for the Inspection API</a>.</p>\r\n</div>\r\n<div class=\"sectiondiv\">\r\n<strong class=\"ph b\">Procedure</strong>\r\n<p class=\"p\">To test prompts and responses against your rule(s) or policy, call the AI Defense Inspection API endpoint as follows:</p>\r\n<ul class=\"ul\">\r\n<li class=\"li\">\r\n<p class=\"p\">Use the API key generated in the  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/105486.dita\" title=\"\">Set up Runtime for the Inspection API</a> steps earlier.</p>\r\n</li>\r\n<li class=\"li\">\r\n<p class=\"p\">Call the AI Defense Inspection API, supplying the prompt and/or response you want to inspect. Use one of the following approaches:</p>\r\n<ul class=\"ul\">\r\n<li class=\"li\">\r\n<p class=\"p\">call the  <a data-scope=\"external\" target=\"_blank\" href=\"https://developer.cisco.com/docs/ai-defense-inspection/introduction/\" title=\"\">AI Defense Inspection API</a> via REST, or</p>\r\n</li>\r\n<li class=\"li\">\r\n<p class=\"p\">use the  <a data-scope=\"external\" target=\"_blank\" href=\"https://github.com/cisco-ai-defense/ai-defense-python-sdk\" title=\"\">AI Defense Python SDK</a>.</p>\r\n</li>\r\n</ul>\r\n</li>\r\n</ul>\r\n<p class=\"p\">Your Inspection API endpoint address will be similar to the following. The example URL below includes the <code class=\"ph codeph\">us.</code> subdomain that specifies the <code class=\"ph codeph\">us-west-2</code> AWS region. Replace this subdomain with the code for your region. 
For example:</p>\r\n<pre class=\"pre codeblock\">\r\n<code>https://us.api.inspect.aidefense.security.cisco.com/api/v1/inspect/chat</code>\r\n</pre>\r\n<p class=\"p\">The regional URLs are listed in the \"Base URLs for the Inspection API\" section of the  <a data-scope=\"local\" target=\"\" href=\"docs/ai-def/user/122511.dita\" title=\"Create API keys to connect to the AI Defense Management API\">API Keys and URLs</a> page.</p>\r\n<table class=\"olh_note\" border=\"0\" role=\"note\">\r\n<tbody>\r\n<tr>\r\n<td width=\"5%\" class=\"olh_note\" role=\"heading\" border=\"0\" valign=\"top\">\r\n<img src=\"https://www.cisco.com/c/dam/en/us/td/i/esp/icons/icon-notes.svg\">\r\n<br> </td>\r\n<td border=\"0\" class=\"olh_note\">\r\n<div class=\"note__content\">\r\n<p class=\"p\">\r\n<strong class=\"ph b\">Important!</strong> When you use the API to check compliance with a <strong class=\"ph b\">policy</strong>, violations are reported both as Events in the Event log and as return values in the API response body. In contrast, when you use the API to check compliance with a <strong class=\"ph b\">rule</strong> or rules, violations are returned only in the API response body.</p>\r\n</div>\r\n</td>\r\n</tr>\r\n</tbody>\r\n</table>\r\n</div>\r\n</section>\r\n</div>\r\n</article>\r\n","ditaVal":"","format":"html"},"bookTitle":{"value":""},"shortDescription":{"value":""}}},"parameters":{"appId":"SccAiDefense","topicAlias":"AIEvents"}}