More about Risk Categories Provided by Cisco Foundation AI
- Prohibited Suppliers—AI models developed or authored by prohibited suppliers may present risks related to national security, data privacy, and influence operations. Depending on the development and deployment context, models in this category may be subject to regulatory oversight, data access laws, or government influence that raises concerns about embedded backdoors, surveillance capabilities, or biased outputs shaped by geopolitical agendas.
- Copyleft License—AI models released under copyleft licenses can introduce legal and operational risks due to restrictive licensing terms. Copyleft licenses may require that derivative works, deployments that depend on the model, or even indirect uses be released under the same license. This may present risks related to compliance with the license's disclosure requirements, legal exposure from violating its terms, and constraints on how the model can be integrated, scaled, or monetized.
- Code Execution—AI models capable of executing arbitrary code create security and reliability risks. If a model can generate and execute code dynamically, it may inadvertently or maliciously run harmful operations, access sensitive data, or interact with systems in unintended ways. A model's ability to execute arbitrary code bridges the gap between text generation and real-world impact, making it possible for a model to cause damage ranging from data breaches to system compromise.
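The code-execution risk above can be made concrete with a minimal sketch. The snippet below simulates a naive guardrail: a static AST scan that rejects model-generated code containing calls to known-dangerous builtins before it is ever executed. The names `DANGEROUS_CALLS` and `is_safe` are illustrative assumptions, not part of any Cisco product, and a static deny-list like this is easily bypassed; it does not replace process-level sandboxing (containers, seccomp, or disabling execution entirely).

```python
import ast

# Illustrative deny-list of builtins a toy policy refuses to execute.
DANGEROUS_CALLS = {"exec", "eval", "open", "__import__", "compile"}

def is_safe(source: str) -> bool:
    """Reject code whose AST contains dangerous calls or any import.

    Sketch only: deny-lists are trivially bypassed (e.g. via getattr
    tricks), so real deployments need OS-level sandboxing as well.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both plain names (open(...)) and attributes (os.system(...)).
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in DANGEROUS_CALLS:
                return False
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False  # this toy policy blocks all imports outright
    return True

# Simulated model outputs: one benign computation, one file-system probe.
benign = "total = sum(range(10))"
malicious = "open('/etc/passwd').read()"
print(is_safe(benign))     # True
print(is_safe(malicious))  # False
```

The point of the sketch is the gap it exposes: the check runs on text, but the damage happens at execution time, which is exactly why arbitrary code execution "bridges the gap between text generation and real-world impact."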