CWE

Common Weakness Enumeration

A community-developed list of SW & HW weaknesses that can become vulnerabilities

New to CWE? click here!
CWE Most Important Hardware Weaknesses
CWE Top 25 Most Dangerous Weaknesses
Home > CWE List > CWE-1426: Improper Validation of Generative AI Output (4.16)  
ID

CWE-1426: Improper Validation of Generative AI Output

Weakness ID: 1426
Vulnerability Mapping: DISCOURAGED This CWE ID should not be used to map to real-world vulnerabilities
Abstraction: Base Base - a weakness that is still mostly independent of a resource or technology, but with sufficient details to provide specific methods for detection and prevention. Base level weaknesses typically describe issues in terms of 2 or 3 of the following dimensions: behavior, property, technology, language, and resource.
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers. For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts. For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers. For users who wish to see all available information for the CWE/CAPEC entry. For users who want to customize what details are displayed.
×

Edit Custom Filter


+ Description
The product invokes a generative AI/ML component whose behaviors and outputs cannot be directly controlled, but the product does not validate or insufficiently validates the outputs to ensure that they align with the intended security, content, or privacy policy.
+ Common Consequences
Section HelpThis table specifies different individual consequences associated with the weakness. The Scope identifies the application security area that is violated, while the Impact describes the negative technical impact that arises if an adversary succeeds in exploiting this weakness. The Likelihood provides information about how likely the specific consequence is expected to be seen relative to the other consequences in the list. For example, there may be high likelihood that a weakness will be exploited to achieve a certain impact, but a low likelihood that it will be exploited to achieve a different impact.
Scope Impact Likelihood
Integrity

Technical Impact: Execute Unauthorized Code or Commands; Varies by Context

In an agent-oriented setting, output could be used to cause unpredictable agent invocation, i.e., to control or influence agents that might be invoked from the output. The impact varies depending on the access that is granted to the tools, such as creating a database or writing files.

+ Potential Mitigations

Phase: Architecture and Design

Since the output from a generative AI component (such as an LLM) cannot be trusted, ensure that it operates in an untrusted or non-privileged space.

Phase: Operation

Use "semantic comparators," which are mechanisms that provide semantic comparison to identify objects that might appear different but are semantically similar.

Phase: Operation

Use components that operate externally to the system to monitor the output and act as a moderator. These components are called different terms, such as supervisors or guardrails.

Phase: Build and Compilation

During model training, use an appropriate variety of good and bad examples to guide preferred outputs.

+ Relationships
Section Help This table shows the weaknesses and high level categories that are related to this weakness. These relationships are defined as ChildOf, ParentOf, MemberOf and give insight to similar items that may exist at higher and lower levels of abstraction. In addition, relationships such as PeerOf and CanAlsoBe are defined to show similar weaknesses that the user may want to explore.
+ Relevant to the view "Research Concepts" (CWE-1000)
Nature Type ID Name
ChildOf Pillar Pillar - a weakness that is the most abstract type of weakness and represents a theme for all class/base/variant weaknesses related to it. A Pillar is different from a Category as a Pillar is still technically a type of weakness that describes a mistake, while a Category represents a common characteristic used to group related things. 707 Improper Neutralization
+ Modes Of Introduction
Section HelpThe different Modes of Introduction provide information about how and when this weakness may be introduced. The Phase identifies a point in the life cycle at which introduction may occur, while the Note provides a typical scenario related to introduction during the given phase.
Phase Note
Architecture and Design

Developers may rely heavily on protection mechanisms such as input filtering and model alignment, assuming they are more effective than they actually are.

Implementation

Developers may rely heavily on protection mechanisms such as input filtering and model alignment, assuming they are more effective than they actually are.

+ Applicable Platforms
Section HelpThis listing shows possible areas for which the given weakness could appear. These may be for specific named Languages, Operating Systems, Architectures, Paradigms, Technologies, or a class of such platforms. The platform is listed along with how frequently the given weakness appears for that instance.

Languages

Class: Not Language-Specific (Undetermined Prevalence)

Architectures

Class: Not Architecture-Specific (Undetermined Prevalence)

Technologies

AI/ML (Undetermined Prevalence)

Class: Not Technology-Specific (Undetermined Prevalence)

+ Observed Examples
Reference Description
chain: GUI for ChatGPT API performs input validation but does not properly "sanitize" or validate model output data (CWE-1426), leading to XSS (CWE-79).
+ Detection Methods

Dynamic Analysis with Manual Results Interpretation

Use known techniques for prompt injection and other attacks, and adjust the attacks to be more specific to the model or system.

Dynamic Analysis with Automated Results Interpretation

Use known techniques for prompt injection and other attacks, and adjust the attacks to be more specific to the model or system.

Architecture or Design Review

Review of the product design can be effective, but it works best in conjunction with dynamic analysis.
+ Memberships
Section HelpThis MemberOf Relationships table shows additional CWE Categories and Views that reference this weakness as a member. This information is often useful in understanding where a weakness fits within the context of external information sources.
Nature Type ID Name
MemberOf CategoryCategory - a CWE entry that contains a set of other entries that share a common characteristic. 1409 Comprehensive Categorization: Injection
+ Vulnerability Mapping Notes

Usage: DISCOURAGED

(this CWE ID should not be used to map to real-world vulnerabilities)

Reasons: Potential Major Changes, Frequent Misinterpretation

Rationale:

There is potential for this CWE entry to be modified in the future for further clarification as the research community continues to better understand weaknesses in this domain.

Comments:

This CWE entry is only related to "validation" of output and might be used mistakenly for other kinds of output-related weaknesses. Careful attention should be paid to whether this CWE should be used for vulnerabilities related to "prompt injection," which is an attack that works against many different weaknesses. See Maintenance Notes and Research Gaps. Analysts should closely investigate the root cause to ensure it is not ultimately due to other well-known weaknesses. The following suggestions are not comprehensive.

Suggestions:
CWE-ID Comment
CWE-77 Command Injection. Use this CWE for most cases of 'prompt injection' attacks in which additional prompts are added to input to, or output from, the model. If OS command injection, consider CWE-78.
CWE-94 Code Injection. Use this CWE for cases in which output from genAI components is directly fed into components that parse and execute code.
CWE-116 Improper Encoding or Escaping of Output. Use this CWE when the product is expected to encode or escape genAI outputs.
+ Notes

Research Gap

This entry is related to AI/ML, which is not well understood from a weakness perspective. Typically, for new/emerging technologies including AI/ML, early vulnerability discovery and research does not focus on root cause analysis (i.e., weakness identification). For AI/ML, the recent focus has been on attacks and exploitation methods, technical impacts, and mitigations. As a result, closer research or focused efforts by SMEs is necessary to understand the underlying weaknesses. Diverse and dynamic terminology and rapidly-evolving technology further complicate understanding. Finally, there might not be enough real-world examples with sufficient details from which weakness patterns may be discovered. For example, many real-world vulnerabilities related to "prompt injection" appear to be related to typical injection-style attacks in which the only difference is that the "input" to the vulnerable component comes from model output instead of direct adversary input, similar to "second-order SQL injection" attacks.

Maintenance

This entry was created by members of the CWE AI Working Group during June and July 2024. The CWE Project Lead, CWE Technical Lead, AI WG co-chairs, and many WG members decided that for purposes of timeliness, it would be more helpful to the CWE community to publish the new entry in CWE 4.15 quickly and add to it in subsequent versions.
+ References
[REF-1441] OWASP. "LLM02: Insecure Output Handling". 2024-03-21. <https://genai.owasp.org/llmrisk/llm02-insecure-output-handling/>. URL validated: 2024-07-11.
[REF-1442] Cohere and Guardrails AI. "Validating Outputs". 2023-09-13. <https://cohere.com/blog/validating-llm-outputs>. URL validated: 2024-07-11.
[REF-1443] Traian Rebedea, Razvan Dinu, Makesh Sreedhar, Christopher Parisien and Jonathan Cohen. "NeMo Guardrails: A Toolkit for Controllable and Safe LLM Applications with Programmable Rails". 2023-12. <https://aclanthology.org/2023.emnlp-demo.40/>. URL validated: 2024-07-11.
[REF-1444] Snyk. "Insecure output handling in LLMs". <https://learn.snyk.io/lesson/insecure-input-handling/>. URL validated: 2024-07-11.
[REF-1445] Yi Dong, Ronghui Mu, Gaojie Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan and Xiaowei Huang. "Building Guardrails for Large Language Models". 2024-05-29. <https://arxiv.org/pdf/2402.01822>. URL validated: 2024-07-11.
+ Content History
+ Submissions
Submission Date Submitter Organization
2024-07-02
(CWE 4.15, 2024-07-16)
Members of the CWE AI WG CWE Artificial Intelligence (AI) Working Group (WG)
Page Last Updated: November 19, 2024