CWE-1427: Improper Neutralization of Input Used for LLM Prompting
CWE版本: 4.18
更新日期: 2025-09-09
弱点描述
The product uses externally-provided data to build prompts provided to large language models (LLMs), but the way these prompts are constructed causes the LLM to fail to distinguish between user-supplied inputs and developer provided system directives.
常见后果
影响范围: Confidentiality Integrity Availability
技术影响: Execute Unauthorized Code or Commands Varies by Context
影响范围: Confidentiality
技术影响: Read Application Data
影响范围: Integrity
技术影响: Modify Application Data Execute Unauthorized Code or Commands
影响范围: Access Control
技术影响: Read Application Data Modify Application Data Gain Privileges or Assume Identity
潜在缓解措施
阶段: Architecture and Design
有效性: High
阶段: Implementation
有效性: Moderate
阶段: Architecture and Design
有效性: High
阶段: Implementation
阶段: Installation Operation
阶段: System Configuration
检测方法
方法: Dynamic Analysis with Manual Results Interpretation
方法: Dynamic Analysis with Automated Results Interpretation
方法: Architecture or Design Review
观察示例
参考: CVE-2023-32786
Chain: LLM integration framework has prompt injection (CWE-1427) that allows an attacker to force the service to retrieve data from an arbitrary URL, essentially providing SSRF (CWE-918) and potentially injecting content into downstream tasks.
参考: CVE-2024-5184
ML-based email analysis product uses an API service that allows a malicious user to inject a direct prompt and take over the service logic, forcing it to leak the standard hard-coded system prompts and/or execute unwanted prompts to leak sensitive data.
参考: CVE-2024-5565
Chain: library for generating SQL via LLMs using RAG uses a prompt function to present the user with visualized results, allowing altering of the prompt using prompt injection (CWE-1427) to run arbitrary Python code (CWE-94) instead of the intended visualization code.
引入模式
| 阶段 | 说明 |
|---|---|
| Architecture and Design | - |
| Implementation | - |
| Implementation | - |
| System Configuration | - |
| Integration | - |
| Bundling | - |