python: Add query for prompt injection

This pull request introduces a new CodeQL query for detecting prompt injection vulnerabilities in Python code that builds prompts for LLM APIs such as `agents` and `openai`. The changes include a new experimental query, new taint flow and type models, a customizable dataflow configuration, documentation, and test coverage.
yoff
2026-01-29 23:47:52 +01:00
parent 34800d1519
commit e7a0fc7140
17 changed files with 519 additions and 1 deletions
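For context before the diffs: a minimal, hypothetical sketch of the pattern such a query is meant to flag, with untrusted web input ending up in the instructions of an LLM call. The Flask route, parameter names, and model name below are illustrative assumptions, not code from this PR or its tests.

```python
from flask import Flask, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@app.route("/new-assistant")
def new_assistant():
    # Taint source: remote user input.
    persona = request.args.get("persona", "")
    # Sink: the user-controlled value becomes part of the assistant's
    # instructions, so a crafted value ("Ignore previous instructions and ...")
    # can override the intended system prompt.
    assistant = client.beta.assistants.create(
        model="gpt-4o",
        instructions=f"You are a {persona} support assistant.",
    )
    return assistant.id
```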


@@ -0,0 +1,5 @@
---
category: minorAnalysis
---
* Added experimental query `py/prompt-injection` to detect potential prompt injection vulnerabilities in code using LLMs.
* Added taint flow and type models for the `agents` and `openai` modules.


@@ -0,0 +1,6 @@
extensions:
  - addsTo:
      pack: codeql/python-all
      extensible: sinkModel
    data:
      - ['agents', 'Member[Agent].Argument[instructions:]', 'prompt-injection']
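Read the row as: the `instructions` keyword argument of `agents.Agent` is a prompt-injection sink ('agents' names the package, `Member[Agent]` the class, and `Argument[instructions:]` the keyword argument). Below is a sketch of code matching that access path, assuming the `Agent` constructor of the OpenAI Agents SDK; the function and variable names are made up.

```python
from agents import Agent  # OpenAI Agents SDK

def build_agent(user_supplied_persona: str) -> Agent:
    # Matches 'Member[Agent].Argument[instructions:]': the `instructions`
    # keyword argument of agents.Agent. If user_supplied_persona carries
    # tainted data, the query should report a prompt-injection flow here.
    return Agent(
        name="helper",
        instructions=f"You are {user_supplied_persona}. Keep answers short.",
    )
```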


@@ -0,0 +1,12 @@
extensions:
  - addsTo:
      pack: codeql/python-all
      extensible: sinkModel
    data:
      - ['OpenAI', 'Member[beta].Member[assistants].Member[create].Argument[instructions:]', 'prompt-injection']
  - addsTo:
      pack: codeql/python-all
      extensible: typeModel
    data:
      - ['OpenAI', 'openai', 'Member[OpenAI,AsyncOpenAI,AzureOpenAI].ReturnValue']
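Here the sink row marks the `instructions` argument of `client.beta.assistants.create`, and the typeModel row tracks the return values of `OpenAI`, `AsyncOpenAI`, and `AzureOpenAI` from the `openai` package as the same `OpenAI` type, so the sink is recognized no matter which constructor produced the client. A sketch under those assumptions (function and variable names are hypothetical):

```python
from openai import AsyncOpenAI

async def create_reviewer(guidelines: str) -> str:
    # Thanks to the typeModel row, `client` is recognized as an OpenAI client
    # even though it was built with AsyncOpenAI rather than OpenAI.
    client = AsyncOpenAI()
    assistant = await client.beta.assistants.create(
        model="gpt-4o",
        # Same sink as in the earlier sketch: if `guidelines` is
        # attacker-influenced, this call should be flagged.
        instructions=f"Review code according to these rules: {guidelines}",
    )
    return assistant.id
```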