On `keras-team/keras`, this was producing ~200 million intermediate
tuples in order to produce a total of ... 2 tuples.
After the refactor, the maximum intermediate tuple count is ~80k for the
charpred (and 4 for the new helper predicate).
These were causing the repo `gufolabs/noc` to spend ~30 seconds
evaluating `ControlFlowNode.strictlyDominates`. Just in case, I added
`overlay[caller]` to the other instances of `pragma[inline]` as well.
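For reference, a minimal sketch of how the two annotations stack on a predicate; the helper predicate below is hypothetical and exists only to show the shape, it is not code from the library:

```ql
import python

// Hypothetical helper predicate, shown only to illustrate combining the
// `overlay[caller]` and `pragma[inline]` annotations as described above.
overlay[caller]
pragma[inline]
predicate strictlyDominatesInlined(ControlFlowNode dom, ControlFlowNode node) {
  dom.strictlyDominates(node)
}
```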
Uses the same trick as for `ExtractedArgumentNode`: instead of restricting
the charpred globally, we move the restriction into the `argumentOf`
predicate (which is global anyway).
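Schematically, the trick looks something like the sketch below. The class and predicate bodies are simplified placeholders, and the exact signature of `getCallArg` is assumed, so this illustrates the shape rather than the actual library code:

```ql
class ExtractedArgumentNode extends Node {
  ExtractedArgumentNode() {
    // Charpred stays local: overapproximate by taking any CFG node that sits
    // in an argument position of some call, without resolving the call.
    this.asCfgNode() = any(CallNode call).getArg(_)
  }

  predicate argumentOf(DataFlowCall call, ArgumentPosition pos) {
    // The global restriction lives here instead; this predicate is evaluated
    // globally anyway, so nothing is lost by doing the filtering here.
    getCallArg(call, _, _, this, pos)
  }
}
```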
In addition, we converted `CapturedVariablesArgumentNode`
into a proper synthetic node, and added an explicit post-update node for
it. These nodes just act as wrappers for the function part of call
nodes. Thus, to make them work with the variable capture machinery, we
simply map them to the closure node for the corresponding control-flow
or post-update node.
Fixes the test failures that arose from making `ExtractedArgumentNode`
local.
For the consistency checks, we now explicitly exclude the
`ExtractedArgumentNode`s (now much more plentiful due to the
overapproximation) that don't have a corresponding `getCallArg` tuple.
For various queries/tests using `instanceof ArgumentNode`, we instead use
`isArgumentNode`, which explicitly filters out the ones for which
`isArgumentOf` doesn't hold (which, again, is the case for most of the
nodes in the overapproximation).
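The helper is essentially a thin filter over the overapproximated set; a sketch of its intended shape (the actual definition and arity in the library may differ):

```ql
// Sketch only: keep just the nodes that actually participate in a resolved
// call, i.e. those for which `isArgumentOf` holds.
predicate isArgumentNode(ArgumentNode arg) { arg.isArgumentOf(_, _) }
```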
Explicitly adds a bunch of nodes that were previously (using a global
analysis) identified as `ExtractedArgumentNode`s. These are then filtered
out in `argumentOf` (which is global) by putting the call to `getCallArg`
there instead of in the charpred.
With `ModuleVariableNode`s now appearing for _all_ global variables (not
just the ones that actually seem to be used), some of the tests changed
a bit. Mostly this was in the form of new flow (because of new nodes
that popped into existence). For some inline expectation tests, I opted
to instead exclude these results, as there was no suitable location to
annotate. For the normal tests, I just accepted the output (after having
vetted it carefully, of course).
This pull request introduces a new CodeQL query for detecting prompt injection vulnerabilities in Python code that targets AI prompting APIs such as `agents` and `openai`. The changes include a new experimental query, new taint-flow and type models, a customizable dataflow configuration, documentation, and comprehensive test coverage.
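For illustration, the core of such a query is typically a global taint-tracking configuration from untrusted input to the prompt arguments of the modeled APIs. The sketch below shows one possible shape, with a deliberately simplified sink model for the `openai` chat completions API; it is not the query's actual source/sink model set:

```ql
import python
import semmle.python.dataflow.new.DataFlow
import semmle.python.dataflow.new.TaintTracking
import semmle.python.dataflow.new.RemoteFlowSources
import semmle.python.ApiGraphs

private module PromptInjectionConfig implements DataFlow::ConfigSig {
  // Sources: anything the library already models as remote/user-controlled input.
  predicate isSource(DataFlow::Node source) { source instanceof RemoteFlowSource }

  predicate isSink(DataFlow::Node sink) {
    // Simplified example sink: the `messages` argument of
    // `openai.OpenAI().chat.completions.create(...)`.
    sink =
      API::moduleImport("openai")
          .getMember("OpenAI")
          .getReturn()
          .getMember("chat")
          .getMember("completions")
          .getMember("create")
          .getACall()
          .getArgByName("messages")
  }
}

module PromptInjectionFlow = TaintTracking::Global<PromptInjectionConfig>;
```

A query built on this would then report results through the usual path-problem boilerplate, e.g. `PromptInjectionFlow::flowPath(source, sink)` together with `PromptInjectionFlow::PathGraph`.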