Note how the test file is partially annotated, and how those annotations can now be expressed.
In this particular test file, absolute line numbers might have been better than relative ones. We might remove line numbers altogether, but should check more queries to see how it looks.
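For context, a minimal sketch of what such a partially annotated test file might look like, assuming the inline-expectation comment style of the Python dataflow tests; the `SOURCE`/`SINK` stubs and the exact `l:-N`/`l:N` notation are illustrative, not necessarily the real syntax:

```python
def SOURCE():
    return "tainted"  # stand-in for a taint source

def SINK(x):
    print(x)  # stand-in for a taint sink

def test_relative():
    x = SOURCE()
    y = x
    SINK(y)  # $ flow="SOURCE, l:-2 -> y"  (relative: two lines up)

def test_absolute():
    x = SOURCE()
    SINK(x)  # $ flow="SOURCE, l:13 -> x"  (absolute line number)
```

Relative offsets stay valid when unrelated parts of the file change, while absolute numbers are easier to read at a glance; which works better depends on the file.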
I don't see the value of this, so I'm just going to delete it outright.
(it actually stayed alive for quite some time in the original git history,
but never seemed to be that useful.)
The selected edges are covered by `NormalDataflowTest.ql` now, and reading the test-output changes in `edges` would just make commits larger without providing any real value.
I initially created this as a dataflow test, but then realized it could just be an ESSA test. I couldn't find any existing ESSA tests, though, so I created a new directory for it.
A slightly complicated test setup: I wanted both to capture the semantics of Python and to make sure the kinds of global flow we expect to see are indeed present.
The code is executable, and prints both when execution reaches certain files and what values are assigned to the various attributes referenced throughout the program. These values are validated in the test as well.
My original version used introspection to avoid referencing attributes
directly (thus enabling better error diagnostics), but unfortunately
that made it so that the model couldn't follow what was going on.
The current setup is a bit clunky (and Python's scoping rules make it especially so -- cf. the explicit calls to `globals` and `locals`), but I think it does the job okay.
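A heavily simplified, single-file sketch of the pattern (all names here are invented for illustration; the real test spans several files):

```python
# Module-level prints show when execution reaches each part of the program.
print("module body reached")

class Config:
    timeout = 30  # an attribute whose global flow we want the model to see

def use_config():
    print("use_config reached")
    value = Config.timeout
    # Validate the values directly, so the test file doubles as a runnable
    # program whose assertions document the expected Python semantics.
    assert value == 30, f"unexpected timeout: {value!r}"
    # Referring to names generically requires explicit globals()/locals()
    # lookups -- this is where Python's scoping rules make things clunky.
    assert globals()["Config"].timeout == 30
    assert locals()["value"] == 30

use_config()
```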
Note: I kept the modeling using the old approach with type-trackers
instead of `DataFlow::MethodCallNode`.
I would like a meta query for DCA to show sinks before doing this, so I can be absolutely sure we don't lose out on any important sinks here. I'll postpone this work to a small one-off task (added to my todo list).
I realized the modeling was done in a non-recommended way, so I changed it. It was very nice that I could use API graphs for the flask part, and a little sad that I couldn't for Django/Tornado.
It's really hard to audit that this is all good. I tried my best with `icdiff`, though -- and there is a problem with `ql/src/experimental/Security/CWE-348/ClientSuppliedIpUsedInSecurityCheck.ql` that needs to be fixed in the next commit.