Previously, there was a bug where quick eval queries would crash when
the eval snippet was in a library file.
The problem was that the `codeql resolve queries` command fails when
passed a library file. The fix is to avoid passing the library file at
all. Instead, pass the file's parent directory. This is safe because
`codeql resolve queries` only needs to know which query pack the file is
contained in, and for that purpose the parent directory is equivalent to
a file inside it.
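A minimal sketch of the idea in TypeScript (the helper name and the `.qll` check are illustrative, not the extension's actual code):

```ts
import { dirname, extname } from "path";

// Decide what to pass to `codeql resolve queries`. The CLI rejects a
// library file, but its parent directory resolves to the same query pack.
function resolveQueriesTarget(filePath: string): string {
  return extname(filePath) === ".qll" ? dirname(filePath) : filePath;
}
```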
Currently `resolve ml-models` only supports queryspecs, i.e. .ql files,
.qls files, directories, and query pack specifications. Therefore quick
evaluation within a library isn't supported.
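A hedged sketch of what a guard for this limitation could look like (the helper is hypothetical):

```ts
import { extname } from "path";

// `codeql resolve ml-models` only accepts queryspecs (.ql/.qls files,
// directories, query packs), so skip it for library (.qll) files.
function canResolveMlModels(queryPath: string): boolean {
  return extname(queryPath) !== ".qll";
}
```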
In order to run our cli-integration tests, we're required to have a
local copy of the codeql CLI repo. We can then run the tests by running
the `Launch Integration Tests - With CLI` task from inside VS Code.
(See CONTRIBUTING.md for details.)
If we don't have the CLI repo cloned locally, or `launch.json` doesn't
point to it, the tests still attempt to run without giving any clear
indication of what the problem is.
Let's fail fast instead and add an actionable error message to the output.
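A minimal sketch of such a fail-fast check, assuming the CLI repo path arrives via configuration (variable and function names are illustrative):

```ts
import { existsSync } from "fs";

// Throw early, with a message that tells the user how to fix the setup,
// instead of letting the tests run and fail obscurely later.
function assertCliRepoPresent(cliRepoPath: string | undefined): void {
  if (!cliRepoPath || !existsSync(cliRepoPath)) {
    throw new Error(
      "Integration tests require a local clone of the codeql CLI repo. " +
        "Clone it and point to it in launch.json (see CONTRIBUTING.md).",
    );
  }
}
```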
The controller repo is set via the `codeQL.variantAnalysis.controllerRepo`
setting in VS Code.
While we validate that the repo is not null and that it matches the
`<owner>/<repo>` format, we still allow you to provide a non-existent
repo (e.g. a misspelled one).
When the MRVA request is sent over to the API, the API verifies that the
repo exists and returns a very generic "Not Found" response, which is
then logged in the "Output" tab in VS Code.
We'd like to give users a better indication of what has gone wrong in
this case so we're making the error message more verbose.
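A sketch of the shape of this change (the format check and the wrapped error are illustrative; the real code and messages may differ):

```ts
// Validate the setting's shape up front.
const CONTROLLER_REPO_FORMAT = /^[^/]+\/[^/]+$/;

function validateControllerRepo(nwo: string | undefined): void {
  if (!nwo || !CONTROLLER_REPO_FORMAT.test(nwo)) {
    throw new Error(
      "Please set codeQL.variantAnalysis.controllerRepo to a repository in the <owner>/<repo> format.",
    );
  }
}

// Wrap the API's generic "Not Found" in something actionable.
function rethrowNotFound(nwo: string, error: { status?: number }): never {
  if (error.status === 404) {
    throw new Error(
      `Controller repository "${nwo}" was not found. Check that the repository exists and that you have access to it.`,
    );
  }
  throw error;
}
```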
Co-authored-by: Charis Kyriakou <charisk@github.com>
Co-authored-by: Shati Patel <shati-patel@github.com>
How did this ever work? It was using an old variant of the
qlpack name.
Also, this commit makes the unhandledRejection handler less verbose.
The handler gets hit when the tests end and there is a cancellation,
which is not an error.
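A hedged sketch of the quieter handler (how cancellations are detected here is an assumption):

```ts
process.on("unhandledRejection", (reason) => {
  const message = reason instanceof Error ? reason.message : String(reason);
  // Cancellations at test shutdown are expected, not errors.
  if (/cancell?ed/i.test(message)) {
    return;
  }
  console.error("Unhandled rejection:", message);
});
```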
For an inexplicable reason, the first time the selection
occurs, the value is incorrect. We often miss this error
in our tests if the expectation is reached before the
selection changed event fires.
It seems that the _second_ time the selection changed
event fires, the value is correct.
This change ensures we wait for the second selection change, and we
avoid running expectations until then.
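A minimal sketch of a test helper that does this, assuming a vscode-style `Event<T>` for selection changes (the helper itself is hypothetical):

```ts
import type { Event } from "vscode";

// Resolve only on the *second* firing of the event, since the first
// selection-changed event carries a stale value.
function onSecondEvent<T>(event: Event<T>): Promise<T> {
  return new Promise((resolve) => {
    let count = 0;
    const listener = event((e) => {
      if (++count === 2) {
        listener.dispose();
        resolve(e);
      }
    });
  });
}
```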