This switches the request to the GitHub API for listing CodeQL databases
from a custom request to the Octokit REST API. This allows us to be more
type-safe without introducing our own types.
The update to `@octokit/openapi-types` was necessary to have access to
the `commit_oid` field.
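A minimal sketch of what the Octokit-based request could look like (the route is the documented GitHub endpoint for listing CodeQL databases; the helper and parameter names are hypothetical):
```typescript
import { Octokit } from "@octokit/rest";

// Hypothetical helper; the typed route means the response (including
// `commit_oid`) is fully typed without defining our own types.
async function listCodeqlDatabases(
  octokit: Octokit,
  owner: string,
  repo: string,
) {
  const response = await octokit.request(
    "GET /repos/{owner}/{repo}/code-scanning/codeql/databases",
    { owner, repo },
  );
  return response.data;
}
```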
This adds a source property to the database item to store the source of
the database, for example GitHub or an internet URL. This will be used
to automatically check for updates to GitHub-downloaded databases in the
future.
This fixes the contextual queries when you are not in a workspace with
the submodule and do not have any downloaded packs in the package cache.
In that case, the contextual queries would fail because they weren't
able to determine which pack belonged to the database.
This fixes it by downloading the `codeql/${language}-all` pack in case
no dbscheme is found for the database. After the download is complete,
it will return the expected value for the qlpacks. This should work in
almost all cases (at least for standard languages).
Ruby keyword arguments are represented differently than positional
arguments in the MaD format: they are represented as `Method[key:]`. The
framework endpoints query also returns the name as `key:`, so we can
detect these and format them as such.
This moves the creation of possible method argument options from the
view to the languages. This allows differentiating between the
languages, for example by using `Argument[self]` for Ruby instead of
`Argument[this]`.
This adds support for modeling types. A MaD language can now optionally
define a `type` predicate. This allows internally propagating these
models. The UI will now simply show a label "type" for type models
without any way to edit these.
This removes the `disabled` prop from the `Dropdown` component. This is
already included in the default HTML props of the `select` component,
so it's not necessary to add it again.
This prevents the creation of duplicate query pack names when creating a
query in the following ways:
- When you have selected a folder, the query pack name will include the
name of the folder. This should prevent duplicate query pack names
when creating queries in different folders.
- When the folder name includes `codeql` or `queries`, we will not
add `codeql-extra-queries-` since that would be redundant.
- After generating the query pack name, we will resolve all qlpacks and
check if one with this name already exists. If it does, we will append
an index to the name, incrementing it until the name is unique, as
sketched below.
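A rough sketch of that last step (the function and parameter names are hypothetical, not the extension's actual helpers):
```typescript
// Keep appending an increasing index until the generated pack name no longer
// clashes with an existing qlpack name.
function findUniquePackName(
  baseName: string,
  existingNames: Set<string>,
): string {
  if (!existingNames.has(baseName)) {
    return baseName;
  }
  let index = 1;
  while (existingNames.has(`${baseName}${index}`)) {
    index++;
  }
  return `${baseName}${index}`;
}
```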
This will change the skeleton query wizard to detect existing query
packs when creating a skeleton query. This allows the user to create a
query in an existing query pack that is not named
`codeql-custom-queries-{language}`.
This fixes a bug where the method row would not scroll into view when
revealing a method. The problem was that the `DataGridRow` component
on which the `ref` was set is a `display: contents` element, which
does not generate a layout box of its own. As a result, it wasn't
possible to scroll the method row into view. This fixes it by moving
the ref to the `DataGridCell` component of the first column, which is
a normal element.
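A minimal sketch of the resulting pattern, assuming a React function component and a hypothetical hook name:
```typescript
import { useEffect, useRef } from "react";

// The ref is attached to the first DataGridCell (a normal element with a
// layout box), so scrollIntoView has something to scroll to.
function useRevealRow(shouldReveal: boolean) {
  const cellRef = useRef<HTMLElement | null>(null);

  useEffect(() => {
    if (shouldReveal) {
      cellRef.current?.scrollIntoView({ block: "nearest" });
    }
  }, [shouldReveal]);

  return cellRef;
}
```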
We shouldn't be directly using the `extLogger` if we have access to the
app logger (either directly or by passing it in as a parameter). This
removes all imports of `extLogger` from the model editor directory.
This adds the ability to generate Ruby models from a database. It uses
the `GenerateModel.ql` query to do this. The query will essentially
return data in the data extensions format, so this will just parse it
and return it as `ModeledMethod` objects.
This adds the current version of the queries for Ruby to the model
editor included queries. This makes it work without needing to check out
a separate branch of the CodeQL repository/submodule. I've removed most
commented out code from the queries, but the other parts should match.
This only makes Ruby available in the model editor when the following
is set in the settings.json (workspace or user) file:
```json5
{
  "codeQL.model.enableRuby": true,
}
```
Make the minimum changes necessary for prototype Ruby support in the
model editor.
This consists of:
- Reading/writing modelled methods from/to data extensions in the
dynamic languages format
- Special-casing Ruby in a few places where Java/C# was previously
assumed.
This will use the MaD's definition of a method signature when decoding
BQRS files. This will allow us to change the method signature definition
for dynamic languages.
Before, if you had selected a folder or file within for example
`codeql-custom-queries-java` and selected `java` as the language, it
would create a nested folder within `codeql-custom-queries-java` with
the name `codeql-custom-queries-java`. This is unexpected for the user,
who would expect a new query to be created within
`codeql-custom-queries-java`. This fixes that by checking for this
specific condition. It does not fix it for all scenarios, such as where
the selected file/folder is nested multiple levels deep within the
`codeql-custom-queries-java` folder.
This will change the behavior of the "Create new query" command to
create the new query in the same folder as the first selected item in
the queries panel. If no items are selected, the behavior is the same
as before.
I've used events to communicate the selection from the queries panel to
the local queries module. This is some more code and some extra
complexity, but it ensures that we don't have a dependency from the
local queries module to the queries panel module. This makes testing
easier.
This changes the skeleton query wizard to not prompt for database
download after creating a query by default. Instead, it will show a
message with a button to download a database which will launch the same
prompt.
This will change the "Create new query" command to use the selected
language when creating a new query. If no language is selected, it will
still prompt the user to pick a language.
When calling for example `showAndLogExceptionWithTelemetry`, the stack
trace would be sent to Application Insights, but there was no way to
see the stack trace from within VS Code. This will add the stack trace
to the log by returning it from `fullMessageWithStack` and using it in
the appropriate places.
It is possible to open the model editor without opening a folder, but
this gave an unhelpful error message. This commit adds a more helpful
error message.
When a local query fails (for example, if it is cancelled), it may still
have an evaluation log. We weren't generating evaluation log summaries
in these cases, so the options to view the summary text and to use the
evaluation log viewer would not be available. This fixes it by also
trying to generate the summary in the case of a failed query.
This will ensure that when "Show Evaluator Log (Raw JSON)" is used on a
cancelled query history item, we will still show it if it exists. This
changes the error messages in other cases to be more specific.
This will add the `QueryOutputDir` to the `InitialQueryInfo` and
populate it when creating a local query history item. This will allow us
to open the results directory or show the evaluator log without a
completed query.
This changes the usage data provider tree items to keep a reference to
the method and usage instead of only including their properties in the
tree item. This makes it easier to find the original method and usage
when revealing an item in the tree. It also removes the `getParent` call
in `getTreeItem`.
The main reason for this fix is to ensure
`codeQLModelEditor.jumpToMethod` gets the correct `usage` argument.
It received the tree item before, but now we can actually pass the
usage that was clicked on.
When the GitHub API returns an error for a missing default branch, we
will now show a custom error message. This custom error message includes
a link to the page to create the branch. The error is detected using the
`errors` field on the response that is now being returned.
This makes it possible to decode source maps containing references to
code that is not part of the extension. If it finds any such references,
it will simply not decode the source map and use the original stack
trace instead.
`logFileLocation` was not set after a query finished running. I don't
know when this bug was introduced. I think it goes as far back as
the refactor to remove the old query server.
This creates new tree item types for methods and usages such that these
can contain references to their parent and children. This allows us to
easily find the parent of a usage and to find the children of a method.
This removes an expensive `find` call in `getParent`.
When opening a library group in the model editor, unmodeled methods
would always be marked as unsaved, even if there were no changes. This
was because the `ModelKindDropdown` component did not properly take into
account that the `kind` for an unmodeled method should be an empty
string. It would always try setting it to `undefined`, which would cause
the method to be marked as unsaved. This fixes it by checking if there
are valid kinds before setting the kind to the first one.
This improves the immutability of the modeling store state by using
TypeScript's readonly types to ensure that state can only be modified
from within the modeling store or when it's copied. This mostly consists
of adding `readonly` to properties and arrays, but this also adds a
`DeepReadonly` type to use in `postMessage` arguments to ensure that
readonly objects can be passed in. `postMessage` will never modify the
objects, so this is safe.
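A minimal sketch of what such a `DeepReadonly` helper could look like (the actual type in the codebase may special-case more shapes):
```typescript
type DeepReadonly<T> = T extends (infer E)[]
  ? ReadonlyArray<DeepReadonly<E>>
  : T extends object
    ? { readonly [K in keyof T]: DeepReadonly<T[K]> }
    : T;

// A postMessage-style helper can then accept readonly state directly, since
// it never mutates its argument.
declare function postToWebview<T>(message: DeepReadonly<T>): void;
```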
The purpose of this change is to add a command that clears the cache except for predicates marked `cached`.
In contrast, the existing "VSCode: Clear Cache" command clears everything (`--mode=brutal`).
This calls into the query server's `evaluation/trimCache` method;
however, its existing behaviour is to do a database cleanup with `--mode=gentle`.
This is not well documented, and `--mode=normal` would give the desired behaviour.
Accordingly, this approach is dependent on separately changing the backend behaviour to `--mode=normal`.
Other possible amendments to this commit would be to not touch the legacy client
(replacing required methods by failing promises, since the legacy server is fully deprecated already),
or to have less duplication (by introducing more arguments — however,
I'm applying the rule of thumb that >3 copy-pastes are required for the introduction of a deduplicating abstraction).
This fixes a bug where the validation of modeled methods would not
consider the kind of the modeled method, and would therefore give an
error when there was e.g. a neutral sink and a non-neutral summary.
When using the "CodeQL: Install pack dependencies" command, we would
show packs that are located in the package cache or distribution. Since
there are no dependencies to install for these packs, these options are
not useful.
This will filter out any packs that are not in the workspace folders
when showing the quick pick. This should exclude most packs if you are
in a workspace without the `codeql` submodule and should be a lot more
intuitive in those cases. If you are in a workspace with the `codeql`
submodule, it will still show all the packs.
This will change the add button in the method modeling panel so it is
only disabled when there are no unmodeled methods, rather than also
when there is exactly 1 unmodeled method. This should be more intuitive
for users since they can see on a single screen that there is an
unmodeled method.
This will respect the user's `saveBeforeStart` setting when running a
variant analysis. This re-uses the existing `saveBeforeStart` function
that is used when running local queries. The default behavior if the
setting is not set is to save all open named files.
This sorts the methods in the methods usages panel the same as in the
model editor. Since this is dependent on the mode, we need to keep track
of the mode in the modeling store, so this also adds a mode field to the
db state.
This fixes three bugs related to the modeling store and view states:
- In the model editor view, when `setModeledMethods` was called, it
would do it on the active database, instead of the database that the
view was showing. This should not result in any visible bugs since the
active database is always the one that is being shown (in theory), but
I can imagine that it could cause issues if showing multiple model
editors next to each other.
- In the method modeling panel, the "reveal in editor" button would
always show the already active model editor. Therefore, if you had
multiple open and were still viewing the method of the first one, it
would always show the second one.
- In the method modeling panel, the same bug would cause the incorrect
modeled methods to be updated.
When selecting a method that has no modeled methods, the modeling state
would not contain an entry for the method signature. This would cause
the `modeledMethods` to be `undefined`, which is not allowed according
to its type.
This change sets the `fullMessage` of the
`showAndLogExceptionWithTelemetry` to include the stack trace. This
makes it possible to find the source of the error rather than only
knowing that a specific error occurred. If the error does not have a
stack trace (which should be rare) the message will be the same as
before.
This converts all remaining extension host code to handle multiple
models per method. The only place where we're using the legacy format
is in the webview and in the boundary between the webview and the
extension host.
This switches all places where we're retrieving some model configuration
to use the `ModelConfig` or `ModelConfigListener` types. This makes it
much easier to mock these settings in tests.
This also adds a listener to the `ModelEditorView` to send the new view
state when any of the settings is changed. This should make it easier
to test settings changes in the model editor without having to re-open
the model editor.
This updates the method modeling panel's view state when the
`codeQL.model.showMultipleModels` setting changes. This will ensure that
the setting updates without needing to restart VS Code since this view
is much harder to restart than the model editor.
This adds a view state to the method modeling panel similar to the
model editor. This will be used to send the state of the show multiple
models feature flag to the webview so this can be used to selectively
show/hide components in the method modeling panel.
This will change the input/output types for modeled methods in the
`modeled-method-fs.ts` file to take in multiple models per method. This
removes the need for conversion functions between this file and
`yaml.ts` files. Instead, the conversion functions are done when calling
any functions defined in `modeled-method-fs.ts` files.
This changes YAML parsing/creating functions for the model editor to
handle multiple models per method. The changes in the actual YAML
handling are fairly small because the format itself already supports
multiple models per method.
I've introduced a few helper functions to convert between the old and
new types. This should only be necessary while we're in the middle of
the transition to the new types and can be removed later. For now,
we'll just take the first model in the array when converting from the
new to the old type. This is a change in behavior, since currently
we always take the last model in the array, but that behavior is
undocumented and unsupported, so it should be fine to change it.
Use it for `MultiCancellationToken`. And ensure that adding a
cancellation requested listener to the `MultiCancellationToken` will
forward any cancellation requests to all constituent tokens.
This will reveal a method for which "Review in editor" is clicked in the
model editor view: it will expand the group (library/package) in which
the method is located, scroll to the method, and highlight the method.
If the user clicks anywhere on the page, the highlight will be removed,
but the group will remain expanded.
This will call a method on the correct model editor view when the user
clicks on "Review in editor". This does not yet do anything to the view;
this will be added in a follow-up commit.
This is used for registering which model editor views are currently
active. This will be used to determine which view to send the "reveal
method" command to. It can also be used in the future to limit the
number of instances of the model editor that can be opened for a
database.
This uses the same pattern as variant analyses with a separate interface
for the view to avoid having circular dependencies.
This is a cancellation token that cancels when any of its constituent
cancellation tokens are cancelled.
This token is used to fix a bug in Find Definitions. Previously, when
clicking `CTRL` (or `CMD` on macs) inside a source file in an archive
and hovering over a token, this will automatically invoke the
definitions finder (in preparation for navigating to the definition).
The only way to cancel is to move down to the message popup and click
cancel there.
However, this is a bug. What _should_ happen is that if a user moves
their mouse away from the token, the operation should cancel.
The underlying problem is that the extension was only listening to the
cancellation token from inside `getLocationsForUriString`; the
cancellation token used by the Language Server Protocol to cancel
operations in flight was being ignored.
This fix will ensure we are listening to _both_ cancellation tokens
and cancel the query if either is cancelled.
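A minimal sketch of such a token, assuming the standard VS Code cancellation APIs (the real `MultiCancellationToken` may differ in detail):
```typescript
import { CancellationToken, EventEmitter } from "vscode";

class MultiCancellationToken implements CancellationToken {
  private readonly tokens: CancellationToken[];
  private readonly emitter = new EventEmitter<void>();

  constructor(...tokens: CancellationToken[]) {
    this.tokens = tokens;
    // Fire our own event as soon as any constituent token is cancelled.
    for (const token of tokens) {
      token.onCancellationRequested(() => this.emitter.fire());
    }
  }

  get isCancellationRequested(): boolean {
    return this.tokens.some((t) => t.isCancellationRequested);
  }

  get onCancellationRequested() {
    return this.emitter.event;
  }
}
```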
This will change the `AbstractWebview` to dispose its tracked objects
(using `this.push`) when the panel is disposed rather than when the
view is disposed. This makes `this.push` actually useful in a view.
Before, the objects would only get disposed when the extension itself
was disposed.
Previously, if there was an explicit restart of the query server (e.g.
by changing a configuration setting), then the query server process
would be started twice: once by the `close` handler and once by the
restart command.
By adding the `removeAllListeners` to the dispose method, we ensure that
when the query server shuts down gracefully, there won't be a `close`
listener that is going to restart it a second time if there is a
different way of restarting it.
It seems like Node's native `fetch` implementation isn't quite working
right with Octokit and MSW. This switches to using `node-fetch` like
we're already doing for all other requests (e.g. downloading databases).
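Roughly, the change boils down to passing the fetch implementation through Octokit's request options (a sketch; the real wiring in the extension may go through a shared helper):
```typescript
import { Octokit } from "@octokit/rest";
import fetch from "node-fetch";

// Use node-fetch for all Octokit requests instead of Node's built-in fetch.
const octokit = new Octokit({
  request: { fetch },
});
```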
This updates the model editor queries to the version that will be merged
into the CodeQL repository. There are some slight changes to the output
format, so we need to slightly change the BQRS decoding of those
queries.
The queries themselves were copied from the two PRs with some minor
additions at the end since these were changes in core CodeQL library
files.
After the upgrade to the correct types for js-yaml, the return type
of `load` is correctly typed as `unknown`. This means that we can't
use the return value directly, but need to validate it first.
This adds such validation by generating a JSON schema for a newly
created type. The JSON schema generation is very similar to how we do
it in https://github.com/github/codeql-variant-analysis-action.
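A minimal sketch of the shape of that validation, assuming Ajv as the validator and a hypothetical hand-written schema (the real code generates the schema from the TypeScript type):
```typescript
import { load } from "js-yaml";
import Ajv from "ajv";

interface ExtensionPackMetadata {
  name: string;
}

// Hypothetical schema; in practice it is generated rather than hand-written.
const schema = {
  type: "object",
  properties: { name: { type: "string" } },
  required: ["name"],
};

function parseMetadata(yamlText: string): ExtensionPackMetadata {
  const data: unknown = load(yamlText); // correctly typed as `unknown`
  const ajv = new Ajv();
  if (!ajv.validate(schema, data)) {
    throw new Error(`Invalid metadata: ${ajv.errorsText()}`);
  }
  return data as ExtensionPackMetadata;
}
```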
Before this change and starting with CLI v2.14.3, if you wanted to run
a variant analysis query and the pack it is contained in has at least
one query that contains an extensible predicate, this would be an error.
The reason is that v2.14.3 introduced deep validation for data
extensions. We are not copying the query that contains an extensible
predicate to the synthetic pack we are generating. This means that deep
validation will fail because there will be extensions that target the
missing extensible predicate.
This change avoids the problem by copying any query files that contain
extensible predicates to the synthetic pack. It uses the internal
`generate extensible-predicate-metadata` command to discover which
query files contain extensible predicates and copies them.
* Don't download artifacts for repos with no results
* Remove getVariantAnalysisRepoResult requests for repos with no results
* Run fix-scenario-file-numbering for mrva-problem-query-success scenario
* Update CHANGELOG
When running tests using `--runTestsByPath <some-path>`, the tests were
being run twice because jest-runner-vscode
[resolves test paths](0c98dc12ad/packages/jest-runner-vscode/src/runner.ts (L57-L66)),
while the original arguments were also still passed to Jest. So, the
arguments Jest would receive would look something like
`test/vscode-tests/no-workspace/databases/local-databases/locations.test.ts /Users/koesie10/github/vscode-codeql/extensions/ql-vscode/test/vscode-tests/no-workspace/databases/local-databases/locations.test.ts`
which would cause Jest to run the tests twice. This fixes this by
resolving the paths to their absolute paths, and then removing any
duplicates.
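The de-duplication itself is small; a sketch of the idea (the helper name is hypothetical):
```typescript
import { resolve } from "path";

// Normalize every test path argument to an absolute path, then drop
// duplicates so Jest sees each test file only once.
function dedupeTestPaths(paths: string[], cwd: string): string[] {
  const absolutePaths = paths.map((p) => resolve(cwd, p));
  return Array.from(new Set(absolutePaths));
}
```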
This commit fixes a bug in the extension where the qhelp preview was not
being refreshed after the first time the preview was rendered. The
reason is that vscode will not refresh the markdown preview unless the
original file with the markdown in it is already open in the editor.
This fix will briefly open the raw markdown, refresh the preview and
close the raw markdown.
An upcoming change in the CLI will require that the extensible
predicates that are targeted by a data extension need to be available
in order for the `resolve extensions` command to succeed.
There are a handful of tests that are failing with this new CLI. This
change will update the tests so that the `codeql/java-all` pack is
available in the tests and ensures they pass.
* Pull out createDataExtensionYamls into yaml.ts (2db42e3e)
* Move saveModeledMethods to a separate file (52f7cac0)
* Move loadModeledMethods to a separate file (ba27230e)
* Split out listModelFiles from loadModeledMethods (c512a11e)
* Add some tests of listModelFiles (752cf8ab)
This removes the call type as shown for an unmodelable method. We still
need to decide how to show this information, so this may be added back
in later.
* Move code-tour.ts to /code-tour
* Move qlpack-generator.ts to /local-queries
* Move query-status.ts to /query-history
* Move skeleton-query-wizard.ts to /local-queries
* Add version constraint for Check errors
* Refactor parts of the ideserver out of extension.ts
* Give visibility information to the ide-server.
This allows it to report errors on visible files
eagerly.
This adds a refresh button to the data extensions editor when the
framework mode feature flag is enabled. If you are using framework mode,
you can have multiple tabs of the data extensions editor open in which
you are modeling the library separately from the application. When you
save the library in framework mode, the application mode will not
refresh and show that these calls have been modeled. Rather than using
apply, which might also save all modeled methods, you can now use the
refresh button to refresh the external API usages and whether they are
supported.
This excludes methods defined in tests in framework mode, significantly
cutting down on the number of methods shown that would need to be
modeled.
For C#, this just checks that the file is not a test file, as defined
by the QL library.
For Java, this makes a copy of the internal
[`ModelExclusions.qll`](249f9f863d/java/ql/lib/semmle/code/java/dataflow/internal/ModelExclusions.qll)
file to avoid the use of internal modules. This module will tell us
whether a method is "interesting" to model or not. Not all of the checks
in this module need to happen for framework mode, but these checks might
be useful for telling a user whether a method is interesting to model
in application mode.
This makes the method name and parameters in framework mode a link to
the definition of the method. In framework mode, the `usages` contains
1 element, which is the location of the definition of the method.
Therefore, we can simply use `jumpToUsage` to jump to the definition.
Similar to https://github.com/github/vscode-codeql/pull/2553, this
changes the C# query to correctly report the name of nested types. I
couldn't find a `nestedName` method for C#, so this adds one in the
`AutomodelVsCode` library.
C# seems to use `+` as a separator for nested types, as reported by
`getQualifiedName()`:
```
GitHub.Nested.MyFirstClass+NestedClass
```
The `getApiName()` will now report:
```
GitHub.Nested#MyFirstClass+NestedClass.Test()
```
Adds a command to run all queries in a certain folder.
This uses the existing `runQueries` command, which lets you run multiple queries against the selected local database.
We don't have a corresponding command for running multiple variant analysis queries, so I haven't implemented that.
This changes the Java `CallableMethod.getApiName()` to use `nestedName`
instead of `getSourceDeclaration`. `getSourceDeclaration` would return a
`RefType`, on which the `toString()` method returns its `getName()`.
However, for nested types this wouldn't work and wouldn't include the
enclosing type. This fixes it by using `nestedName` which matches the
method that is also used for determining whether a type matches an
extensible predicate.
* Add version constraint for quick-eval-count
* Add quick eval count context.
* Add support for running quick-eval-count from the command palette
* Adjust name for quick-eval-count-queries
* Add changenote for quick-eval-count.
* QuickEval:Address review comments
* Fix rebase conflict in changelog
This refactors the data extensions editor queries to use a new
`AutomodelVsCode` module. This module is based on the `ExternalApi`
module, but is more general and can be used for retrieving public
methods from the source as well. The actual conditions are now in the
queries themselves.
This reduces the duplicated module in the framework mode query and will
mean that when we update the `ExternalApi` module, we will just have to
port it to the `AutomodelVsCode` module, and not to the `ExternalApi`
and a separate framework mode query.
This changes the sorting of the methods sent to LLM to match the order
shown in the data extensions editor. This will ensure that the methods
which are shown first in the data extensions editor are also modeled
first.
This will allow users to override the default directory in which
extension packs are created by using the
`codeQL.dataExtensions.extensionsDirectory` setting. This setting can be
overridden per language, so the user could create the following
configuration to set the extension pack setting for Java only:
```json
"[java]": {
"codeQL.dataExtensions.extensionsDirectory": "/Users/user/github/vscode-codeql-starter/codeql-custom-queries-java",
}
```
This will remove the user input for a model file and will instead create
1 model file per library (JAR/DLL). The model filename will be based on
the JAR/DLL name, but will remove the version number and the file
extension. It will also normalize the name.
These files will be created automatically, and the editor now also reads
in all files contained in an extension pack to read the modeled methods.
This could result in duplicates if the user has created a different file
to contain the same modeled methods, but this is an edge-case that we're
explicitly not handling.
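A rough sketch of the filename derivation (the exact normalization rules and the `.model.yml` suffix are assumptions, not the extension's actual implementation):
```typescript
function modelFileNameForLibrary(libraryFileName: string): string {
  return (
    libraryFileName
      // Drop the .jar/.dll extension.
      .replace(/\.(jar|dll)$/i, "")
      // Drop a trailing version number such as "-2.7.2".
      .replace(/-\d+(\.\d+)*$/, "")
      // Normalize the remaining name.
      .toLowerCase()
      .replace(/[^a-z0-9.-]/g, "-") + ".model.yml"
  );
}

// e.g. modelFileNameForLibrary("spring-boot-2.7.2.jar") === "spring-boot.model.yml"
```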
The `logging` part of `common` was exported in `common/index.ts` and
could be imported by importing `common`. I don't think this makes a lot
of sense, so I removed it.
The `common/logging/index.ts` also contained exports of the
`common/logging/vscode` directory, which means that importing
`common/logging` automatically brings in the `vscode` module. This
removes that export, so now there are two separate imports needed for
importing the common part and the `vscode` specific part. This should
make it easier to keep them separate and be more explicit about what
you're importing.
This will change how extension packs are named in the data extensions
editor. Before, the user had to pick a workspace folder and a name for
the extension pack. Now, the workspace folder will be picked
automatically if we can detect it (i.e. it follows the naming structure
we expect), or the user will still need to select it. The extension pack
name is always auto-generated based on the database name and the
database language.
This adds a new `codeQL.dataExtensions.disableAutoNameExtensionPack`
setting to disable this behavior while we are still working on changing
how the data extensions editor works.
This adds a new library column to the data extensions editor containing
the JAR or DLL file the method is defined in. This will be used to group
methods by library in the future. For now, it just shows in a column.
See https://github.com/github/vscode-codeql/pull/2490#discussion_r1226437598
for more explanation. This will make the class more useful for future use cases
where we don't want the behaviour of only calling update when there isn't
another refresh scheduled. I also think it doesn't negatively affect other
users such as the query test discovery. The effect should be that we'll see
more updates to the UI. These updates will get overwritten quickly, but they
are all genuine snapshots of the filesystem at the point the discovery process
ran, so they aren't incorrect, or aren't more incorrect than continuing to show
the old state before any discovery ran.
To increase the use of the `app` logger, this replaces the direct use of
`extLogger` by the `app.logger` where possible. This should not change
the behavior since the `extLogger` is the logger used by the `app`.
This moves the `showAndLog` family of functions to the `common/logging`
directory. It explicitly moves the `showAndLogExceptionWithTelemetry`
function to the `common/vscode/logging.ts` file because it still has a
dependency on the `telemetryListener`, which depends on the `vscode`
module.
It seems like some CLI tests are hanging and only completing after 6
hours when they run into the default timeout. This updates the timeout
to 30 minutes. All CLI tests should complete in 30 minutes, so this
should ensure that they are cancelled when they are stuck.
This moves the Webview HTML generation used by `AbstractWebview` out of
`interface-utils.ts` and into a new file `webview-html.ts` in the
`common/vscode` directory.
When packaging an extension pack, unscoped extension pack names are not
allowed and calling `codeql pack bundle` will fail with an error. This
command will be called when running a variant analysis, so these packs
will not work for a variant analysis.
To improve the user experience, we now only allow scoped extension pack
names. This means that the user will now have to enter a scope when
creating a new extension pack.
This option was used to ignore source archives for `.testproj`
databases. It is only set to `true` or `false` when creating the
database and could not be changed, so I don't think we need this option.
It can simply be derived from the database URI. This simplifies handling
of databases a bit.
This moves the `refresh` method from `DatabaseItem` to `DatabaseManager`
and makes it private. This makes the `DatabaseItem` interface smaller
and more focused and ensures that `refresh` cannot be called from
outside of the `DatabaseManager`.
This will allow us to implement specific behavior on the
`DatabaseItemImpl` which is not available on the `DatabaseItem`. This
will allow us to make the surface area of the `DatabaseItem` smaller.
Generated variant analysis packs will use the original name
of the pack that the query is located in. This is to support
some future work where we do extra validation of data extensions.
If the query is not in a pack, the default name is used.
This adds a warning when the user is using an unsupported version of the
CodeQL CLI. The warning is shown once per session, and only if the
version is older than the oldest supported version.
When the user filters the repositories, the buttons should reflect that
the results are filtered and that the user is not exporting or copying
all the results. If the user has selected repositories, the buttons
should still say that they are exporting selected results.
This changes the text of the export/copy buttons on a variant analysis
when at least one repository is selected. This makes it more clear that
the user is only exporting/copying the results of the selected
repositories.
The compare view typically works by matching the result sets of
queries. It only allows the result sets of queries with identical
names to be compared.
Manually run queries use `#select` as the default result set. However,
quick eval queries use a different, generated, name. Therefore, these
two kinds of queries cannot be compared.
This commit changes the heuristics so that if there are no identically
named result sets, the first result set of each query that is prefixed
with `#` is used (this is the default result set).
It also shows a slightly nicer error message if there are no comparable
result sets.
The data extensions editor was always setting the `provenance` field of
MaD to `manual`. This will change the `provenance` to be one of
`editor-manual` (for models which were added by the user),
`df-generated` (for models generated by the dataflow generator), or
`df-manual` (for models generated and then edited). This makes it easier
to trace the origin of a model.
This will change the data extensions editor generator to resolve the
queries based on the `modelgenerator` tag. This removes the requirement
of having a `ql` folder in the workspace.
I chose to use the path instead of the `id` for now to avoid having to
resolve the query 4 times. This also avoids the need to map the language
names to the language ID in the tag (i.e. `csharp` -> `cs`).
* Log stdout when servers are terminated with errors.
This logs the last stdout chunk (probably the last line) if things
went wrong. This can sometimes be useful for debugging.
It also prints the signal when killed by a signal
(rather than printing null)
* Restart/Abort the queryserver if the process dies.
This cancels any running tasks and gives a limited number of restarts.
* Update extensions/ql-vscode/src/codeql-cli/cli.ts
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
* Update extensions/ql-vscode/src/query-server/query-server-client.ts
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
---------
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
`url.parse` is deprecated and should be replaced by the WHATWG URL API,
so this makes that change. The `protocol` and `host` properties are
unchanged, so no other changes are needed.
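The change is mechanical; roughly:
```typescript
// Before (deprecated):
//   const { protocol, host } = url.parse(value);
// After, using the WHATWG URL API; `protocol` and `host` behave the same for
// the URLs we deal with.
const { protocol, host } = new URL("https://github.com/github/codeql");
```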
This will make both the `askForLanguage` and `findLanguage` functions
return a `QueryLanguage` instead of a `string`. This will make it harder
to make mistakes when using these functions.
There are also some other changes with regards to `QueryLanguage` such
that we never need to use `as QueryLanguage` explicitly anymore, except
for the new `isQueryLanguage` function. The only remaining place that I
know of where we're using a `string` to represent the `QueryLanguage`
is in a database item's language, but this is harder to change and may
be relied upon by language authors.
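A minimal sketch of what a guard like `isQueryLanguage` looks like (the enum members shown here are illustrative, not the full list):
```typescript
enum QueryLanguage {
  CSharp = "csharp",
  Java = "java",
  Ruby = "ruby",
}

function isQueryLanguage(value: string): value is QueryLanguage {
  return Object.values(QueryLanguage).includes(value as QueryLanguage);
}

// Call sites no longer need `as QueryLanguage`:
function toQueryLanguage(value: string): QueryLanguage | undefined {
  return isQueryLanguage(value) ? value : undefined;
}
```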
This will suppress showing the same error message when monitoring the
variant analysis fails multiple times in a row. This is useful when
e.g. the internet is disconnected or the endpoint is non-functional for
any other reason.
This will show the error message at least once, and then only show it
if there has been a successful attempt in between or when the error
message is different. This should result in a much less noisy
experience.
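A sketch of the suppression logic (class and method names are hypothetical):
```typescript
class MonitoringErrorReporter {
  private lastShownMessage: string | undefined;

  // Call after a successful poll so the next error is shown again.
  reportSuccess(): void {
    this.lastShownMessage = undefined;
  }

  // Only surface an error if it differs from the last one we showed.
  reportError(message: string, show: (msg: string) => void): void {
    if (message !== this.lastShownMessage) {
      show(message);
      this.lastShownMessage = message;
    }
  }
}
```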
For `@kind path-problem` queries, it seems like the BQRS contains four
result sets: `edges`, `nodes`, `subpaths`, and `#select`. The user is
probably interested in the results of `#select` since that contains the
direct result of the query. This changes the extraction of raw (BQRS)
results to always prefer `#select` over any other result set. If it
can't find a result set named `#select`, it will fall back to the first
result set in the BQRS like before.
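The heuristic itself is a one-liner (result set type simplified for the sketch):
```typescript
interface ResultSet {
  name: string;
}

// Prefer the `#select` result set; otherwise fall back to the first one.
function chooseResultSet(resultSets: ResultSet[]): ResultSet | undefined {
  return resultSets.find((rs) => rs.name === "#select") ?? resultSets[0];
}
```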
After a bit of refactoring, the templates are no longer being
passed to the CFG viewer and this is preventing the viewer from
running.
This change fixes it.
An internal option to help library authors run and debug
the find references and find dependencies contextual queries
without relying on the implicit cache.
I initially started defining an enum, but I'm not able to import it in package.json,
where the list of options is defined (under `codeQL.createQuery.autogenerateQlPacks`).
This was only a problem for the new test UI, because the third-party Test Explorer extension we used before must have had this filter in its implementation.
When running "Create Query", if the user escapes out of the language quickpick we don't need to throw an error. Instead we can just show a `UserCancellationException` notice.
We were only clearing the pack cache when a pack file was modified. When
using a script to create a new pack, the pack file is created at once
without a change event firing. This will change the behavior to also
clear the pack cache when a pack file is created or deleted.
And add tests to check this.
I've had to adapt the existing `findExistingDatabaseItem` method
to receive params so that I'm better able to send it a language
and a list of database items.
Offer non-codespace users the option to configure their storage folder for skeleton packs.
Suggested here:
https://github.com/github/vscode-codeql/pull/2310#issuecomment-1507428217
At the moment we're choosing to create our skeleton packs in the first
folder in the workspace.
This is fine for the codespace template because we can control the
folder structure in that repo.
For users outside of this we'd like to offer them the option to choose
where to save their skeleton packs.
After some testing of the wizard with a single rooted workspace
(`github/code-scanning`) we discovered a general VSCode extension
bug whereby after we download a database, we don't select it.
This isn't an issue in the wizard, but it does affect us as it means
we'll generate the QL pack, download the db for you but then you won't
know that we haven't selected your database.
So let's make sure our flow works for this case by explicitly selecting
the database once it's downloaded.
We've noticed by testing that we need to set the current database item
before we call `addDatabaseSourceArchiveFolder()`. Once that method is
called, the call to `setCurrentDatabaseItem()` is ignored.
We've had to make some changes to the openDatabase() method to select
a database item by default, since most places where we call `openDatabase()`
also immediately select the item.
There is one exception [1] in the test-runner.ts file, where we set the
current database item under special conditions.
For this reason, we've made the behaviour configurable and tried to add
some descriptive naming to the params so that it's easy to understand
what the config is doing.
[1]: 4170e7f7a7/extensions/ql-vscode/src/test-runner.ts (L120-L124)
This removes the link to the model file when it does not exist. It will
still show the filename of the model file. When clicking on "Apply", it
will refresh whether the model file exists after writing the file.
This fixes an issue where we would download a database every time
we create a new QL pack. Sometimes, a database is already available
so let's always check for it, regardless of the existence of a QL
pack.
This will filter the extension packs shown to the user when selecting an
extension pack to use in the data extension editor to only include the
extension packs that are compatible with the language of the database
item.
Unfortunately, this required quite a few changes to the tests to ensure
the extension packs are actually set up properly since it's now reading
the extension pack files.
This will add the extension pack name to the data extensions editor and
allow the user to click on it to go to the folder of the extension pack
in the explorer panel.
This adds the model filename to the data extensions editor and will also
allow the user to go to the model file by clicking on the filename.
This also updates the general UI to be somewhat more compact by moving
the modeled percentages to be below the header in 1 line.
When you have just created a model file using the quick picker/input
box, the data extension editor will try to read it and fail with an
error message. This adds a check to ensure the model file exists and if
it doesn't, it will not try to read in the file.
This should always be safe since the model file picker will only allow
you to select existing files.
This will not allow the user to open the data extensions editor for a
database if it is not one of the supported languages. The supported
languages is a list of `string` rather than a list of `QueryLanguage`
because a database item's language is also a `string`.
There were still some places where we were hardcoding Java in the data
extension editor. This changes these places to use the database item
language instead.
This commit addresses various test flakiness:
1. Bump timeouts for queries tests
2. Add a dispose handler to queryserver-client. This will help us during
tests because if there is a test that times out while a query is
running, the query's progress callback won't be invoked. We will
still get a timeout error in the first test, but the second test will
not get a spurious error.
3. Handle a disposed query server in `deregisterDatabase`. This method
will remove the database from the currently running query server.
If there is no query server, then there is nothing to remove. So,
this error is safe to ignore.
4. Explicitly `end()` a connection `ServerProcess`. I'm not 100% sure if
this is necessary, but it seems like it prevents responses from being
handled and erroring out.
5. Better handling of ideServer restarts. Previously, if you quickly
called `CodeQL: Restart Query Server` twice in a row, you would get
an error from the ideServer restart. Restart fails if the server is
not already started. So, in this case just call a start.
We've made an exception to fetch the parent folder when we're
in the vscode-codeql-starter workspace.
We'd like to make this more specific so that it doesn't interfere
with other repos.
When running Create Query in the codespaces-codeql repo, it successfully
creates codeql-custom-queries-xxx as a subfolder of the first workspace
folder, and then adds a database. After the database gets added, we get
prompted with this message:
```
We've noticed you don't have a CodeQL pack available to analyze this
database. Can we set up a query pack for you?
```
which would try to create another QL pack.
Since we're no longer pushing QL packs as top level folders in the
workspace when we use the new "Create Query" flow, we also need to adapt
the original flow to take into account subfolders.
Just as a reminder, the original flow is:
- Be in the codespace template
- Download a database from GitHub
- The extension will offer to create a QL pack for the database
The new flow:
- Run the "Create Query" command
- Choose a language
- Create a QL pack
- Download a database for it
In the new flow the last step of downloading a database would trigger
the extension to offer to create a QL pack.
Let's fix this by detecting subfolders as well and exiting early.
This sets a default timeout of 3 minutes on CLI integration tests. This
is because these tests call into the CLI and execute queries, so these
are expected to take a lot longer than the default 5 seconds. This
allows us to remove all the individual `jest.setTimeout` calls with
different values from the test files.
Set the CLI version in the telemetry listener whenever the version
changes.
A few things to note here:
1. In `CliServer::getVersion()`, avoid calling `supportsPerQueryEvalLog`
directly. This avoids a recursive call to `CliServer::getVersion()`.
Currently, it's always safe to do this, but I thought that it would
be good to avoid recursion here in case we change things in the
future.
2. Now, we are sending the CLI version with all telemetry events.
This adds the external API query text to the extension directly to avoid
users having to copy the query to their local `codeql` submodule or
having to checkout a specific branch.
This is a temporary solution until the queries are stabilized. Once they
are, we will upstream these to `github/codeql` and use them like other
contextual queries.
Since this is just a temporary solution, this is not the prettiest code
and is not intended to be a long-term solution. It does not do proper
caching and will create a new temporary directory for every query run.
The performance hit of this is acceptable and expected.
When we create a skeleton query, we check whether you already have an
existing database with the same name (e.g. `github/codeql`). If we can't
find one, we also check for an existing database with the same language.
If we find one, we select it instead of downloading a new database.
Here we're filtering out databases with errors.
In the original flow for creating skeleton packs, we were starting out
by choosing a database (e.g. github/codeql) and having the extension
create the QL pack for us.
At that point, we were storing the QL pack together with the database in
the extension storage because we weren't interested in committing it to
the repo.
This means we weren't able to see it in the file explorer so in order to
make it visible, we decided to push it as a top-level folder in the
workspace.
Hindsight is 20/20.
Let's change this original flow by just creating the folder in the
workspace storage instead of the extension storage (which will make it
visible in the file explorer) and stop pushing it as an extra top level
folder in the workspace.
NB: For this flow, we exit early in the `createSkeletonPacks` method if
the folder already exists so we don't need to check this again.
This adds the CLI version to telemetry command-usage events.
Note that the CLI server is created after the telemetry listener is
created. The first few telemetry events may have a "not-set" value for
the CLI version.
This changes the kind input from a text field to a dropdown in the data
extensions editor. The supported values for each extensible predicate
are based on what is currently in-scope for the documentation. Other
kinds are not supported.
The supported kinds are now stored on the
`extensiblePredicateDefinitions` to make it easier to add new kinds in
the future.
We now use `fsPath` instead of `path`.
Note: I haven't yet fixed the tests, nor checked manually on mac/linux
Tangential change: we now use the `dirname` method, instead of manually splitting paths to get a parent folder.
The `getFirstStoragePath()` method would break on windows:
```
Path contains invalid characters: /c:/git-repo/codespaces-codeql (codeQL.createSkeletonQuery)
```
This makes sense, since we're looking to get the parent folder by splitting for `/`.
In windows, paths use `\` instead of `/`.
So let's detect the platform and add a test for this case.
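A sketch of the platform-safe version (function name hypothetical):
```typescript
import { dirname } from "path";
import { Uri } from "vscode";

// Let Node's `dirname` handle platform-specific separators on `fsPath`,
// instead of splitting the path on "/" ourselves.
function getParentFolder(uri: Uri): string {
  return dirname(uri.fsPath);
}
```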
To be consistent with other database item search methods, we're renaming
ours:
`digForDatabaseItem` -> `findDatabaseItemByNwo`
and
`digForDatabaseItemSameLanguage` -> `findDatabaseItemByLanguage`
At the moment, we're always deciding which database to download for the
user for an example query.
We'd like to give them a chance to change the database, so here we're
adding a step where we're showing the user a selection box with the
suggested database pre-filled.
They can choose to type in a different database before continuing the
skeleton generation process.
We'd like to select an existing database for our query, if one is
already downloaded and matches the query language.
Previously we were re-using the database if the language and name
matched (e.g. the name would be `github/codeql`).
When we try to determine the next file name for our example query,
we only look at `example<n>.ql` files.
e.g. if the files in the folder are:
- `example.ql`
- `example2.ql`
- `MyQuery.ql`
we will create an `example3.ql` file.
Previously we were counting all existing `.ql` files.
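A sketch of the numbering heuristic (the helper name is hypothetical):
```typescript
// Only `example<n>.ql` files count toward the next index; unrelated queries
// such as `MyQuery.ql` are ignored.
function nextExampleFileName(existingFiles: string[]): string {
  const indices = existingFiles
    .map((name) => /^example(\d*)\.ql$/.exec(name))
    .filter((match): match is RegExpExecArray => match !== null)
    .map((match) => (match[1] === "" ? 1 : parseInt(match[1], 10)));
  const next = indices.length === 0 ? 1 : Math.max(...indices) + 1;
  return next === 1 ? "example.ql" : `example${next}.ql`;
}
```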
We've now added more tests and pushed the total duration over 5 seconds
for all the tests in this file.
This limitation seems to be a recent development where files with tests
that last longer than 5 seconds start failing in jest.
We're bumping the timeout limit to 40 seconds for now.
We initially defined the default database to download as one from the
`github/codeql` repo as it was convenient.
However, this repo doesn't have a lot of vulnerabilities to discover.
Let's use repos that are in our MRVA top 10 list to allow users to
write more interesting queries.
We set up the "Create Query" command with the assumption that
the first folder in the workspace is the parent folder.
This is true for the `codespaces-codeql` repo where we expect
to use this command.
However, for the `vscode-codeql-starter` repo, the top level
folders are QL packs:
- codeql-custom-queries-cpp
- codeql-custom-queries-ruby
... etc.
In order to make the command work for people using the starter
repo, we'll need to introduce a check for these QL packs when
we decide the storage path.
The end goal is to replace the starter workspace completely
with the codespaces-codeql repo, so this code can be removed
in the future when we retire the repo.
Until then, the command will need this to be able to work in
both starter workspaces.
We offer `github/codeql` as a repo to use for downloading databases
for our skeleton pack.
Once the repo is specified, the user is prompted to choose a language.
At this point, we already know what language the user wants, so let's
change the `downloadGitHubDatabase` and `convertGithubNwoToDatabaseUrl`
methods to accept a language parameter.
We check if the language is in the list of languages received in the
response. If it isn't, we still prompt the user.
This will be triggered by a "Create Query" command.
It will:
- prompt the user for a language
- create a skeleton pack based on the language chosen
- download a database for the QL pack
- open the new query file
If the skeleton pack already exists, we just create a new query file
in the existing folder.
If the database is already downloaded, we just re-use it.
We introduced this QlPackGenerator a while ago. It always creates an `example.ql` query file as part of the skeleton pack.
We'd like to set the name of the query file, since we'll allow the user to create queries multiple times in the same skeleton pack folder.
The folder will be named `codeql-custom-queries-${language}` and will first receive an `example.ql` file.
If the user then tries to create a new query for the same language, we'll just create an `example2.ql`, `example3.ql` etc. file in the existing folder.
We'll use this to check whether a database for our ql pack already exists.
While there are other methods that search for a database item by URI, we
only have a language chosen by the user and an nwo ("github/codeql").
So let's introduce a way to search for the db based on the information we
have.
We plan to ask the user to choose a language, before attempting
to download a corresponding database for them.
The functionality already exists, so let's re-use it.
This was nested in a method that included prompting the user for a
github repo.
We'd like to re-use this to download a database of our choice from
GitHub, based on which language a user chooses.
This will allow a user to create a new model file in an existing data
extension when opening the data extension editor. There is some
validation on the name of the model file, which depends on reading in
the qlpack.
This adds a pickable model filename from an existing extension pack to
the data extensions editor. This allows the user to edit one of their
existing data extensions. This does not yet add the ability to create
new extension packs and/or new model files.
This uses the `codeql resolve extensions` command to get the list of
available model files. This should be available in all CLI versions
which the data extensions editor supports.
The data extension editor was only using the default data extensions
found in the `ql` submodule to find external API calls. This will add
support for using data extensions found in the workspace.
Rather than using the `codeQL.runningQueries.useExtensionPacks` setting,
this will always include data extensions since the editor doesn't make
sense to use without data extensions. We will also forbid the user from
opening this view unless they are using a CLI which supports data
extension packs.
We were using a single-use class for generating the flow model, while we
are actually able to do it using two functions. This is more in line
with our existing codebase.
This fixes the "Webview is disposed" error which occurs when the user
closes the variant analysis webview while the extension is still
activating. We will now check whether the webview is disposed before
restoring the view.
This was pointed out by CodeQL: when calling `setState` and using
`this.props`, it may not be up-to-date because `setState` may run
concurrently. Therefore, we should use the `setState` callback variant
to ensure we get the latest props.
This refactors the code a bit to ensure we're not using `this.props`
anywhere, including in the `getResultSets` function which is called
in the `setState` callback.
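A minimal sketch of the pattern (the props/state shapes here are hypothetical):
```typescript
import { Component } from "react";

interface Props {
  readonly resultsInfo: string | null;
}

interface State {
  readonly selectedResultSet: string | null;
}

class ResultsApp extends Component<Props, State> {
  state: State = { selectedResultSet: null };

  updateSelection(): void {
    // The updater callback receives the latest props, so we never read a
    // stale `this.props` during a batched update.
    this.setState((_prevState, props) => ({
      selectedResultSet: props.resultsInfo,
    }));
  }

  render() {
    return null;
  }
}
```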
When updating to React 18, we removed the loading step from the updating
of the state of the result view since React would batch the updates
anyway. However, this caused a bug where the result view would be empty
when switching between queries. This is because the result view would
retain the old selected result set name. This would not happen
previously because React would re-render the view at least once, which
would cause the result view to be unmounted and re-created.
This fixes it by resetting the selected result set if we can't find the
result set in the new result sets.
The default Storybook Babel config did not recognize the `public`
keyword in our custom errors (e.g. `ExhaustivityCheckingError`
and `RedactableError`). To fix this, we can use Storybook's V7 mode to
supply a custom Babel config. This fixes the compilation error.
See: https://storybook.js.org/docs/react/configure/babel
This adds tests for the external API query and retrieving of results. It
does not use the "real" CLI integration, but instead mocks the CLI
server and query runner.
To make mocking easier and require less type casting, I've narrowed some
of the arguments of some other functions. They now use `Pick` to only
require the properties they need.
The version of ts-jest we were using gave a warning that it did
not support TypeScript 5, even though it was working fine. The latest
version of ts-jest adds official support for TypeScript 5 and removes
the warning.
This loads in the existing data extension YAML file for the selected
database. It only supports the filename we save it to, and will not load
it from any other data extension YAML files.
This adds the ability to save the modeled methods in the data extensions
editor to a YAML file named after the database name. It will save it to
the `ql` submodule for now. Support for data extension packs will be
added later.
This updates the view of the data extensions editor to show a table of
possible sources/sinks/flow summaries that can be edited. It's not yet
possible to save the changes or load the existing file.
This upgrades Storybook to an alpha version to ensure we can start
Storybook when using TypeScript 5.0. This is temporary until we can
upgrade to Storybook 7 or to a released version of 6.5.17.
This adds a really simple regression unit test for the results view
which checks that the results view can render a SARIF file. This is
in preparation for the upgrade to React 18 to ensure that we don't
break the basic functionality of the results view.
This will improve the error message shown when monitoring fails. Instead
of showing "Error while monitoring variant analysis: Not Found", this
will now show "Error while monitoring variant analysis "Empty block
(javascript) [29/3/2023 10:45:10]: Not Found". This should make it
easier for the user to figure out which query history item is
problematic.
We're not using the full query history item label here because that
would require access to the query history item, which we don't yet have
here. Adding it here would add a dependency on the query history, which
seems undesirable.
We were ignoring errors coming from `vsce publish` and this was causing
the workflow to succeed even when the publish failed. This will remove
the `||` and let the workflow fail if the publish fails.
The `glob` package now uses promises in version 9, so we don't need the
separate `glob-promise` package anymore.
This also updates one call site to use `cwd` instead of `join` to
avoid possible issues due to a breaking change in version 8 which treats
Windows path separators differently. By changing the `cwd`, we should
not run into these issues.
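As a rough sketch of the new call shape (the pattern and helper name here
are illustrative, not the extension's actual call site):
```typescript
// glob v9 is promise-based, and passing the base directory via `cwd` keeps
// platform-specific separators out of the pattern itself.
import { glob } from "glob";

async function findQlFiles(baseDir: string): Promise<string[]> {
  // The pattern stays POSIX-style; only `cwd` carries the real path.
  return glob("**/*.ql", { cwd: baseDir });
}
```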
Removing the `resolutions` of `glob-parent` did not change the
`package-lock.json`, so this does not have any effect on the package
version we are using.
We introduced this change to help with reducing flakiness in CI [1].
This has a slightly different effect locally, where every failed test's
output is printed three times.
This in turn makes it harder to read, especially when you have multiple
failing tests.
Since the original intent for this behaviour was to be used in CI, I'm
proposing we disable it when the CI env variable isn't set.
I've opted to set it for all jobs involving tests, just for consistency.
I'm happy to limit it to just the places where it's required.
[1]: https://github.com/github/vscode-codeql/pull/2059
This adds support for detecting the `CommandManager.execute` method in
the unique command use query.
This may not be the best way to implement this. There's a method
`hasUnderlyingType` on `this.getReceiver().getType()`, but I couldn't
really figure out how to get it to recognize `CommandManager`. It might
be possible if we can construct the type of `CommandManager`, but this
will probably include the filepath to the `CommandManager` class, which
might not necessarily be something we want: moving the `CommandManager` class
should not require updating the query. I'm very happy to hear other
suggestions.
The `commandRunner` name doesn't really make sense since it doesn't
"run" a command, but rather registers a command. This renames it
to `registerCommandWithErrorHandling` and moves it to the
`common/vscode` directory.
The `codeQL.checkForUpdatesToCLI` command is registered pre-activation,
and we don't really want to create the command manager before
activation, so this will just add the correct type without registering
it using the command manager.
The command manager types didn't fully support commands defined with
`Partial` because it deduced that the command function was `undefined`
when the function was not defined. However, if the command is not
present, the command registration will not be called. This fixes the
types by specifying that the command definition will never be
`undefined`.
This class seems to have been introduced at some point to reduce the
dependency on VS Code from the test UI service. However, none of its
methods are being used anymore, and by using typed commands we have
already reduced the dependency on VS Code. Therefore, we can simply
remove this class.
We attempted to specify exactly which URI we're expecting here.
However, `Uri.parse` behaves differently in the test than it does in
the code, so we've inadvertently created a flaky test [1]. The URI we
generate in the test has a `scheme: 'c'` while the one in the code has
a `scheme: 'C'` property.
This only happens on Windows, not Ubuntu.
Let's narrow the comparison to just the path of the URI.
[1]: https://github.com/github/vscode-codeql/actions/runs/4478429334/jobs/7871178529#step:7:231
Some of the methods in the `DatabaseUI` were public because they were
used in the `extension.ts` file. We have moved these method calls into
this file, so they do not need to be public anymore. We can also get rid
of the separation between some of these methods, so I've moved them into
the function that calls them.
Now that we fixed our expectation in the previous commit, we could see we
were stubbing this to false instead of true.
So now the test is checking the right scenario.
Our expectation was quite narrow: we expect to not call an `openFolder`
command. We didn't specify any params for it, which might mean this
expectation wasn't working like it should.
Let's just check that `executeCommand` isn't called at all.
The local query commands are using a separate logger, and this is not
supported by the command manager because it is quite specific to this
extension. Therefore, we create a separate command manager which uses
a different logger to separate the commands.
We're running this at the extension start-up. We don't want it to block the extension
from completing activation, so let's swallow any errors from the code tour and output
them, instead of letting this affect the rest of the extension activation.
When opening https://github.com/github/codespaces-codeql/ in a
codespace, it's easy to miss the prompt that tells you to open the
tutorial.code-workspace file.
In fact people actively dismiss the alert to get it out of the way.
If you miss that prompt, you end up with a single-rooted workspace,
which causes various other problems.
While there is an open issue to allow VS Code to open a default
workspace [1], there doesn't seem to have been any progress on it
in the last two years.
So we're taking matters into our own hands and forcing the extension
to open the tutorial workspace, if it detects it.
This will only happen if the following three conditions are met:
- the .tours folder exists
- the tutorial.code-workspace file exists
- the CODESPACES_TEMPLATE setting hasn't been set
NB: the `CODESPACES_TEMPLATE` setting can only be found if the
tutorial.code-workspace has already been opened. So it's a good
indicator that we're in the folder, but the user has ignored the prompt.
[1]: https://github.com/microsoft/vscode-remote-release/issues/3665
The local databases UI was essentially the only class which was defining
methods using assignment to a class property rather than using function
definitions and binding them. This switches it to use function
definitions and binding, which is more consistent with the rest of the
codebase.
This change allows the `codeQL.runningQueries.useExtensionPacks`
setting to be respected when running variant analysis queries. When
set to `all`, before uploading the generated query pack, all extension
packs in the workspace will be injected as dependencies into the qlpack
file.
In the results view, `setState` was used to set some state, and then
`loadResults` was called to set some other state. However, `setState`
is asynchronous, so the `this.state` in `loadResults` was not the state
that was set before the call.
This commit fixes it by combining the two `setState` calls into one. The
logic was hard to follow, so I'm not sure if this is the correct fix.
The `shouldKeepOldResultsWhileRendering` state seems to be unnecessary
now since everything is being done in the `setState` call, but I'm not
sure about that.
The version of `@primer/react` we were using didn't have the correct
types for the `ThemeProvider`. By upgrading these packages to their
latest versions, all types are correct.
This enables React strict mode which will print extra warnings to the
console when we use certain constructs incorrectly. This does not affect
production builds.
See https://beta.reactjs.org/reference/react/StrictMode
The extension Webpack config does not use `babel-loader`, so we can
remove it as a direct dependency. `babel-loader` is still included in
our `node_modules` because Storybook depends on it, but the version is
now completely managed by Storybook rather than us.
This will make the progress options passed to `withProgress` optional by
moving them to be the second argument and setting a default value for the
`location`. This will make it much easier to use from a variety of
commands.
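A sketch of the intended shape (the names and the progress callback type
are illustrative, not the exact extension API):
```typescript
// The task comes first and the options default the location, so most call
// sites can simply omit them.
import { ProgressLocation, ProgressOptions, window } from "vscode";

type ProgressCallback = (update: { message?: string; increment?: number }) => void;

async function withProgress<R>(
  task: (progress: ProgressCallback) => Promise<R>,
  options: ProgressOptions = { location: ProgressLocation.Notification },
): Promise<R> {
  return window.withProgress(options, (progress) =>
    task((update) => progress.report(update)),
  );
}
```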
This removes the `args` from the `ProgressTask` passed to
`withProgress`. The `args` is only used by the
`commandRunnerWithProgress` and can easily be replaced by an anonymous
function that passes the `args` instead. This will simplify the
`ProgressTask` interface and make it easier to use.
The `commandRunnerWithProgress` implementation isn't actually any
different from `commandRunner`, except for the call to `withProgress`
and support for an `outputLogger` argument. Therefore, this will simply
make `commandRunnerWithProgress` a wrapper around `commandRunner`,
removing quite some duplication in the process.
If the user requests that extension packs be included in their MRVA run,
then do the following:
1. Search the workspace for all extension packs
2. Add each extension pack as an explicit and direct dependency on
the generated pack.
It is ok to use `*` as a dependency since we are guaranteed that
exactly one version of each injected extension pack dependency is
available when the pack is being compiled.
If we find multiple paths to an extension pack of the same name, this
is an error since it is ambiguous which path to use.
This will add a check to ensure that `showAndLogExceptionWithTelemetry`
is not called when downloading packs. This expectation is placed before
the check for `showAndLogInformationMessage` so that when the test
fails, the error message will be shown.
We are no longer including our dependencies in the VSIX package, so we
can tell VSCE that we don't want it to look at dependencies using
`--no-dependencies`. If we do this, VSCE doesn't require the
`node_modules` directory anymore and we can skip that step, which will
make building significantly faster.
I've confirmed that there are no changes between the two options by
building the extension both without and with the change. This is the
diff of the two outputs (using `diff -r`):
```diff
diff --color -r vscode-codeql-1.8.0-dev.2023.3.8.15.10.13/extension/package.json vscode-codeql-old/extension/package.json
7c7
< "version": "1.8.0-dev.2023.3.8.15.10.13",
---
> "version": "1.8.0-dev.2023.3.8.15.6.51",
diff --color -r vscode-codeql-1.8.0-dev.2023.3.8.15.10.13/extension.vsixmanifest vscode-codeql-old/extension.vsixmanifest
4c4
< <Identity Language="en-US" Id="vscode-codeql" Version="1.8.0-dev.2023.3.8.15.10.13" Publisher="GitHub" />
---
> <Identity Language="en-US" Id="vscode-codeql" Version="1.8.0-dev.2023.3.8.15.6.51" Publisher="GitHub" />
```
The only difference is the version number, which is expected.
When deleting a query history item and the "next" query is still
running, the `completedQuery` is `undefined`. This commit fixes it by
using optional chaining to ensure that the `completedQuery` is defined
before accessing its `successful` property.
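A minimal illustration of the guard (the property names match the
description above; the surrounding types are assumed):
```typescript
// `completedQuery` is undefined while the query is still running, so optional
// chaining avoids reading `successful` off undefined.
function isSuccessful(item: { completedQuery?: { successful?: boolean } }): boolean {
  return item.completedQuery?.successful === true;
}
```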
This is definitely not a perfect solution since we're essentially just
moving the place where we're casting. However, because we have manually
made the types similar, this provides some type assurances where there
were none before. This also has the cast in only one place, which makes
it easier to find and fix in the future.
I was initially trying to understand why this method was failing due
to an unrelated error [1] so I ended up over-engineering the path
parsing.
We can use the path from the first workspace folder, like we do in
other places in the extension.
[1]: https://github.com/github/vscode-codeql/pull/2104
When a user goes through the Code Tour, we select a dummy `csv` database
for them to get them up and running.
Once they complete the code tour and would like to continue writing
queries, they will need to add their own database.
After they do that, we check the language of their new database and
generate a skeleton QL pack for them so that they don't need to create
these files by hand. See [1] for details.
This skeleton pack folder will be called
`codeql-custom-queries-<language>` and it comes with its own example
query: `example.ql`.
When we try to run this example query, the query gets confused about
which `dbscheme` to pick, as it sees a `qlpack.yml` file in the new
skeleton pack folder, as well as one in the existing `tutorial-lib`
folder.
So we'll need to get rid of the `tutorial-lib` folder in order to make room
for new queries to be run once the tour is complete.
This commit introduces a `handleTourDependencies` step which will
trigger a `codeql pack install` command in order to install real library
dependencies for `tutorial-queries`, since we no longer have the dummy
library in `tutorial-lib`.
Unfortunately `Object.defineProperty` doesn't work on proxies, so I've
added an options object to `mockedObject` which allows passing in
methods that will return a value for a specific property.
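A minimal sketch of the idea, with assumed names rather than the real test
helper:
```typescript
// Because Object.defineProperty cannot add getters to the proxy afterwards,
// property values are supplied up front and returned from the `get` trap.
interface MockedObjectOptions<T> {
  properties?: Partial<Record<keyof T, () => unknown>>;
}

function mockedObject<T extends object>(
  methods: Partial<T>,
  options: MockedObjectOptions<T> = {},
): T {
  return new Proxy({} as T, {
    get(_target, prop) {
      if (prop in methods) {
        return methods[prop as keyof T];
      }
      const property = options.properties?.[prop as keyof T];
      if (property !== undefined) {
        return property();
      }
      throw new Error(`${String(prop)} was not mocked`);
    },
  });
}
```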
In https://github.com/github/codespaces-codeql/pull/12 we moved
the source for our tutorial database into `.tours` in order to
avoid confusing the user when we load the database into the
workspace, since they'd see two databases.
Since this is just the source, we'd like to hide it.
We mark this query as `@kind problem`.
We'll need to change the query a bit to make it fit this type of ...
erm... kind.
This means the results view will be formatted to display the file name
next to each of the results.
We're also getting rid of any mentions of an empty block query since
that's no longer what it checks.
This will remove some instances where we're using `as unknown as T` and
replace them by a call to `mockedObject<T>()`. The `mockedObject`
function is a bit more explicit about what it does and has types which
ensure that the methods that are set on the object actually exist.
Unfortunately, we can't fully get rid of `as unknown as T` in the
`mockedObject` function. However, this construct is more localized and
does not need to be used in as many places. If we do enable an ESLint
rule to prevent the use of `as unknown as T`, I would feel comfortable
with disabling the rule for the `mockedObject` function.
This also removes the type for the `runTests` `options` argument since
it's only used in this definition and we don't actually use the type
anywhere else.
It seems that when we added the CSP policy to the webview, we did not
take into account that `d3-graphviz` uses `@hpcc-js/wasm` to load
Graphviz as a WASM module. This commit adds `'wasm-unsafe-eval'` to the
CSP policy to allow this.
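Roughly what the policy change looks like (the directive list is
illustrative, not the webview's exact policy):
```typescript
// Adding 'wasm-unsafe-eval' to script-src lets @hpcc-js/wasm instantiate
// Graphviz's WASM module under the CSP.
function buildContentSecurityPolicy(cspSource: string): string {
  return [
    `default-src 'none'`,
    `script-src ${cspSource} 'wasm-unsafe-eval'`,
    `style-src ${cspSource} 'unsafe-inline'`,
  ].join("; ");
}
```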
This is step 3 in the Code Tour. At this point we don't need to create
the skeleton pack so let's disable that functionality.
Co-authored-by: Shati Patel <shati-patel@github.com>
By default, this is added when we call `runJsonCodeQlCliCommandWithAuthentication`.
However, `codeql pack add` doesn't support this option so we need to turn it off.
Co-authored-by: Shati Patel <shati-patel@github.com>
On Windows, using `Uri.path` results in an extra folder in the path, as we can see in the tests:
```
ENOENT: no such file or directory, open 'D:\C:\Users\RUNNER~1\AppData\Local\Temp\tmp-4784XPDQPb5jM6IW\test-ql-pack-ruby\qlpack.yml'
```
Let's use `Uri.fsPath` instead.
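Illustrative of the difference (values are approximate):
```typescript
import { Uri } from "vscode";

const uri = Uri.file("C:\\Users\\me\\qlpack.yml");
// uri.path   -> "/C:/Users/me/qlpack.yml"  (joining this onto another
//               directory produces paths like "D:\C:\Users\...")
// uri.fsPath -> "C:\Users\me\qlpack.yml"   (the platform-specific path,
//               safe to hand to fs APIs)
```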
We were initially always removing the last folder in the workspace as
we assumed that would be the directory we use.
Now that we've switched to using a temporary directory, this is no longer
the case so we need to find the index of the directory in the list of
workspace folders and then use that index to remove the directory.
`Uri.parse` will not work with Windows paths as it will consider `C:\path`
to indicate a file scheme (the "C:" part) and will complain about it.
With `Uri.file` we can build the URI without hitting this complication.
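For example (illustrative path):
```typescript
import { Uri } from "vscode";

Uri.parse("C:\\repo\\qlpack.yml"); // parsed with scheme "c", not "file"
Uri.file("C:\\repo\\qlpack.yml");  // scheme "file", as intended
```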
We're checking that the skeleton QL pack doesn't exist as a workspace
folder, so we should be creating this folder in the workspace as well.
Initially this was being created in VSCode's local storage.
This will receive a folder name and language.
It will generate:
- a `codeql-pack.yml` file
- an `example.ql` file
- a `codeql-pack.lock.yml` file
It will also install dependencies listed in `codeql-pack.lock.yml`
file.
We were initially planning to call the `packInstall` command once
we generate `codeql-pack.yml` in order to install dependencies.
However, the `packAdd` command does this for us, as well as
generating a lock file.
Rather than trying to craft the lock file by hand, we're opting
to use the cli command.
NB: We're introducing a new `QueryLanguage` type which is identical
to the `VariantAnalysisQueryLanguage`. In a subsequent PR we'll
unify these two types.
Similar to what we do with `codeql pack install`.
This will simulate us running `codeql pack add codeql/<language>-all`.
We're going to need this in order to:
- generate a lock file (`codeql-pack.lock.yml`)
- install the correct packages for our skeleton QL pack based on the
lock file.
This removes the remote queries history item as a supported history
item. This allows us to delete almost all code related to remote
queries except for the React view code which will be removed separately.
In the query serialization code, we now ignore remote queries.
This would run the unit, view, integration and CLI integration tests in
parallel, which would cause problems with multiple VSCode instances and
use a lot of memory.
- Add a new config that toggles between using all data extensions or
none.
- If using all data extensions, then before a query evaluation, run a
`codeql resolve qlpacks` command with the new `--kind` option. This
will return a list of extension packs in the workspace.
- Pass these packs to the CLI before evaluation.
- This will only work with CLI v2.12.3 (not yet released) or later.
- Also include some CLI tests to ensure this works.
This will add support for the `codeql-pack.yml` filename in all places
where we currently support `qlpack.yml`. It centralizes the list of the
supported filenames in a single place and a method that can figure out
the correct filename to use.
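A minimal sketch of that central place, with assumed names:
```typescript
import { pathExists } from "fs-extra";
import { join } from "path";

// One list of supported pack file names and one helper that decides which of
// them exists for a given pack directory.
const QLPACK_FILENAMES = ["qlpack.yml", "codeql-pack.yml"];

async function getQlPackFilePath(packDir: string): Promise<string | undefined> {
  for (const name of QLPACK_FILENAMES) {
    const candidate = join(packDir, name);
    if (await pathExists(candidate)) {
      return candidate;
    }
  }
  return undefined;
}
```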
This will download the sourcemaps from the release asset if it is
available. Unfortunately, I'm not able to test this yet because we don't
yet have any releases with sourcemaps attached. As a fallback, it will
still try to download from the workflow run.
Most of the warnings that are currently being shown in CI are coming
from VSCode. We can hide the VSCode output to make the CI logs more
readable. This should not influence the tests; the output of tests
(in particular using `console.log`/`console.error` etc.) will still be
shown.
See: 0c98dc12ad/packages/jest-runner-vscode/src/public-types.ts (L58-L68)
We'd like to add test coverage for the openDatabase function (which is
public).
At the moment, this relies on `resolveDatabaseContents` which is just
a standalone function.
This means we're unable to mock it using Jest.
So let's move it into its own class.
This method in turn depends on a `resolveDatabase` function, which we've
also moved into the new class.
The only usages I could find for these functions were from within
the `databases.ts` file.
This is unrelated to the changes in this PR but it's causing CI to fail.
```
config listeners › CliConfigListener › should listen for changes to 'codeQL.runningTests.numberOfThreads'
expect(jest.fn()).toHaveBeenCalledTimes(expected)
Expected number of calls: 1
Received number of calls: 2
109 | const newValue = listener[setting.property as keyof typeof listener];
110 | expect(newValue).toEqual(setting.values[1]);
> 111 | expect(onDidChangeConfiguration).toHaveBeenCalledTimes(1);
| ^
112 | });
113 | });
114 | });
```
We don't need to check that the callback is triggered a certain number of times, just that it works
so we can change this test to be more permissive.
We'd like to make it easier for a user going through the CodeQL Tour to
write their queries.
To help them along, we can generate skeleton QL packs once we know which
database they're using, instead of expecting them to know how to create
this themselves.
We're then able to download the necessary dependencies for their CodeQL
queries.
This checks that we're running the CodeTour by looking for the
`codeQL.codespacesTemplate` setting.
This will upload the sourcemaps produced as part of the release as a
release asset. This allows the sourcemaps to be downloaded and used
for decoding stack traces beyond the 90 day artifact limit of GitHub
Actions.
We run `npm run lint` every time we do a `git push`.
This takes quite a long time, and the lint command has already been run
when we created the commit in the first place.
Could we instead skip this and rely on CI to tell us if we've failed
to address a linting issue?
It seems like the `onDidChangeConfiguration` is being called multiple
times. It doesn't actually matter that it's being called twice, so we
just need to ensure it's called at least once.
This is blocking us from merging new PRs so while we figure out
how to fix them, let's skip the tests that are failing on our
`main` branch.
For full context: the tests started failing when a new version of
VSCode was released (1.75.0).
This adds support for mapping full stacktraces in the source map
script. This allows you to pass a full stacktrace to the script and get
back a stacktrace with all original positions.
This adds a script that can be used for retrieving the original source
location when given a location in the released extension. It will
download the source map from the Actions workflow run of the release and
use the `source-map` library to extract the original location.
In our tests, we were writing settings files to disk because we were
using the VSCode configuration API which writes settings to files. This
results in flaky tests because concurrency can cause the VSCode API to
misbehave.
This will switch the tests to use a mocked API by default. For some
tests the real implementation is used, but the large majority of tests
is now using a mocked version which only keeps track of the
configuration in memory. This makes it easier to reset the state between
tests since we can just empty out the in-memory configuration.
We have a codespace template which houses our CodeQL tour:
https://github.com/github/codespaces-codeql
This contains a repo with a default database already loaded
for the user so that they can start writing queries more quickly.
At the moment we're asking the user to manually right click on
the database folder ('codeql-tutorial-database') and set it as
the current database.
We can take this one step further by defining a command that gets
triggered when we arrive at the step for setting up the database.
The command ("codeQL.setDefaultTourDatabase") will build the URI
pointing to our preloaded database and set it as the current one.
We initially considered whether we can re-use the setCurrentDatabase
command and pass the URI of the database from the codespace itself,
but the URI would be hardcoded as:
```
file://0-62/workspaces/codespaces-codeql/codeql-tutorial-database
```
as we can only pass the codeTour extension a command and string
parameters.
This would have been brittle as the filepath for a codespace might
change in the future.
Instead we can define a custom tour command ("setDefaultTourDatabase")
to look at the current workspace folder and build the path to the
database in the CodeQL extension.
Co-authored-by: Shati Patel <shati-patel@github.com>
Instead of using the third party `peter-evans/create-pull-request`
action for creating a PR for releases, we can use the already present
`./.github/actions/create-pr` action which is also used for the PR for
bumping CLI versions.
Instead of having different ESLint configuration files in each
directory which don't seem to inherit the configuration correctly, this
will add `overrides` in the root file.
This will add a welcome view to the database panel which is shown when
the controller repository is not setup. This welcome view will show a
button which can be used to set up the controller repository.
3 seconds is a really long time to wait for downloads since a
significant percentage of downloads will complete within 3 seconds. This
changes the update delay to 500ms which should still give us good
performance, but also make the download feel more responsive.
This removes all CodeQL CLI version constraints for unsupported CLI
versions (< 2.7.6). The oldest supported CLI version is 2.7.6 since GHES
3.3 recommends using CodeQL CLI 2.7.6.
Tests whether we choose "Yes" / "No" in the new modal window.
"Yes" -> remove the item and show you a toast notification
"No" -> don't remove item
This only shows up for in progress items.
We now have special behaviour for removing an "in progress" query so the
tests will be different.
Let's have a separate section for "in progress" queries. We'll add extra
behaviour testing in the next commit.
When you attempt to delete a query that's still in progress from your
query history, you get a prompt asking if you're sure.
If you pick "Yes", the item is removed and you see a toast notification
with a link towards the GitHub Action.
This will supply the GitHub access token to certain CodeQL CLI commands
such that private packages can be resolved. It will only do so if the
user has an existing auth session. If they don't, they will now get a
prompt to login. However, this will only happen for commands which
actually use authentication, which is limited to packaging commands.
This will set the `mode` of Webpack to `production` for release builds.
It will also stop inlining the sourcemap and instead produce a separate
file which is excluded by `.vscodeignore`.
In terms of the bundled extension, this will add 1 file
(`out/webview.js.LICENSE.txt`). It decreases the size of the VSIX file
from 4.28MB to 1.77MB.
VSCode was not able to find the original source of the bundled
extension because it was looking for the source in the `out` directory.
By setting the `sourceRoot` to the `extensions/ql-vscode` directory
which is located at `..` from the `out` directory, VSCode is able to
find the original source and breakpoints are hit.
This will copy the WASM file from source-map to the output directory.
This makes the source-map package work. See the comment in the code for
more details.
This bundles the extension files using esbuild, as recommended by
VSCode. This reduces the size of the extension from 34MB to less than
5MB.
Gulp will still run TypeScript to check types, but will not use the
TypeScript compiler output in the bundle.
Tests are now run separately, outside of Gulp, so their data doesn't
need to be copied anymore.
See: https://code.visualstudio.com/api/working-with-extensions/bundling-extension
Some checkouts of the github/codeql repo, such as the
internal submodule, may be named `ql` rather than
`codeql`. Allow this folder name when running tests.
I am removing these assertions so that our internal integration tests
can pass. They are currently failing because the number of dependencies
of the `codeql/javascript-all` pack has changed. It no longer makes
sense to test this value as newer versions of this pack will have more
dependencies and we expect this value will continue to go up.
This was initially added [here][1] but wasn't quite in the right place
to have the intended effect.
Let's move it up to the root of the project.
[1]: f515663640
This moves our existing test plan under a "Required testing" section.
We're also adding the scenarios used for testing live results under an "Optional testing" section.
I believe this doesn't change the user-visible behaviour at all. The user
won't be prompted to log in any more or less often than they would have
done before.
One benefit of this is that we can remove the registerListeners method
because we no longer need to know if the cached octokit is still valid.
Instead we just call vscode.authentication.getSession every time and it
will return the current session, which might be different from the last
time we called it. This might prompt the user to log in, but that would
have happened anyway because when the session changed we would have
overwritten our cached octokit instance.
Another benefit is that we no longer need the extension context and this
removed a surprisingly large amount of code where we are passing this
parameter around because we need it for the credentials.
The only downside I can see is that we call getSession more often and
create more javascript objects in general. I believe the performance
impact of this will be negligible and not worth worrying about.
I argue that calling createOctokit(false) adds no benefit. If an
authenticated session already exists then this silently creates an
octokit, which makes getOctokit() a no-op just returning the field.
However if there is already an authenticated session then getOctokit()
would already be able to create an octokit without prompting the user.
On the other hand if there isn't an authenticated session then we
won't be able to pre-populate an octokit, so getOctokit() will have
to prompt the user anyway.
Not calling createOctokit(false) in registerListeners also doesn't
change behaviour. If the user is authenticated in the new session then
we would be able to create an octokit instance without prompting in
getOctokit anyway. If the user is not authenticated in the new session
then we won't be able to create an instance without prompting either way.
The only benefit I can think of is that it moves a tiny amount of
computation earlier in the pipeline, but the amount of computation is
tiny and it isn't any more async than it would be if it happened in
getOctokit(). I don't think this is worth making the code more complex.
This was only used from initializeWithToken and only added a completely
separate case to the start of the method, effectively turning it into
two separate implementations. Therefore we can make things simpler by
inlining this case in the one place it is used.
It is true by default and no place in the codebase sets it to false. We can
simplify the code by removing this case we aren't using. If we want this
behaviour in the future we can always implement it again, but I think it's
likely to be unnecessary and if you don't want authenticated requests then
you likely won't be initializing a Credentials object.
This cannot happen already, even before the other changes in this PR.
The Credentials.initialize method can never return undefined, so these
checks would never return true. The real place that checks that we are
authenticated is in the vscode.authentication.getSession method, and
it will reject the promise if the user declines to authenticate or
anything else means we can't get an authenticated session.
I feel justified in removing the tests for these cases because the
code was never actually called in production, and we are covered by the
vscode authentication library rejecting the promise. Any exception
thrown from Credentials.initialize would behave the same as the one I'm
deleting.
This will sort the files in an exported Gist by the user-defined sort
order. It does so by prefixing the files with `result-{index}-` where
the `index` is the 1-based index of the repository in the sort order.
It will automatically pad the index with leading zeros to ensure that
the files are sorted in the correct order.
Unfortunately, we can't just use `{index}-` because numbers sort before
the `_` character, which is used in the summary filename to place it
first.
There are also some changes in how we determine which repositories to
export since we need to know in advance how many zeroes we need to pad
the index with. There should be no functional changes in which
repositories are actually exported.
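An illustrative sketch of the naming scheme (the exact filename format is
an assumption):
```typescript
// Pad the 1-based index so the Gist's lexicographic ordering matches the
// user-defined sort order; the summary file (whose name starts with "_")
// still sorts first.
function exportedFileName(index: number, repoCount: number, nwo: string): string {
  const width = repoCount.toString().length;
  const paddedIndex = (index + 1).toString().padStart(width, "0");
  return `result-${paddedIndex}-${nwo.replace("/", "-")}.md`;
}

exportedFileName(0, 120, "github/vscode-codeql"); // "result-001-github-vscode-codeql.md"
```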
The `tsconfig.json` inside `gulpfile.ts` needs to match the root
`tsconfig.json`, so by making it extend the root `tsconfig.json` and
changing just the options which decide which files are included, we can
remove a lot of duplication.
Instead of deleting the complete `_VSCODE_NODE_MODULES` object, we now
use a `Proxy` to intercept the `_isMockFunction` property. This is safer
and will not delete a global variable that VSCode expects to exist.
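A minimal sketch of the approach, assuming the global is reachable like
this (the real code lives in the Jest environment setup):
```typescript
// Wrap VS Code's lazy module loader so Jest's `_isMockFunction` probe returns
// undefined instead of triggering a module load.
const globalAny = globalThis as Record<string, any>;
const vscodeNodeModules = globalAny._VSCODE_NODE_MODULES;

globalAny._VSCODE_NODE_MODULES = new Proxy(vscodeNodeModules, {
  get(target, prop, receiver) {
    if (prop === "_isMockFunction") {
      // Jest only wants to know whether this global is a mock; don't forward
      // the lookup to the underlying require-backed proxy.
      return undefined;
    }
    return Reflect.get(target, prop, receiver);
  },
});
```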
This fixes the tests on VSCode 1.74.0. The issue is as follows:
1. Jest will try to reset all mocks after each test, as it should.
2. When Jest does this, it will loop over all global variables and check
if they are mocks.
3. One of the global variables it checks is `_VSCODE_NODE_MODULES`, which
is a proxy object.
4. When Jest checks whether it is a mock by getting `_isMockFunction` on
it, the `get` function on the proxy object will be called.
5. This will in turn call require, which will try to load
the non-existing `_isMockFunction` module. This throws the error we are
seeing.
By removing the `_VSCODE_NODE_MODULES` property from the global object
in the Jest environment, Jest will not try to reset it, and the tests
should work again.
See: 41bf230089/packages/jest-runtime/src/index.ts (L1173-L1186)
See: ed442a9e99/src/bootstrap-amd.js (L15)
This commit adds a new step to the CI workflow that runs type checking
on all directories containing `tsconfig.json` files, using `find` and
`xargs`. Unfortunately, this does not work on Windows, so on Windows
it's not possible to run all of these type checks locally.
This adds the environment variables necessary for running the date test
in all of these cases:
- When running the npm script outside of VSCode (using `cross-env`)
- When using the Jest Runner "Run" option (`terminal.integrated.env.*`)
- When using the Jest Runner "Debug" option
This integration test will check that the monitor will actually make
multiple requests to the API and that it will trigger a download
extension command for each repo that has finished scanning.
Unfortunately, one of the tests we have for local queries doesn't seem
to be working for variant analyses. I'm not sure why it isn't
working, but I think it's better to get the rest of the integration
tests in and then figure out what's going on with that one.
When rehydrating remote queries, we were awaiting the monitoring
command. Since this command may take minutes to hours to complete, it
seems like this would block the extension from loading. This is the same
issue as in https://github.com/github/vscode-codeql/pull/1698, but for
remote queries instead of variant analyses.
The repository selection was structured such that you would end up in the
`else` case if there was nothing selected, but this case would also be
used if for some other reason the selected item was not valid.
This restructures the conditions to first check whether the user
cancelled out of the operation and will silently return in that case. In
other cases where it cannot determine the repositories, it will now show
a user-visible error.
This will hide the "Analyzed" panel when there are no scanned repos and
it's completely empty.
When all three panels are empty, this will also hide the search bar and
filters, and will skip rendering anything for the panels.
This restructures the variant analysis manager tests to follow this
pattern:
- Class
- Method
- Context
- Context
- ...
- Test
Before, we were only using this pattern for some of the tests and this
made it confusing which method was being tested.
By splitting this off, it will also be easier to move some of these
tests out of the cli-integration tests and into the no-workspace or
minimal-workspace tests.
This will add a spinner to each repo row when the results for a
particular repo are loading. It will also disable the row to make clear
that it is loading and not clickable.
This adds a new filtering on SARIF code snippets for very large code
snippets (defined as 8MB or more). If less than 1% of such a snippet
is highlighted, it will not include the code snippet in the analysed
results, and it will thus not be shown in the UI.
This is to avoid very large SARIF files that can cause the extension
host to crash when the analysis results are sent to the UI. I don't
think any of these snippets would ever be useful to show, so it should
be fine to just not include them.
The extension doesn't actually use anything regarding the language of
variant analyses, so this just updates some types.
The actual Swift support is done in the CLI, which is also used for
determining which languages are actually supported. So, the environment
variable is already used by the CLI for showing supported languages.
We were using two different implementations for opening the query file
and query text between the query history and the results view. This
moves the better implementation in the view to a command and uses these
commands for opening the query text/file in the query history and view.
This results in consistent error messages and behaviour between the two
different views.
This will add error handling to the retrieval of variant analyses in the
monitor by catching the error. It will show a warning to the user and
log it. Then, it will simply sleep for 5 seconds and try again.
I'm not sure if we want to show all of these errors to the user since
this can result in many warnings popping up if many variant analyses are
being monitored, but this is probably something the user should be made
aware of.
This will add a progress notification to exporting results to give users
feedback about what's happening.
Unfortunately, we need to change some things in how we handle the
actions on completion notifications since we want the progress
notification to disappear when that notification shows. This results in
us having to remove the `await` on the
`showInformationMessageWithAction` calls.
The Gist title in the result export didn't take into account the actual
number of exported repositories, it only used the scanned, unfiltered,
repositories in the variant analysis. This switches it to use the actual
exported repositories for determining the result and repository counts.
This is somewhat more complicated than we'd expect it to be since the
results are being read in async, so we need to switch the order of
operations and store some additional information for being able to
compute this information. However, this also makes the code somewhat
easier to understand since the summary file is now being created in only
1 location, rather than being split between a method and a for-loop.
When deserializing a webview, it could happen that a view was already
manually opened by the user before the webview was deserialized. This
would result in duplicate webview tabs, which is not supported by the
manager.
This will close the webview that is being deserialized and focus on the
existing view. This should ensure that we never have duplicate loaded
webview tabs. There could still be duplicate webview tabs if there are
non-deserialized tabs, but once it is opened it should be closed
automatically.
This will disable the export and copy buttons when no results would be
exported by executing the command. In contrast to the "normal" filtering
in the view, this will also take into account the checkboxes since those
are also used in the extension host.
This will allow exporting results for a variant analysis which is
cancelled or in-progress. Repositories for which the results are not yet
available or which have not yet been downloaded will not be exported.
The header of the summary file is incorrect, but this will be fixed in
a follow-up PR.
During the build process, we now get errors pointing out that we import two different
items and call them `QueryServerClient`. One is actually the legacy one and has
since been superseded by the new one.
Let's fix this and appease the linter.
Full errors:
```
[gulp-typescript]
/home/runner/work/vscode-codeql/vscode-codeql/extensions/ql-vscode/src/extension.ts(88,9):
error TS2300: Duplicate identifier 'QueryServerClient'.
[gulp-typescript]
/home/runner/work/vscode-codeql/vscode-codeql/extensions/ql-vscode/src/extension.ts(89,9):
error TS2300: Duplicate identifier 'QueryServerClient'.
[gulp-typescript]
/home/runner/work/vscode-codeql/vscode-codeql/extensions/ql-vscode/src/extension.ts(1596,30):
error TS2345: Argument of type
'import("/home/runner/work/vscode-codeql/vscode-codeql/extensions/ql-vscode/src/legacy-query-server/queryserver-client").QueryServerClient'
is not assignable to parameter of type
'import("/home/runner/work/vscode-codeql/vscode-codeql/extensions/ql-vscode/src/query-server/queryserver-client").QueryServerClient'.
```
In the next commits we'll turn these rules on, one-by-one, and then
autofix the offenses.
At the end we'll be left with the rules that require manual attention.
This adds tests for the `loadResults` method of the variant analysis
results manager. It tests that SARIF results can be successfully
loaded and that the `onResultLoaded` event is fired.
The monitor return value was only used in tests, but we can also assert
the correct behavior using the calls it makes, rather than using the
result of the monitor.
The header color of a stat item was using the badge foreground color,
but badges can have a different background color than the editor. For
some themes, this would result in unreadable text. By using the editor
foreground color, the header should be readable in many more themes.
This adds four new VSCode themes to Storybook which will allow us to
more easily test these themes in Storybook. These themes were chosen
because they are either used for accessibility (the high contrast
themes) or are currently not compatible with the variant analysis UI
(there are items that are not visible).
When results were already cached in memory and the view requested the
result, it would not be loaded because the event would not be fired.
This fires the event when the result is loaded from cache as well, to
ensure that the view always receives the result.
This will stop the variant analysis monitor from monitoring when a
variant analysis is removed from the query history. Since the variant
analysis monitor cannot depend on the variant analysis manager (this
would create a circular dependency), a function is passed into the
variant analysis monitor for checking whether the variant analysis
should be cancelled.
This commit will also ensure that even if a variant analysis comes in
through the `onVariantAnalysisChange` callback, it won't be added to
the variant analysis map of the manager.
On Windows, we were showing the full path to the query, rather than just
the filename. This is because the `path` package being imported was
actually `path-browserify` which only claims support for POSIX. Since
Windows uses backslashes rather than forward slashes for paths, this
resulted in the full path being shown.
This creates a new `basename` function that works on both POSIX and
Windows by detecting whether a POSIX or Windows path is given. This
ensures that the correct path is shown on Windows, and will also ensure
that we show the correct path on Linux if the user has opened a variant
analysis that was originally created on Windows.
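A sketch of the idea (the extension's helper may differ in detail):
```typescript
// Split on both separator styles so Windows paths render correctly even
// though the bundled `path` implementation (path-browserify) only
// understands POSIX.
function basename(filePath: string): string {
  const parts = filePath.split(/[\\/]/).filter((part) => part.length > 0);
  return parts[parts.length - 1] ?? filePath;
}

basename("C:\\queries\\FindThing.ql"); // "FindThing.ql"
basename("/home/me/queries/FindThing.ql"); // "FindThing.ql"
```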
This will update the `jest-runner-vscode` patch to retry tests that fail
due to no test result being returned from the test runner.
This will also add some retries to the `minimal-workspace` and
`no-workspace` tests to help with flakiness.
Multiple VSCode instances were being launched when a second instance of
VSCode was being spawned with the same user data directory. This is
probably because VSCode restores the windows from the previous session,
even when `-n`/`--new-window` is passed.
This fixes it by patching `jest-runner-vscode` to always create a new
temporary user data directory, rather than re-using the same one for
all test suites.
This will patch `jest-runner-vscode` to retry tests. This is a temporary
test to see if this will help with the flakiness of the CLI integration
tests.
The biggest problem with this is that it will launch multiple VSCode
instances on every retry:
- First try (not a retry): 1 instance
- Second try: 2 instances
- Third try: 3 instances
- etc.
I'm not sure why this is happening and can't really narrow it down to a
specific cause. Even if I change the `runVSCode` call for the retry by
a simple `cp.spawn` call, it still launches multiple instances.
This adds debugging support to jest-runner for the integration tests
when they are run from the `out` directory. Unfortunately, this removes
the ability to debug the non-integration tests, such as the pure tests.
Instead of running the integration tests from the `out` directory, this
will run the integration tests from the `src` directory using `ts-jest`.
Unfortunately, we are not able to use TypeScript files for the
`jest-runner-vscode` configuration since `cosmiconfig` (the package that
handles the configuration loading for `jest-runner-vscode`) doesn't
support loading TypeScript files by default.
Since we are launching a completely different process for the extension
tests than the process that is launched by VSCode, we need to add some
special handling for the debugging.
This will let the extension host/VSCode expose a debugging port, which
VSCode will then connect to. This is "less desirable than letting the
bootloader do its thing", but a packaged VSCode application does not
allow using the bootloader (`NODE_OPTIONS`=`--require=...`). Therefore,
we have to fallback to this option.
See: 47c60558ec/src/configuration.ts (L405-L411)
Apparently, we're not importing the same `config` file as is used by the
actual extension, so mocking methods in this file does not do anything.
However, we can mock the `vscode` module, so we can use this for
returning different values in the configuration.
We also need to mock the authentication session since we don't have one.
This will ensure all mocks are restored after every test. This required
a significant amount of changes in the tests since `jest.spyOn` now
needs to be called in `beforeEach`, rather than in the `describe` block.
There were some things that were breaking due to version checks. Since
we aren't testing on these CLI versions (2.7.2 and 2.7.4 or older)
anymore, we can remove these checks and simplify the tests.
For `to.contain`, `jest-codemods` seems to have converted these to be
`expect.arrayContaining`, even for strings. This will make the correct
change for strings.
Jest does not support skipping tests when the test has already started
(which could also be in a before hook), so we need to manually return
from the tests when the CLI version does not support a tested feature.
Instead of calling `fail`, we can just let the error be caught by Jest,
which will automatically fail the tests. For other instances where we're
calling `fail` in case an error was not thrown, we will instead use
`.rejects.toThrow`.
This is a first pass on converting the cli-integration tests to Jest. It
uses a custom Jest runner (based on the jest-runner-vscode) to install
required extensions. It was also necessary to move some code for
installing the CLI to ensure it was possible to install the CLI from
outside VSCode.
Most of the conversion has been done by `npx jest-codemods`, but all
migration of Sinon has been done manually.
The filter and sort tests were located inside the React tests since they
were already using Jest. Now that the pure tests have been switched to
Jest, these tests can finally be moved to the "normal" pure tests.
The timeouts need to be set either on a per-file basis, or per test by
using the parameter in `it`. Since we have both Mocha and Jest types, we
need to declare in the test file which one we're using.
This migrates all no-workspace VSCode integration tests to Jest by
running `npx jest-codemods`, followed by manual fixes (mostly for Sinon
and Proxyquire).
When `jest-codemods` was run, it replaced the error message `array.join`
by a comment for the error message. Since Jest does not support custom
error messages out-of-the-box, this will instead do an equality check
with an empty array, which will ensure that the received array is
printed.
The config store was not being disposed in tests, resulting in Chokidar
watchers being left open. This was causing tests to not exit since there
were still open file descriptors.
This commit also fixes the `DbConfigStore` to make the correct `super`
call in its `dispose` method.
This converts all pure tests to Jest. This was done by first running
`npx jest-codemods` with the Mocha transformation, then manually fixing
any places where it hadn't automatically converted the correct thing
or had missed things (mostly Sinon).
This also sets up VSCode correctly for running Jest.
This moves the view Jest config to the view directory and adds a new
jest.config.js file which only references the view directory as a
project. This will make it easier to add multiple Jest configs for
separate projects.
`${workspace}` references are new in CLI version 2.11.3. These mean that
the version depended upon in a pack must be the version available in the
current codeql workspace.
When generating a variant analysis pack, however, we copy the target
query and generate a synthetic pack with the original dependencies.
This breaks workspace references since the synthetic pack is no longer
in the same workspace.
A simple workaround is to replace `${workspace}` with `*` references.
This adds Prettier and makes it replace tsfmt. VSCode is set to use
Prettier for formatting TypeScript/TSX files and format on save since
Prettier is very fast and does not cause any noticeable delay.
This adds a new options argument to the `loadResults` method which
allows the caller to specify that the results should not be saved to the
cache. This exposes a smaller API surface and makes it harder to misuse
the methods.
Each variant analysis export can be different due to different filters,
so there are two options:
- We need to clean up the directory before each export to ensure no old
files are left
- We need to use a separate directory for each export
This implements the second option, which is more flexible and allows the
user to retain different result exports.
This adds filtering (based on search and selected repositories) and
sorting to exporting results. This is done in the same way as for
copying the repository list, so the changes are fairly minimal.
This will use the selected repositories to limit which repositories are
included in the copied repo list. If there are both selected
repositories and a search filter (on the full name), the search filter
will be ignored and the selected repositories will be used in full.
This will pass the filter and sort parameters in the export repo list
message so it can be used by the command to filter and sort the
repositories which are placed in the repo list.
These functions can be re-used by the sorting and filtering code for
exporting results and copying repository lists, so these should not be
in the view directory.
The tests have been kept in the same place for now, but they should be
moved to the pure tests directory once those have been switched to Jest.
I figured it wasn't worth it to convert these to Mocha, and convert them
back to Jest in a week.
This will add a new `useState` call on the top-level to keep track of
the checkbox state. It will allow all downloaded repositories to be
selected. This will allow us to make the copy repository list and export
results button dependent on the selected repositories.
This moves some of the code that is specific to remote queries out of
the `run-remote-query.ts` file and instead places it in separate files
that only deal with remote queries, rather than also dealing with
variant analyses.
The `runRemoteQuery` and `runVariantAnalysis` were returning values
which were only used in tests. This removes them and replaces the tests
by expectations on the commands called by the methods.
This adds the export of variant analysis results. This is unfortunately
a larger change than I would have liked because there are many
differences in the types and I think further unification of the code
might make it less clear and would actually make this code harder to
read when the remote queries code is removed.
In general, the idea for the export of a variant analysis follows the
same process as the export of remote queries, with the difference being
that variant analysis results are loaded on-the-fly from disk, rather
than only loading from memory. This means it should use less memory, but
it also means that the export is slower.
There was only a single command for exporting variant analysis results,
which would either export the selected result or a given result. From
the query history, the command was always calculating the exported
result, while we can just give a query ID to export.
This will create two separate commands for exporting results, one for
exporting the selected results (user-visible) and one for exporting a
specific remote query result. This will make it easier to add support
for exporting variant analysis results.
I'm not sure if there will be impact from renaming the command. I expect
the only impact to be that the command history might not show the
command in the correct place (i.e. it disappears from recently used
commands), but please check if that is the only impact.
This removes the `runRemoteQuery` method and instead moves all logic
specific to remote queries/variant analysis to the remote queries
manager and variant analysis manager respectively. This will make it
easier to completely remove the remote queries manager in the future.
Now that we do not have a dry run mode, we can create and clean up the
temporary directory in the same function. This allows us to remove the
complete try..finally block inside `runRemoteQuery` and move it to a
much more local spot.
The remote queries tests were testing the data on the filesystem, rather
than the data submitted to the server. This required using a `dryRun`
parameter to prevent deleting the temporary directory, while we can
actually just test against the submitted data.
This will create an in-memory filesystem of the submitted query pack by
un-tar-gz'ing the query pack into memory and using that to test the
existence of certain files.
There is some common logic between remote queries and the variant
analysis flows which deals with parsing the query and asking the user
how to run the query. This extracts that part of the logic to a separate
method such that the only logic left in the actual `runRemoteQuery`
method is related to submitting the query.
This also changes the failure reason alert component to remove the logs
button since it's not used by any failure reason. Instead, a link is
added into the message for a failed Actions workflow using which the
Actions workflow run may be opened.
Shared by the AST viewer, jump to def, and find references
contextual queries.
This allows contextual queries to have their dependencies
resolved and be run whether the library pack is in the
workspace or in the package cache.
Clear the CLI server's pack cache before installing packs,
to avoid race conditions where the new lock file is not
detected during query running.
Adjust some helper methods.
If the library pack containing the AST query does not have
a lock file, it is likely to be in the package cache, not
a checkout of the CodeQL repo.
In this case, use `codeql pack resolve-dependencies`
to create a temporary lock file, and `codeql pack install`
to install the dependencies of this library pack.
This allows the CLI to resolve the library path and
dependencies for the AST query before running it.
We were not yet showing any errors when a result download had failed.
This adds a warning icon to any repositories for which the download has
failed and allow expanding the item to show an alert.
This will show a message for the failure reason in the variant analysis
view when the variant analysis has failed. There don't seem to be
designs for these alerts, but we will need to do a full design review of
the view at some point anyway, so I don't think the exact text is
important.
The variant analysis view was missing an alert when the variant
analysis was canceled. This adds it, and also adds a story for checking
what the view of a canceled variant analysis looks like.
Version checks are re-enabled whenever the version of vscode changes.
This is because the user would have needed to manually update their
vscode version in order to get this new version. And another failing
version check would mean there is a newer version that needs to be
downloaded.
It seems like the expansion of the test files pattern is different
between Windows and Linux/macOS. This fixes it by allowing Mocha to
expand the glob pattern rather than the shell which should fix the
inconsistency.
This implements the "Stop query" button on the view. It moves some of
the logic of actually cancelling the variant analysis to the manager
instead of being in the query history to allow better re-use of the
code.
This also adds tests for cancelling a local query and a remote query.
NB: We only cancel queries that are in progress, so the tests check
the behaviour both for in progress and not in progress items.
This removes all usages of the `gh-api` types from the variant analysis
code by replacing it by the same types defined in `shared`.
This is a breaking change for the query history since the files
serialized to disk now also change. However, since this is still behind
a feature flag the change should be safe to make now.
This adds sorting to the variant analysis repositories on the outcome
panels. The sort state is shared between all panels, so unlike the
design this doesn't disable the sort when you are on e.g. the no access
panel.
We're making a number of changes:
1. We're changing the userSpecifiedLabel value to be
`user-specified-name` instead of `xxx`
2. For local queries, we're changing `in progress` to `finished in 0
seconds` when the query has results. The previous version was
contradictory because any query still in progress wouldn't have results.
3. Similarly, for remote queries, we're changing `in progress` to
`completed` when the query has results. Here we actually set a `status`
property which means `in progress` becomes `completed`.
One factory method to rule them all!
There were a number of problems with these methods:
1. We were previously using two different factory methods to generate
fake local queries. Ideally we'd just have one.
2. We weren't really creating a real LocalQueryInfo object, which
blocked us [1] from being able to correctly understand which fields we
need in our tests and how they interact together.
3. We stubbed a bunch of methods on the original object to get our tests
to work. We can now use a real object with all the trimmings.
[1]: https://github.com/github/vscode-codeql/pull/1697#discussion_r1011990685
Again, we'll need these for sorting.
We also want to be able to set/unset a userSpecifiedLabel. Since this factory
method is used in `history-item-label-provider.test.ts`, we have tests there
that count on this custom label being defined/undefined.
This adds a new textbox to the outcome panels that allows filtering by
the repository full name (e.g. `github/vscode-codeql`). The filtering
uses the same logic as the existing remote queries filter, i.e. by
converting the input and the repository full name to lower case and
checking that the latter includes the former.
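For illustration, the matching logic is essentially the following
(the helper name is hypothetical):

```typescript
// Case-insensitive substring match, e.g. "vscode" matches "github/vscode-codeql".
function matchesRepositoryFilter(
  repositoryFullName: string,
  filterValue: string,
): boolean {
  return repositoryFullName.toLowerCase().includes(filterValue.toLowerCase());
}
```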
Both `copyNoWorkspaceData` and `copyCliIntegrationData` return
promises. Since file copying is quite fast at the moment, this
hasn't been a problem, but it might become a problem in the future
if we start copying larger files.
Let's wait for the operations to finish.
Now that we have a watch command to check when our test files
need updating, we don't need to do this step during the setup.
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
Because we're no longer running `gulp` when we run our test command,
we're going to need a way to update our test files when they change.
This will watch for any changes in our test files and copy the new
version over.
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
This "config store" creates a `dbconfig.json` file (if it doesn't yet exist),
and reads the file to load the database panel state.
Only the database config store should be able to modify the config
— the config cannot be modified externally.
We previously attempted to speed up no-workspace tests [1] but realised
we still needed to run some setup steps to get the latest files [2].
Given that we already have `npm run watch` running in the background
when we run our tests, we should be able to regenerate files on the fly.
This means we can drop `gulp` from our setup steps when running integration
tests.
While there's still a danger that you forget to run `npm run watch` in
the background, we think the massive speed up (10s -> 1s) is worth it
as we add more and more tests to this extension.
[1]: https://github.com/github/vscode-codeql/pull/1694
[2]: https://github.com/github/vscode-codeql/pull/1696
The tests were expecting the wrong results, except for the case where
the time was less than a second. For less than a second ago, it makes
sense to return "this minute". For times that are 2.001 minutes ago, it
makes sense to return "2 minutes ago" rather than the previous behaviour
of "3 minutes ago".
The `not_found_repo_nwos` field doesn't actually exist (anymore?) on the
GitHub API. The correct name is `not_found_repos`, so this renames the
field on the type and in the scenarios.
This uses a script to add the new `stargazers_count` and `updated_at` to
the scenario files. This is done by using the GitHub API to get the
information for each repo and then updating the scenario file.
The `updated_at` values are not completely representative since they are
the `updated_at` at time of running the script, rather than at the time
the variant analysis was run. However, this should not really matter in
practice. An alternative for scanned repositories might be getting the
creation time of the `database_commit_sha` commit.
Paired with @robertbrignull on debugging why having all types of
query history items isn't playing nicely when we try to remove an item.
We've tracked down the issue to the `handleRemoveHistoryItem` method
not correctly setting the `current` item after a deletion.
However, it's unclear whether the test setup is to blame or this is a
real bug.
I'm going to leave the tests for `handleRemoveHistoryItem` to test just
local queries for now (as they were originally) and will come back to
this in a different PR.
This will add the star count and last updated fields to the repository
row. We are able to re-use some components from remote queries, but we
cannot re-use `LastUpdated` since it requires a numeric duration, while
we are dealing with an ISO8601 date.
It seems like the Storybook stories were not being type-checked by CI
and got out-of-sync with the required types. This fixes the types and
also uses the factories to reduce the chance of this happening with
future changes.
We were expecting all three types to behave the same when clicked /
double clicked.
In fact local & remote queries only allow you to open the results view
when they're complete, while variant analyses always allow you to open
the results view no matter what their status is.
Let's break down these tests per history item type and test the
expected behaviour more granularly.
NB: I tried moving some of the setup in beforeEach blocks, but alas
queryHistoryManager can be undefined so rather than adding `?` to
every method call, I'm just gonna leave the setup inside the tests.
In an ideal world, we'd stop declaring `queryHistoryManager` as
`undefined`:
```
let queryHistoryManager: QueryHistoryManager | undefined;
```
Baby steps!
In [1] we changed our factory methods to actually use QueryStatus when
creating remote query & variant analysis history items.
Previously we were just setting the value to `in progress`...
... which made the tests for history-item-label-provider.test.ts pass...
... but that value did not reflect reality ...
What we actually need to do is introduce a method to map different
query statuses to human readable strings, e.g.
QueryStatus.InProgress becomes 'in progress'
[1]: 4b9db6a298 (diff-217b085c45cd008d938c3da4714b5782db6ad31438b27b07e969254feec8298aL28)
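A sketch of such a mapping; only `InProgress` is mentioned in these notes,
so the other enum members and strings are assumptions:

```typescript
enum QueryStatus {
  InProgress = "InProgress",
  Completed = "Completed",
  Failed = "Failed",
}

// Map a query status to the human-readable string shown in the query history.
function humanizeQueryStatus(status: QueryStatus): string {
  switch (status) {
    case QueryStatus.InProgress:
      return "in progress";
    case QueryStatus.Completed:
      return "completed";
    case QueryStatus.Failed:
      return "failed";
    default:
      return "unknown";
  }
}
```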
We've introduced a new `local-query-history-item.ts` factory method [1]
which includes a cancellation token. The factory will need to import the
CancellationTokenSource from `vscode`.
We already had a factory method but it didn't quite map with the setup
we needed. For example we need to call `.completeQuery` rather than
providing a dummy `completedQuery` object.
The previous factory method was used in the tests for
`query-history-info.test.ts`. Because that factory omitted the
cancellation token, we could get away with having these tests in the
`tests/pure-tests` folder.
With the addition of the second factory method, the tests for
`query-history-info` blow up because they can't find `vscode`.
Now that we need to add more fields to local query history items, it's
becoming clearer that these `query-history-info` tests should live next
to the `query-history` tests in `vscode-tests/no-workspace`.
Granted, in an ideal situation we'd only have one factory method to
generate a local query history item, but combining these two methods
is actually quite painful. So for now let's at least have the query
history tests next to each other and appease Typescript.
This adds the new `stargazers_count` and `updated_at` fields in the
repositories to the appropriate `gh-api` and `shared` types.
To make testing easier this also moves the
`variant-analysis-processor.test.ts` to the pure tests since it doesn't
and shouldn't depend on any `vscode` APIs.
We're adding both remote query history items and variant analysis history
items to the query history.
We've introduced a little method to shuffle the query history list
before we run our tests so that we don't accidentally write tests that
depend on a fixed order.
The query history now has increased test coverage for:
- handling an item being clicked
- removing and selecting the next item in query history
- handling single / multi selection
- showing the item results
While we're here we're also:
1. Adding a factory to generate variant analysis history items
2. Providing all fields for remote query history items and ordering them
according to their type definition order. At least one field (`queryId`)
was missing from our factory, which we will need to make the tests work
with remote queries.
There are a couple of tests that check whether we can correctly
compare two local queries.
These shouldn't be applied to remote queries [1] so let's just
make that a bit clearer by moving them into a local queries describe
block and using the `localHistory` array to choose items to compare
instead of the `allHistory` array.
[1]: bf1e3c10db/extensions/ql-vscode/src/query-history.ts (L1311-L1314)
At the moment our query history tests are set up to only check
local queries.
Let's prepare the ground to introduce remote query history items
and variant analysis history items.
This will allow us to expand test coverage for these other types
of items.
The `createGist` function was part of `gh-actions-api-client`, while it
didn't actually involve anything related to the GitHub Actions API. This
moves it to the non-Actions-specific `gh-api-client` module.
Another candidate for moving to `gh-api-client` is
`getRepositoriesMetadata`, but that one is a bit more involved since it
uses `showAndLogErrorMessage`, so depends on the `vscode` module. This
means it would not be possible to test in the "pure" tests and we would
need to move all our `gh-actions-api` tests to the integration tests. It
will not be used for variant analysis queries anymore, so I don't think
it's worth moving or refactoring to not depend on `vscode`.
This will hook up the "View logs" link to make it open the variant
analysis actions workflow run. The method for creating the actions
workflow run URL has been extracted from the query history to make it
callable without a history item.
This adds the `controllerRepo` field to the `VariantAnalysis` shared
type. This is technically a breaking change since the old history won't
have this field and all calls on this will fail. However, the feature
is not available so this should be fine.
The variant analysis view would allow expanding the results when the
repo task was completed. However, it did not take into account whether
the results were actually downloaded. This fixes that by using the
download status when the repo task has succeeded and sending the repo
states to the view on load.
This adds a new file `repo_states.json` which tracks the download status
of all repositories of a variant analysis. We will write this file when
a download has completed and skip a repository download if the repo
state is marked as `succeeded`. This should prevent duplicate downloads.
This will still queue all repositories, even those which have already
been downloaded. However, I expect the actual cost in the download
method to be negligible since it's just an in-memory check.
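A minimal sketch of the skip/record logic; the helper names, state values
and file shape are assumptions:

```typescript
import { outputJson, pathExists, readJson } from "fs-extra";
import { join } from "path";

// Hypothetical shape: repo ID -> download state.
type RepoStates = Record<number, "pending" | "inProgress" | "succeeded" | "failed">;

async function readRepoStates(storagePath: string): Promise<RepoStates> {
  const statesPath = join(storagePath, "repo_states.json");
  return (await pathExists(statesPath)) ? readJson(statesPath) : {};
}

// Skip repositories whose results are already marked as downloaded.
async function shouldSkipDownload(storagePath: string, repoId: number): Promise<boolean> {
  const states = await readRepoStates(storagePath);
  return states[repoId] === "succeeded";
}

// Record a completed download so it is not repeated after a restart.
async function markDownloadSucceeded(storagePath: string, repoId: number): Promise<void> {
  const states = await readRepoStates(storagePath);
  states[repoId] = "succeeded";
  await outputJson(join(storagePath, "repo_states.json"), states);
}
```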
This will remove the discrepancy between the files on which ESLint is run
when `lint-staged` is used and the files that are checked using
`npm run lint` and `npm run format`.
It will now also include the `.storybook` directory which was previously
excluded from the ESLint configuration.
This will make the creation of a webview panel async to allow the
`getPanelConfig` method to be an async function. This will allow us to
do some work (like retrieving the variant analysis) in the
`getPanelConfig` method.
This will close the variant analysis view when the corresponding variant
analysis history item is deleted from the query history. This required
some extra code to handle `dispose` being called on the view to ensure
this actually disposes the panel, but we can now call `dispose()` on the
view to close it.
This will change the pure tests Mocha setup to actually use the
`tsconfig.json` located in the `test` directory. Before, it was using
the root-level `tsconfig.json`. To ensure we are still using mostly the
same settings, this will extend the `test/tsconfig.json` from the
root-level `tsconfig.json`.
This splits the mock GitHub API server class into two parts: one for the
interactive, VSCode parts and one for the non-VSCode parts. This allows
us to use the non-VSCode part in tests.
This adds some basic integration tests for MRVA using the GitHub mock
API server. It only does basic assertions and still needs to stub some
things because it is quite hard to properly test things since VSCode
does not expose an API to e.g. answer quick pick pop-ups.
I'm not sure how useful these integration tests will actually be in
practice, but they do at least ensure that we are able to successfully
submit a variant analysis.
We've merged https://github.com/github/vscode-codeql/pull/1656
which actually implements item removal. We'll need to change our
tests to account for this.
We've also merged https://github.com/github/vscode-codeql/pull/1654
which implements opening the view when we click on a variant analysis
history item. So we've changed our tests to take into account that
there's now a `showView` method being called.
We will need to set up some VariantAnalysisHistoryItem types in order
to use them in our tests.
We're repeating what we've done for RemoteQueryHistoryItem for now.
Separately we'll think about setting up tests that check for both
remote queries and variant analysis in the query history.
At the moment we'd like to focus on just adding some test coverage
for variant analysis history items.
Co-authored-by: Nora Scheuch <norascheuch@github.com>
At the moment we create the results manager as a private property on the `VariantAnalysisManager`.
If we instead created it at the extension level and passed it to the `VariantAnalysisManager`, we would have more freedom to write unit tests for the `VariantAnalysisManager` without needing to reach into a private results manager property.
We had previously added a no-op placeholder for when we attempt
to remove a variant analysis from our query history.
This adds the implementation:
- removes the item from the query history
- cleans up any existing result files attached to the variant analysis
NB: The remote queries would store all their results in a single folder.
For variant analysis, we store results per repo. The folder names are built
using a cache key and are stored in `cachedResults`. The cache key is
built from the variant analysis id and the repo name.
In order to delete the results, we've had to pass in the full variant analysis
object to the manager and call `cacheResults.delete()` for each of its scanned
repos.
Co-authored-by: Charis Kyriakou <charisk@github.com>
Co-authored-by: Nora Scheuch <norascheuch@github.com>
msw doesn't seem to support binary responses because it decodes them to
a UTF-8 string. To work around that, we will do a separate fetch of the
file and save that.
When the mock GitHub API server setting was moved to the top-level, we
forgot the commands in the `package.json`. This updates the commands to
have the correct visibility.
See: https://github.com/github/vscode-codeql/pull/1643
This adds a linter for JSON scenario files which will validate the JSON
files in the scenarios directory against the TypeScript types. It will
convert the TypeScript types to JSON schema to simplify this process.
Unfortunately, this will not currently allow adding scenarios with
failing requests since the types do not allow this. Rather than removing
this validation, we should fix the types. This can be done in a follow-up
PR.
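A sketch of the approach, assuming `typescript-json-schema` plus `ajv`
(the actual tooling may differ); the type name and file path are
placeholders:

```typescript
import Ajv from "ajv";
import { resolve } from "path";
import * as TJS from "typescript-json-schema";

// Convert the TypeScript type into a JSON schema, then validate scenario JSON against it.
const program = TJS.getProgramFromFiles([
  resolve("src/remote-queries/gh-api/variant-analysis.ts"), // placeholder path
]);
const schema = TJS.generateSchema(program, "VariantAnalysis", { required: true });

const ajv = new Ajv();
const validate = ajv.compile(schema!);

export function validateScenarioFile(json: unknown): string[] {
  return validate(json)
    ? []
    : (validate.errors ?? []).map((e) => `${e.instancePath} ${e.message}`);
}
```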
The command lint expects all command palette commands to have a common
prefix which these violated. So, I've moved them to being a scoped
command so we can have different lints.
If the workspace is restarted while databases are being loaded, this
change prevents any from being lost.
The bug was that each time a database was added when rehydrating a db
from persisted state on startup, the persisted db list
was being updated. Instead of updating the list each time we add a db
on restart, we now update the persisted list only after all databases
have been added.
Note that we need to update the persisted list after reading it in since
the act of rehydrating a database _may_ change its persisted state.
For example, the primary language of the database may be initialized
if it was not able to be determined originally.
* Fix missing DIL for new query server
* Fix DIL error message when QLO was not expected.
* Update extensions/ql-vscode/src/run-queries-shared.ts
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
This adds a new class which will set up the MSW server to record requests,
save them to memory and save them to files when calling a separate save
method.
This adds a Storybook add-on that allows you to switch between VSCode
themes. It follows the pattern of the [outline](https://github.com/storybookjs/storybook/tree/v6.5.12/addons/outline/src)
and [backgrounds](https://github.com/storybookjs/storybook/tree/v6.5.12/addons/backgrounds)
add-ons.
Unfortunately, it doesn't apply the CSS to just the elements it should
be applied to, but globally to the complete preview. This is a limitation
of using CSS files rather than setting inline styles on the elements. We
might be able to resolve this in the future by extracting the CSS
variables from the CSS files, but this is somewhat more involved.
- Avoid installing `xvfb` since it is already available.
- Ensure `supportsNewQueryServer()` takes the CLI version into account
- Always run the new query server tests on v2.11.1 and later
- Avoid printing directory contents in `run-remote-query-tests`
- Run tests with `--disable-workspace-trust` to avoid a non-fatal error
being thrown from the dialog service.
- Ensure the exit code of the extension host while running integration
tests is the exit code of the actual process. Otherwise, there is
a possibility that an error exit code is swallowed up and ignored.
- Remove a duplicate unhandledRejection handler.
- Handle Exit code 7 from windows. This appears to be a failure on
exiting and unrelated to the tests.
- Fix handling of configuration in tests:
1. It is not possible to update a configuration setting for internal
settings like `codeql.canary`.
2. On Windows CI, updating configuration fails after global teardown.
So, avoid resetting the test configuration when tests are over.
Also, I tried to remove all those pesky errors in the logs like:
> [2094:1017/235357.424002:ERROR:bus.cc(398)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
I was following advice from here, but I can't get it working.
- https://github.com/microsoft/vscode-test/issues/127
- https://github.com/electron/electron/issues/31981
This adds a documented way to change the theme in Storybook from the
VSCode Dark+ theme to the VSCode Light+ theme. It requires multiple
changes to two files, but these are all quite simple and it has been
documented on the "Overview" page.
Previously we were only checking whether we're triggering the download
command in the extension.
Now we're mocking `autoDownloadVariantAnalysisResult` on the
variantAnalysisManager and checking that it's being called for all repos
that have available results.
Before we make any changes, let's extract some of the monitor code into
smaller methods.
Since we have test coverage, we're able to do this quite comfortably.
We added a `successful` property to serialized local queries. But, this
property does not exist on older serialized queries. This change ensures
older queries get a `successful` property when deserialized.
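A sketch of the backfill with a hypothetical serialized shape; inferring
`successful` from the absence of a failure reason is an assumption, not
necessarily the actual logic:

```typescript
interface SerializedLocalQuery {
  failureReason?: string;
  successful?: boolean;
}

function ensureSuccessfulProperty(raw: SerializedLocalQuery): SerializedLocalQuery {
  return {
    ...raw,
    // Older queries predate the `successful` field, so fill it in on deserialization.
    successful: raw.successful ?? raw.failureReason === undefined,
  };
}
```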
The threshold at which the bad join order detection reports a warning was previously hard-coded to 50. Initial feedback from internal QL developers suggests that this is too high, and should be configurable in any case. I've made it configurable via the `codeQL.logInsights.joinOrderWarningThreshold` setting, leaving the default at 50. Once we get more feedback about what a better default value is, I'll update the default.
This adds tests for the duration calculation and moves it down a
component to make this easier. Adding tests for the
`VariantAnalysisHeader` would require constructing a complete variant
analysis object, while this is now just a simple unit test.
This will add some new date fields that have been added in the API to
the variant analysis types and factories. They are stored as strings
since storing them as `Date` would make the types inconsistent if they
are serialized to JSON (`JSON.stringify` -> `JSON.parse` would result
in strings rather than dates).
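A quick illustration of why strings are used:

```typescript
// A Date does not survive a JSON round trip; it comes back as a string.
const original = { createdAt: new Date("2022-10-27T12:00:00Z") };
const roundTripped = JSON.parse(JSON.stringify(original));
console.log(roundTripped.createdAt instanceof Date); // false
console.log(typeof roundTripped.createdAt); // "string"
```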
The `text` property is already nested under `query`, so it's redundant
to prefix it with `query`. This also makes it consistent with the other
properties.
This will implement the final step of opening the query text. Inside
the webview, this will send the message to the extension host to open
the query text.
This will add a new text document content provider for showing variant
analyses. This is separate from the remote queries content provider
to allow this to evolve separately. It also retrieves the query text
from the manager rather than passing the text directly to prevent the
webview from opening a tab with arbitrary content.
See: 4c527a3573/extensions/ql-vscode/src/extension.ts (L1242-L1257)
This will add a new query text field to the variant analysis submission,
which will also propagate to the variant analysis itself. This will
allow us to show the query text on the variant analysis page.
This will register all settings for which a `Setting` instance is
created as settings which will be reset. This should make it less
error-prone to change settings in tests.
The `vscode-test` package was renamed to `@vscode/test-electron` in
December of last year. This commit updates the extension to use the new
package name.
The reason for this change is that the `vscode-test` package was
somewhat flaky in actually starting VSCode to run the tests from the
command line. The new package also has some bugfixes and other
improvements which would normally have been part of a new version of the
`vscode-test` package.
This makes it possible to open the query file in the editor when
clicking on the query filename.
This is a slightly different implementation from the remote queries
implementation. The remote queries implementation will send the file
path to open to the extension host, and the extension host will simply
open the given file path. If someone is able to inject JavaScript into
the webview, this would allow them to open an arbitrary file in VSCode.
By moving the file path logic to the extension host, we can ensure that
we only allow opening the actual query file.
* QueryServer: Add support for new query-server
* Add a new canary flag to enable new query server support
* Add evaluation results to query object
Ensures better backwards compatibility with legacy query objects.
* Fix query server command name
* Add log message for new query server
* Use only legacy results
Co-authored-by: alexet <alexet@semmle.com>
This will implement ebba9949a8
and d18e3dd40e
for the `Compare` and `RemoteQueries` views. These should not be
impacted in the same way as the `VariantAnalysis` view, but this will
make them consistent and more resilient to future changes.
This cleanup function would never be called in normal operation, but if
we do decide to add a dependency to this `useEffect`, this will ensure
that only one listener is registered at a time.
When the variant analysis view was being rerendered, we were also
reregistering the message listeners, while not deregistering the old
ones. This resulted in a loop of message listeners being registered,
and the variant analysis being rerendered every time a message was
received by one of the listeners. This will ensure that the listener
is only registered once to prevent this from happening.
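A minimal sketch of the fix, assuming the view receives webview messages on
`window`; the hook name is hypothetical and the message handling is elided:

```typescript
import { useEffect } from "react";

function useExtensionMessages(handler: (event: MessageEvent) => void): void {
  useEffect(() => {
    window.addEventListener("message", handler);
    return () => {
      // Cleanup: deregister the old listener instead of leaving it behind.
      window.removeEventListener("message", handler);
    };
    // Empty dependency array: register once on mount, clean up on unmount.
  }, []);
}
```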
To make debugging the view easier and prevent needing to run a variant
analysis for each change, this will add a simple command which opens a
variant analysis by its ID. This is not intended to be visible to users
at any point.
Now that we're unzipping results, we also have to use something closer
to a zip file when testing download functionality for the
`variantAnalysisManager`.
The `variantAnalysisManager` has access to the
`variantAnalysisResultsManager` so we could've stubbed the result
manager's `download` method instead of going as far as using a zip
fixture.
However, since the results manager is private it seems bad to make it
public in order to stub one of its methods.
So using realistic data in the setup seems like a good compromise.
This will:
- download a zip file as an ArrayBuffer
- save the file as `results.zip`
- unzip the contents into a `results/` folder
For the tests:
- In order to check whether we're saving the correct files in the tests,
we've had to make the `getRepoStorageDirectory` method public.
Unfortunately the temporary file path generated for tests is random so
we're not able to hardcode it.
- Now that we have a real zip file to use in our tests, we're first
converting this file into an ArrayBuffer, then stubbing the API to
return it. We then check that it's saved and unzipped correctly.
This matches what type of file we'd expect in real life: a zip file
containing a sarif file.
We've copied an example `results.sarif` file from other tests in the
`no-workspace` folder.
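A minimal sketch of the download-save-unzip flow described above; `adm-zip`
and the helper name are stand-ins for whatever the extension actually uses:

```typescript
import AdmZip from "adm-zip";
import { outputFile } from "fs-extra";
import { join } from "path";

async function saveRepoResults(storageDir: string, artifact: ArrayBuffer): Promise<void> {
  // Save the downloaded ArrayBuffer as results.zip...
  const zipPath = join(storageDir, "results.zip");
  await outputFile(zipPath, Buffer.from(artifact));
  // ...and unzip its contents into a results/ folder next to it.
  new AdmZip(zipPath).extractAllTo(join(storageDir, "results"), true);
}
```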
We expect this method to return a zip file which can be typed to an
`ArrayBuffer`. In the following commits we'll read this buffer and save it
as a zip file.
This class will be used to set test config values for the tests. It is
able to set the config value to a specified value for every test and
restore the value to the original value after the test.
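A minimal sketch of such a helper, assuming the standard
`workspace.getConfiguration` API; the class and method names are
hypothetical:

```typescript
import { ConfigurationTarget, workspace } from "vscode";

class TestConfig<T> {
  private original: T | undefined;

  constructor(private readonly section: string, private readonly key: string) {}

  // Set the config value for a test, remembering the original value.
  async set(value: T): Promise<void> {
    const config = workspace.getConfiguration(this.section);
    this.original = config.get<T>(this.key);
    await config.update(this.key, value, ConfigurationTarget.Global);
  }

  // Restore the original value after the test.
  async restore(): Promise<void> {
    await workspace
      .getConfiguration(this.section)
      .update(this.key, this.original, ConfigurationTarget.Global);
  }
}
```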
Instead of using the `glob` library and a custom promise, this will use
`glob-promise` which is used by other parts of the codebase already.
This reduces the amount of code which manually needs to call `reject`
and makes it easier to read.
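For example, a hypothetical helper using `glob-promise`:

```typescript
import glob from "glob-promise";

// glob-promise returns a promise directly, so no manual
// `new Promise((resolve, reject) => ...)` wrapper is needed.
async function listScenarioFiles(scenariosPath: string): Promise<string[]> {
  return glob("**/*.json", { cwd: scenariosPath });
}
```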
When we first submit the variant analysis for processing, we'd like to update
the query history panel.
At the moment we're just adding the setup for triggering the event. In a future
PR we'll consume this event and change the query history panel accordingly.
In order for this to happen we will need to introduce a new `VariantAnalysisHistoryItem`
type which will massage the data we get from the API into a type which the Query
History panel can consume.
Co-authored-by: Shati Patel <shati-patel@github.com>
When the `viewLoaded` message is received by the view, it will now
retrieve the variant analysis from the manager and send it to the
view. This will allow the view to display the variant analysis.
This will change tests that are using a mocked `CancellationTokenSource`
to use a real `CancellationTokenSource` instead. Tests are run inside
VSCode, so we can use these without mocking.
To create the interpreted and raw results from the SARIF/BQRS files, we
need some information from the repo task object. This will store the
repo task object to the filesystem as JSON so we can read them when
loading results.
In most cases, we will not have access to the full repo task object
since this needs to be retrieved from the API. Since we are only using
the full name from the repo task object, we can just use the full name
instead.
This will store all variant analyses that are run in the manager. Right
now, it only stores the variant analyses in memory. In the future, these
will be loaded from the query history and can be restored after a
restart.
We register a handler for the old command ID, but do not mention it in package.json.
This seems to be backward compatible without polluting the command palette.
This adds a new variant analysis results manager which is responsible
for downloading and loading variant analysis results to/from the
filesystem. It is essentially the `AnalysesResultsManager` modified to
suit the variant analysis results.
All fields in the variant analysis skipped repositories are optional,
but this was not properly defined in the API types. This will correct
the types and the functions processing the data such that they handle
non-existing fields.
To be able to send messages to the open view for a variant analysis, we
need to have a reference to the view. This is done by keeping track of
all open views in a dictionary indexed by their variant analysis ID.
We currently only allow one view per variant analysis, but do allow
multiple variant analysis views to be open at a time. In the future, we
may want to allow multiple views per variant analysis (such that e.g.
"Split right" works), but this is not supported yet.
The reason for the indirection through the interfaces is to prevent
circular dependencies between the variant analysis view and the manager.
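A sketch of that bookkeeping, with hypothetical interface and method names:

```typescript
interface VariantAnalysisViewInterface {
  variantAnalysisId: number;
  sendMessage(message: unknown): Promise<void>;
}

class VariantAnalysisViewRegistry {
  // One open view per variant analysis, indexed by its ID.
  private readonly views = new Map<number, VariantAnalysisViewInterface>();

  registerView(view: VariantAnalysisViewInterface): void {
    this.views.set(view.variantAnalysisId, view);
  }

  async sendToView(variantAnalysisId: number, message: unknown): Promise<void> {
    // No-op if there is no open view for this variant analysis.
    await this.views.get(variantAnalysisId)?.sendMessage(message);
  }
}
```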
This won't have an `id` field. We initially generated this the same
way we did for all other skipped repos, but this one is special because
it's only providing the fullName field, while the others also provide
`id` and `private`.
This introduces a new `autoDownloadVariantAnalysisResult` command which
will be called by the VariantAnalysisMonitor every time it detects a new
repo has been scanned.
In turn, this will use the `autoDownloadVariantAnalysisResult` method
which we defined in an earlier commit on the VariantAnalysisManager.
This method will be called from the VariantAnalysisMonitor once
a new repo has been scanned.
It will then perform an API request to get the repo task for it,
which will contain an `artifact_url`.
Finally it will use the API method we introduced in the previous commit
to download the result for the repo and then save it on disk.
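Roughly, the flow looks like this; the helper names are hypothetical
stand-ins for the real API client and results manager calls:

```typescript
declare function getRepoTask(
  variantAnalysisId: number,
  repoId: number,
): Promise<{ artifactUrl?: string }>;
declare function downloadArtifact(artifactUrl: string): Promise<ArrayBuffer>;
declare function saveRepoResult(
  variantAnalysisId: number,
  repoId: number,
  artifact: ArrayBuffer,
): Promise<void>;

async function autoDownloadVariantAnalysisResult(
  variantAnalysisId: number,
  repoId: number,
): Promise<void> {
  // 1. Fetch the repo task; it contains the artifact URL for the results.
  const repoTask = await getRepoTask(variantAnalysisId, repoId);
  if (!repoTask.artifactUrl) {
    return; // Nothing to download (yet).
  }
  // 2. Download the artifact and persist it on disk for this repository.
  const artifact = await downloadArtifact(repoTask.artifactUrl);
  await saveRepoResult(variantAnalysisId, repoId, artifact);
}
```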
In a previous PR [1] we introduced factories for generating variant analyses
(and their associated objects) that were returned from the API.
Let's also introduce factories for generating their VSCode equivalent.
We can immediately use them for generating a VariantAnalysis object for the
monitor tests.
[1]: https://github.com/github/vscode-codeql/pull/1545
This is a follow-up to clean up the skipped and analyzed repository
component duplication. The rows in both tabs are very similar, so this
will combine them to use a single component.
This will open the variant analysis view after the variant analysis has
been submitted. It will also show a notification that the analysis has
been submitted, which includes the query name.
This implements persistence for the variant analysis webview, allowing
the webview panel to be restored when VSCode is restarted. It's probably
easier to add this now than to try to add it later.
The basic idea is that there are no real differences when opening the
webview for the first time. However, when VSCode is restarted it will
use the `VariantAnalysisViewSerializer` to restore the webview panel.
In our case this means recreating the `VariantAnalysisView`.
To fully test this, I've also added a mock variant analysis ID as the
state of the webview. This value is now randomly generated when calling
the `codeQL.mockVariantAnalysisView` command. This allows us to test
opening multiple webviews and that the webviews are restored with the
correct state.
See: https://code.visualstudio.com/api/extension-guides/webview#persistence
At the moment we're only able to send one of:
- repositories
- repositoryLists
- repositoryOwners
In the future, we intend to be able to send a combination of these
but at the moment the API will only ever allow you to send one.
So let's be consistent and just send `repositories` here.
Currently, when running a query which produces raw results, we will show
all repositories, even if they do not have any results. This change will
ensure that we are only showing repositories which have results. This
matches the behavior for queries which produce interpreted results.
The `controllerRepo` parameter was being encoded/escaped by Octokit,
resulting in a URL like
`repos/dsp-testing%2Fqc-controller/code-scanning/codeql/queries` rather
than `repos/dsp-testing/qc-controller/code-scanning/codeql/queries`.
This switches it to use the ID instead, since we already have the ID
and do not have access to the owner and repo separately anymore.
Now that we have a monitor, we expect the variant analysis to return
a list of scanned repos.
Let's re-use our previous factory for creating mocked responses to
get a dummy variant analysis with scanned repos.
In a previous commit we were submitting a variant analysis to the API
and then triggering a `monitorVariantAnalysis` command.
Here we're hooking up the command to the VariantAnalysisMonitor class.
This will poll the API every 5 seconds for changes to the variant
analysis. By default it will continue to run for a maximum of 2 days,
or when the user closes VSCode.
The monitor will receive a variantAnalysis summary from the API that
will contain an up-to-date list of scanned repos.
The monitor will then return a list of scanned repo ids.
In a future PR we'll add the functionality to:
- update the UI for in progress/completed states
- raise error on timeout
- download the results
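A minimal sketch of the polling loop; the 5-second interval and 2-day limit
come from the description above, while the helper name and summary shape are
assumptions:

```typescript
declare function getVariantAnalysisSummary(
  variantAnalysisId: number,
): Promise<{ status: string; scannedRepos: Array<{ id: number }> }>;

const POLL_INTERVAL_MS = 5_000;
const MAX_MONITOR_DURATION_MS = 2 * 24 * 60 * 60 * 1000; // 2 days

async function monitorVariantAnalysis(
  variantAnalysisId: number,
  token: { isCancellationRequested: boolean },
): Promise<number[]> {
  const scannedRepoIds = new Set<number>();
  const deadline = Date.now() + MAX_MONITOR_DURATION_MS;

  while (Date.now() < deadline && !token.isCancellationRequested) {
    const summary = await getVariantAnalysisSummary(variantAnalysisId);
    for (const repo of summary.scannedRepos) {
      scannedRepoIds.add(repo.id);
    }
    if (summary.status !== "in_progress") {
      break;
    }
    // Wait 5 seconds between polls.
    await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
  }
  return [...scannedRepoIds];
}
```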
So that we're able to:
- set the status value
- build scanned and skipped repos by default
For previous tests, we needed to perform checks on scanned & skipped
repos so we needed to build them outside of this method. When we re-use
this method for the VariantAnalysisMonitor, we will just need a generic
ApiResponse so we can create these repos inside the method.
We're going to need some of these methods to generate a valid VariantAnalysis.
We might as well extract them from the tests for the VariantAnalysisProcessor.
Once we submit a variant analysis and get our response from the API,
we'd like to set up a way to monitor the variant analysis as it starts
producing live results.
Here we're using a VSCode command to trigger a monitoring process which
will poll the API for changes.
The `RawResultsTable` was using inline styles, while we should prefer
to use styled components. This refactors it to use styled components and
also improves some other miscellaneous things (extracting the props to
a separate type and moving the `Cell` above the `Row` since the latter
uses the former).
This adds the analyzed repositories component for showing within the
"Analyzed" tab. I wasn't completely sure whether there should be a
difference between "Pending" and "In progress", but pending will now not
show an icon, while in progress will show a spinner.
For the collapsible items, it does not reuse the `CollapsibleItem`
component because that component is tightly coupled with the styles
of the remote queries component.
This creates the component for showing the outcome panels. It does not
implement the content of each individual panel; it only implements the
tabs, panel views, and the general warnings.
I came across this when I had a query that threw an error while running
for unrelated reasons. At this point, the query results were in a bad
state, but this caused `safeMax` to be called with `undefined` and
it prevented the extension from starting. This change fixed the error.
This involved changing a few different methods to take a Repository object
instead of taking owner and repo separately. Overall I think this is a good change.
This will add Storybook stories for the error, success, and warning
icons, as well as for the generic `Codicon` component.
To show the available icons for the `Codicon` component, a static JSON
list is generated from the contents of a CSV file included as part of
the `@vscode/codicons` npm package. The command to regenerate the file
is included in the story.
This will change the VariantAnalysisHeader to take the VariantAnalysis
domain model instead of a large amount of props.
It also adds the `canceled` status to the `VariantAnalysisStatus` to
represent a stopped variant analysis.
The "Export All" button was always exporting the selected query, while a
different query could be open in a VSCode panel. This will ensure that
the query ID is passed to the export function, so that the correct query
is exported.
This refactors the CodePaths and FileCodeSnippet components to be more
readable and in style with the rest of the "new" components. It does the
following:
- Remove uses of the `style` and `sx` props; replace it by using
`styled-components` instead
- Remove uses of Primer icons
- Split out the components into multiple files
- Change the colors of the severity to match VSCode colors (and make
them themable)
I haven't removed the use of the Primer `Overlay` component yet, since
this component seems to do quite a lot and the VSCode WebView UI Toolkit
doesn't have a replacement for it.
@@ -67,10 +67,7 @@ members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
This Code of Conduct is adapted from the [Contributor Covenant, version 1.4](https://www.contributor-covenant.org/version/1/4/code-of-conduct.html).
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
[the Contributor Covenant FAQ](https://www.contributor-covenant.org/faq). For more about Contributor Covenant, see [the Contributor Covenant website](https://www.contributor-covenant.org).
Hi there! We're thrilled that you'd like to contribute to this project. Your help is essential for keeping it great.
@@ -23,7 +23,9 @@ Please note that this project is released with a [Contributor Code of Conduct][c
Here are a few things you can do that will increase the likelihood of your pull request being accepted:
* Follow the [style guide][style].
* Write tests. Tests that don't require the VS Code API are located [here](extensions/ql-vscode/test). Integration tests that do require the VS Code API are located [here](extensions/ql-vscode/src/vscode-tests).
* Write tests:
* [Tests that don't require the VS Code API are located here](extensions/ql-vscode/test).
* [Integration tests that do require the VS Code API are located here](extensions/ql-vscode/src/vscode-tests).
* Keep your change as focused as possible. If there are multiple changes you would like to make that are not dependent upon each other, consider submitting them as separate pull requests.
* Write a [good commit message](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).
@@ -54,7 +56,7 @@ Alternatively, you can build the extension within VS Code via `Terminal > Run Bu
Before running any of the launch commands, be sure to have run the `build` command to ensure that the JavaScript is compiled and the resources are copied to the proper location.
We recommend that you keep `npm run watch` running in the backgound and you only need to re-run `npm run build` in the following situations:
We recommend that you keep `npm run watch` running in the background and you only need to re-run `npm run build` in the following situations:
1. on first checkout
2. whenever any of the non-TypeScript resources have changed
@@ -91,99 +93,9 @@ Alternatively, you can start Storybook inside of VSCode. There is a VSCode launc
More information about Storybook can be found inside the **Overview** page once you have launched Storybook.
### Running the unit tests and integration tests that do not require a CLI instance
### Testing
Unit tests and many integration tests do not require a copy of the CodeQL CLI.
Outside of vscode, in the `extensions/ql-vscode` directory, run:
```shell
npm run test && npm run integration
```
Alternatively, you can run the tests inside of vscode. There are several vscode launch configurations defined that run the unit and integration tests. They can all be found in the debug view.
Only the _With CLI_ tests require a CLI instance to run. See below on how to do that.
Running from a terminal, you _must_ set the `TEST_CODEQL_PATH` variable to point to a checkout of the `github/codeql` repository. The appropriate CLI version will be downloaded as part of the test.
### Running the integration tests
You will need to run CLI tests using a task from inside of VS Code called _Launch Integration Tests - With CLI_.
The CLI integration tests require the CodeQL standard libraries in order to run so you will need to clone a local copy of the `github/codeql` repository.
From inside of VSCode, open the `launch.json` file and in the _Launch Integration Tests - With CLI_ task, uncomment the `"${workspaceRoot}/../codeql"` line. If necessary, replace the value with a path to your checkout, and then run the task.
## Releasing (write access required)
1. Double-check the `CHANGELOG.md` contains all desired change comments and has the version to be released with date at the top.
* Go through all recent PRs and make sure they are properly accounted for.
* Make sure all changelog entries have links back to their PR(s) if appropriate.
1. Double-check that the node version we're using matches the one used for VS Code. If it doesn't, you will then need to update the node version in the following files:
* `.nvmrc` - this will enable `nvm` to automatically switch to the correct node version when you're in the project folder
* `.github/workflows/main.yml` - all the "node-version: <version>" settings
* `.github/workflows/release.yml` - the "node-version: <version>" setting
1. Double-check that the extension `package.json` and `package-lock.json` have the version you intend to release. If you are doing a patch release (as opposed to minor or major version) this should already be correct.
1. Create a PR for this release:
* This PR will contain any missing bits from steps 1 and 2. Most of the time, this will just be updating `CHANGELOG.md` with today's date.
* Create a new branch for the release named after the new version. For example: `v1.3.6`
* Create a new commit with a message the same as the branch name.
* Create a PR for this branch.
* Wait for the PR to be merged into `main`
1. Switch to `main` and add a new tag on the `main` branch with your new version (named after the release), e.g.
```bash
git checkout main
git tag v1.3.6
```
If you've accidentally created a badly named tag, you can delete it via
```bash
git tag -d badly-named-tag
```
1. Push the new tag up:
a. If you're using a fork of the repo:
```bash
git push upstream refs/tags/v1.3.6
```
b. If you're working straight in this repo:
```bash
git push origin refs/tags/v1.3.6
```
This will trigger [a release build](https://github.com/github/vscode-codeql/releases) on Actions.
* **IMPORTANT** Make sure you are on the `main` branch and your local checkout is fully updated when you add the tag.
* If you accidentally add the tag to the wrong ref, you can just force push it to the right one later.
1. Monitor the status of the release build in the `Release` workflow in the Actions tab.
1. Download the VSIX from the draft GitHub release at the top of [the releases page](https://github.com/github/vscode-codeql/releases) that is created when the release build finishes.
1. Unzip the `.vsix` and inspect its `package.json` to make sure the version is what you expect,
or look at the source if there's any doubt the right code is being shipped.
1. Install the `.vsix` file into your vscode IDE and ensure the extension can load properly. Run a single command (like run query, or add database).
1. Go to the actions tab of the vscode-codeql repository and select the [Release workflow](https://github.com/github/vscode-codeql/actions?query=workflow%3ARelease).
- If there is an authentication failure when publishing, be sure to check that the authentication keys haven't expired. See below.
1. Approve the deployments of the correct Release workflow. This will automatically publish to Open VSX and VS Code Marketplace.
1. Go to the draft GitHub release in [the releases tab of the repository](https://github.com/github/vscode-codeql/releases), click 'Edit', add some summary description, and publish it.
1. Confirm the new release is marked as the latest release at <https://github.com/github/vscode-codeql/releases>.
1. If documentation changes need to be published, notify documentation team that release has been made.
1. Review and merge the version bump PR that is automatically created by Actions.
## Secrets and authentication for publishing
Repository administrators will need to manage the authentication keys for publishing to the VS Code marketplace and Open VSX. Each requires an authentication token. The VS Code marketplace token expires yearly.
To regenerate the Open VSX token:
1. Log in to the [user settings page on Open VSX](https://open-vsx.org/user-settings/namespaces).
1. Make sure you are a member of the GitHub namespace.
1. Go to the [Access Tokens](https://open-vsx.org/user-settings/tokens) page and generate a new token.
1. Update the secret in the `publish-open-vsx` environment in the project settings.
To regenerate the VSCode Marketplace token, please see our internal documentation. Note that Azure DevOps PATs expire every 90 days and must be regenerated.
[Information about testing can be found here](./docs/testing.md).
@@ -7,7 +7,7 @@ The extension is released. You can download it from the [Visual Studio Marketpla
To see what has changed in the last few versions of the extension, see the [Changelog](https://github.com/github/vscode-codeql/blob/main/extensions/ql-vscode/CHANGELOG.md).
[](https://github.com/github/vscode-codeql/actions?query=workflow%3A%22Build+Extension%22+branch%3Amaster)
@@ -15,11 +15,19 @@ To see what has changed in the last few versions of the extension, see the [Chan
* Shows the flow of data through the results of path queries, which is essential for triaging security results.
* Provides an easy way to run queries from the large, open source repository of [CodeQL security queries](https://github.com/github/codeql).
* Adds IntelliSense to support you writing and editing your own CodeQL query and library files.
* Supports you running CodeQL queries against thousands of repositories on GitHub using multi-repository variant analysis.
## Project goals and scope
This project will track new feature development in CodeQL and, whenever appropriate, bring that functionality to the Visual Studio Code experience.
## Dependencies
This extension depends on the following two extensions for required functionality. They will be installed automatically when you install VS Code CodeQL.
The CodeQL for VS Code extension defines the version of Node.js that it is intended to run with. This Node.js version is used when running most CI and unit tests.
When running in production (i.e. as an extension for a VS Code application) it will use the Node.js version provided by VS Code. This can mean a different Node.js version is used by different users with different versions of VS Code.
We should make sure the CodeQL for VS Code extension works with the Node.js version supplied by all versions of VS Code that we support.
## Checking the version of Node.js supplied by VS Code
You can find this info by selecting "About Visual Studio Code" from the top menu.

## Updating the Node.js version
The following files will need to be updated:
- `extensions/ql-vscode/.nvmrc` - this will enable nvm to automatically switch to the correct Node
version when you're in the project folder. It will also change the Node version the GitHub Actions
workflows use.
- `extensions/ql-vscode/package.json` - the "engines.node: '[VERSION]'" setting
- `extensions/ql-vscode/package.json` - the "@types/node: '[VERSION]'" dependency
Then run `npm install` to update the `extensions/ql-vscode/package-lock.json` file.
## Node.js version used in tests
Unit tests will use whatever version of Node.js is installed locally. In CI this will be the version specified in the workflow.
Integration tests download a copy of VS Code and then will use whatever version of Node.js is provided by VS Code. Our integration tests are currently pinned to an older version of VS Code. See [VS Code version used in tests](./vscode-version.md#vs-code-version-used-in-tests) for more information.
1. Determine the new version number. We default to increasing the patch version number, but make our own judgement about whether a change is big enough to warrant a minor version bump. Common reasons for a minor bump could include:
* Making substantial new features available to all users. This can include lifting a feature flag.
* Breakage in compatibility with recent versions of the CLI.
* Minimum required version of VS Code is increased.
* New telemetry events are added.
* Deprecation or removal of commands.
* Accumulation of many changes, none of which are individually big enough to warrant a minor bump, but which together are. This does not include changes which are purely internal to the extension, such as refactoring, or which are only available behind a feature flag.
1. Create a release branch named after the new version (e.g. `v1.3.6`):
* For a regular scheduled release this branch will be based on latest `main`.
* Make sure your local copy of `main` is up to date so you are including all changes.
* To do a minimal bug-fix release, base the release branch on the tag from the most recent release and then add only the changes you want to release.
* Choose this option if you want to release a specific set of changes (e.g. a bug fix) and don't want to incur extra risk by including other changes that have been merged to the `main` branch.
1. Run the ["Run CLI tests" workflow](https://github.com/github/vscode-codeql/actions/workflows/cli-test.yml) and make sure the tests are green.
* You can skip this step if you are releasing from `main` and there were no merges since the most recent daily scheduled run of this workflow.
1. Double-check the `CHANGELOG.md` contains all desired change comments and has the version to be released with date at the top.
* Go through PRs that have been merged since the previous release and make sure they are properly accounted for.
* Make sure all changelog entries have links back to their PR(s) if appropriate.
1. Double-check that the extension `package.json` and `package-lock.json` have the version you intend to release. If you are doing a patch release (as opposed to minor or major version) this should already be correct.
1. Commit any changes made during steps 4 and 5 with a commit message the same as the branch name (e.g. `v1.3.6`).
1. Open a PR for this release.
* The PR diff should contain:
* Any missing bits from steps 4 and 5. Most of the time, this will just be updating `CHANGELOG.md` with today's date.
* If releasing from a branch other than `main`, this PR will also contain the extension changes being released.
1. Build the extension using `npm run build` and install it on your VS Code using "Install from VSIX".
1. Go through [our test plan](./test-plan.md) to ensure that the extension is working as expected.
1. Create a new tag on the release branch with your new version (named after the release), e.g.
```bash
git tag v1.3.6
```
1. Merge the release PR into `main`.
* If there are conflicts in the changelog, make sure to place any new changelog entries at the top, above the section for the current release, as these new entries are not part of the current release and should be placed in the "unreleased" section.
* The release PR must be merged before pushing the tag to ensure that we always release a commit that is present on the `main` branch. It's not required that the commit is the head of the `main` branch, but there should be no chance of a future release accidentally not including changes from this release.
1. Push the new tag up:
```bash
git push origin refs/tags/v1.3.6
```
1. Find the [Release](https://github.com/github/vscode-codeql/actions?query=workflow%3ARelease) workflow run that was just triggered by pushing the tag, and monitor the status of the release build.
* DO NOT approve the "publish" stages of the workflow yet.
1. Download the VSIX from the draft GitHub release at the top of [the releases page](https://github.com/github/vscode-codeql/releases) that is created when the release build finishes.
1. Unzip the `.vsix` and inspect its `package.json` to make sure the version is what you expect,
or look at the source if there's any doubt the right code is being shipped.
1. Install the `.vsix` file into your vscode IDE and ensure the extension can load properly. Run a single command (like run query, or add database).
1. Approve the deployments of the [Release](https://github.com/github/vscode-codeql/actions?query=workflow%3ARelease) workflow run. This will automatically publish to Open VSX and VS Code Marketplace.
* If there is an authentication failure when publishing, be sure to check that the authentication keys haven't expired. See below.
1. Go to the draft GitHub release in [the releases page](https://github.com/github/vscode-codeql/releases), click 'Edit', add some summary description, and publish it.
1. Confirm the new release is marked as the latest release.
1. If documentation changes need to be published, notify documentation team that release has been made.
1. Review and merge the version bump PR that is automatically created by the Release workflow.
## Secrets and authentication for publishing
Repository administrators will need to manage the authentication keys for publishing to the VS Code marketplace and Open VSX. Each requires an authentication token. The VS Code marketplace token expires yearly.
To regenerate the Open VSX token:
1. Log in to the [user settings page on Open VSX](https://open-vsx.org/user-settings/namespaces).
1. Make sure you are a member of the GitHub namespace.
1. Go to the [Access Tokens](https://open-vsx.org/user-settings/tokens) page and generate a new token.
1. Update the secret in the `publish-open-vsx` environment in the project settings.
To regenerate the VSCode Marketplace token, please see our internal documentation. Note that Azure DevOps PATs expire every 90 days and must be regenerated.
This document describes the manual test plan for the QL extension for Visual Studio Code.
The plan will be executed manually to start with, but the goal is to eventually automate parts of the process (on an
effort-vs-value basis).
## What this doesn't cover
We don't need to test features (and permutations of features) that are covered by automated tests.
## Before releasing the VS Code extension
- Run at least one local query and MRVA using the existing version of the extension (to generate "old" query history items).
- Go through the required test cases listed below.
- Check major PRs since the previous release for specific one-off things to test. Based on that, you might want to
choose to go through some of the Optional Test Cases.
## Required Test Cases
### Local databases
#### Test case 1: Download a database from GitHub
1. Click "Download Database from GitHub" and enter `angular-cn/ng-nice` and select the javascript language if prompted
#### Test case 2: Import a database from an archive
1. Obtain a javascript database for `babel/babel`
- You can do `gh api "/repos/babel/babel/code-scanning/codeql/databases/javascript" -H "Accept: application/zip" > babel.zip` to fetch a database from GitHub.
2. Click "Choose Database from Archive" and select the file you just downloaded above.
### Local queries
#### Test case 1: Running a path problem query and viewing results
1. Open the [javascript UnsafeJQueryPlugin query](https://github.com/github/codeql/blob/main/javascript/ql/src/Security/CWE-079/UnsafeJQueryPlugin.ql).
2. Select the `angular-cn/ng-nice` database (or download it if you don't have one already)
3. Run a local query.
4. Once the query completes:
- Check that the result messages are rendered
- Check that the paths can be opened and paths are rendered correctly
- Check that alert locations can be clicked on
#### Test case 2: Running a problem query and viewing results
1. Open the [javascript ReDoS query](https://github.com/github/codeql/blob/main/javascript/ql/src/Performance/ReDoS.ql).
2. Select the `babel/babel` database (or download it if you don't have one already)
3. Run a local query.
4. Once the query completes:
- Check that the result messages are rendered
- Check that alert locations can be clicked on
#### Test case 3: Running a non-problem query and viewing results
1. Open the [cpp FunLinesOfCode query](https://github.com/github/codeql/blob/main/cpp/ql/src/Metrics/Functions/FunLinesOfCode.ql).
2. Select the `google/brotli` database (or download it if you don't have one already)
3. Run a local query.
4. Once the query completes:
- Check that the results table is rendered
- Check that result locations can be clicked on
#### Test case 4: Can use AST viewer
1. Click on any code location from a previous query to open a source file from a database
2. Open the AST viewing panel and click "View AST"
3. Once the AST is computed:
- Check that it can be navigated
### MRVA
#### Test Case 1: Running a path problem query and viewing results
1. Open the [javascript UnsafeJQueryPlugin query](https://github.com/github/codeql/blob/main/javascript/ql/src/Security/CWE-079/UnsafeJQueryPlugin.ql).
2. Run a MRVA against the following repo list:
```json
{
"name": "test-repo-list",
"repositories": [
"angular-cn/ng-nice",
"apache/hadoop",
"apache/hive"
]
}
```
3. Check that a notification message pops up and the results view is opened.
4. Check the query history. It should:
- Show that an item has been added to the query history
- The item should be marked as "in progress".
5. Once the query starts:
- Check the results view
- Check the code paths view, including the code paths drop down menu.
- Check that the repository filter box works
- Click links to files/locations on GitHub
- Check that the query history item is updated to show the number of results
6. Once the query completes:
- Check that the query history item is updated to show the query status as "complete"
#### Test Case 2: Running a problem query and viewing results
1. Open the [javascript ReDoS query](https://github.com/github/codeql/blob/main/javascript/ql/src/Performance/ReDoS.ql).
2. Run a MRVA against the "Top 10" repositories.
3. Check that a notification message pops up and the results view is opened.
4. Check the query history. It should:
- Show that an item has been added to the query history
- The item should be marked as "in progress".
5. Once the query completes:
- Check that the results are rendered with an alert message and a highlighted code snippet:
#### Test Case 3: Running a non-problem query and viewing results
1. Open the [cpp FunLinesOfCode query](https://github.com/github/codeql/blob/main/cpp/ql/src/Metrics/Functions/FunLinesOfCode.ql).
2. Run a MRVA against a single repository (e.g. `google/brotli`).
3. Check that a notification message pops up and the results view is opened.
4. Check the query history. It should:
- Show that an item has been added to the query history
- The item should be marked as "in progress".
5. Once the query completes:
- Check that the results show up in a table:

#### Test Case 4: Interacting with query history
1. Click a history item (for MRVA):
- Check that exporting results works
- Check that sorting results works
- Check that copying repo lists works
2. Click "Open Results Directory":
- Check that the correct directory is opened and there are results in it
3. Click "View Logs":
- Check that the correct workflow is opened
#### Test Case 5: Canceling a variant analysis run
Run one of the above MRVAs, but cancel it from within VS Code:
- Check that the query is canceled and the query history item is updated.
- Check that the workflow run is also canceled.
- Check that any available results are visible in VS Code.
### CodeQL Model Editor
#### Test Case 1: Opening the model editor
1. Download the `sofastack/sofa-jraft` java database from GitHub.
2. Open the Model Editor with the "CodeQL: Open CodeQL Model Editor" command from the command palette.
- Check that the editor loads and shows methods to model.
- Check that methods are grouped per library (e.g. `rocksdbjni@7.7.3` or `asm@6.0`)
- Check that the "Open source" link works.
- Check that the 'View' button works and the Method Usage panel highlights the correct method and usage
- Check that the Method Modeling panel shows the correct method and modeling state
#### Test Case 2: Model methods
1. Expand one of the libraries.
- Change the model type and check that the other dropdowns change.
- Check that the method modeling panel updates accordingly
2. Save the modeled methods.
3. Click "Open extension pack"
- Check that the file explorer opens a directory with a "models" directory
4. Open the ".model.yml" file corresponding to the library that was changed.
- Check that the file contains entries for the methods that were modeled.
#### Test Case 3: Model with AI
Note that this test requires the feature flag: `codeQL.model.llmGeneration`
1. Click "Model with AI".
- Check that rows change to "Thinking".
- Check that results come back and rows get filled out.
#### Test Case 4: Model as dependency
Note that this test requires the feature flag: `codeQL.model.flowGeneration`
1. Click "Model as dependency"
- Check that groupings are now per package (e.g. `com.alipay.sofa.rraft.option` or `com.google.protobuf`)
2. Click "Generate".
- Check that rows are filled out.
### General
#### Test case 1: Change to a different colour theme
Open at least one of the above MRVAs and at least one local query, then try changing to a different colour theme and check that everything looks sensible.
Are there any components that are not showing up?
## Optional Test Cases
### Modeling Flow
1. Check that a method can have multiple models:
- Add a couple of new models for one method in the model editor
- Save and check that the modeling file (use the 'open extension pack' button to open it) shows multiple models
- Check that the Method Modeling Panel shows the correct multiple models
- Check that you can browse through different models in the Method Modeling Panel
- Check that a 'duplicated classification' error appears in both the model editor and the modeling panel when a duplicate model is created
- Check that a 'conflicting classification' error appears when a neutral model type is paired with a model of the same kind
- Check that clicking on the error highlights the correct modeling in both the editor and the modeling panel
2. Check the Method Usage Panel
- Check that the Method Usage Panel opens and jumps to the correct usage when clicking on 'View' in the model editor
- Check that the first and subsequent usages open when clicking on a usage
- Check that the usage icon color turns green when saving a newly modeled method
- Check that the usage icon color turns red when saving a newly unmodeled method
3. Check the Method Modeling Panel
- Check that the 'Start modeling' button opens a new model editor
- Check that it refreshes the blank state when a model editor is opened/closed
- Check that when modeling in the editor the modeling panel updates accordingly
- Check that when modeling in the modeling panel the model editor updates accordingly
### Selecting MRVA repositories to run on
#### Test case 1: Running a query on a single repository
1. When the repository exists and is public
1. Has a CodeQL database for the correct language
2. Has a CodeQL database for another language
3. Does not have any CodeQL databases
2. When the repository exists and is private
1. Is accessible and has a CodeQL database
2. Is not accessible
3. When the repository does not exist
#### Test case 2: Running a query on a custom repository list
1. The repository list is non-empty
1. All repositories in the list have a CodeQL database
2. Some but not all repositories in the list have a CodeQL database
3. No repositories in the list have a CodeQL database
2. The repository list is empty
#### Test case 3: Running a query on all repositories in an organization
1. The org exists
1. The org contains repositories that have CodeQL databases
2. The org contains repositories of the right language but without CodeQL databases
3. The org contains repositories not of the right language
4. The org contains private repositories that are inaccessible
2. The org does not exist
### Using different types of controller repos for MRVA
#### Test case 1: Running a query when the controller repository is public
1. Can run queries on public repositories
2. Can not run queries on private repositories
#### Test case 2: Running a query when the controller repository is private
1. Can run queries on public repositories
2. Can run queries on private repositories
#### Test case 3: Running a query when the controller repo exists but you do not have write access
1. Cannot run queries
#### Test case 4: Running a query when the controller repo doesn’t exist
1. Cannot run queries
#### Test case 5: Running a query when the "config field" for the controller repo is not set
1. Cannot run queries
### Query History
This requires running a MRVA query and viewing the query history.
The first test case specifies actions that you can do when the query is first run and is in "pending" state. We start
with this since it has quite a limited number of actions you can do.
#### Test case 1: When variant analysis state is "pending"
1. Starts monitoring variant analysis
2. Cannot open query history item
3. Can delete a query history item
1. Item is removed from list in UI
2. Files on disk are deleted (you can get to the files using "open query directory")
4. Can sort query history items
1. By name
2. By query date
3. By result count
5. Cannot open query directory
6. Can open query that produced these results
1. When the file still exists and has not moved
2. When the file does not exist
7. Cannot view logs
8. Cannot copy repository list
9. Cannot export results
10. Cannot select to create a gist
11. Cannot select to save as markdown
12. Cannot cancel analysis
#### Test case 2: When the variant analysis state is not "pending"
1. Query history is loaded when VSCode starts
2. Handles when action workflow was canceled while VSCode was closed
3. Can open query history item
1. Manually by clicking on them
2. Automatically when VSCode starts (if they were open when VSCode was last used)
4. Can delete a query history item
1. Item is removed from list in UI
2. Files on disk are deleted (you can get to the files using "open query directory")
5. Can sort query history items
1. By name
2. By query date
3. By result count
6. Can open query directory
7. Can open query that produced these results
1. When the file still exists and has not moved
2. When the file does not exist
8. Can view logs
9. Can copy repository list
1. Text is copied to clipboard
2. Text is a valid repository list
10. Can export results
11. Can select to create gist
1. A gist is created
2. The first thing in the gist is a summary
3. Contains a file for each repository with results
4. A popup links you to the gist
12. Can select to save as markdown
1. A directory is created on disk
2. Contains a summary file
3. Contains a file for each repository with results
4. A popup allows you to open the directory
#### Test case 3: When variant analysis state is "in_progress"
1. Starts monitoring variant analysis
1. Ready results are downloaded
2. Can cancel analysis
1. Causes the actions run to be canceled
#### Test case 4: When variant analysis state is in final state ("succeeded"/"failed"/"canceled")
1. Stops monitoring variant analysis
1. All results are downloaded if state is succeeded
2. Otherwise, ready results are downloaded, if any are available
2. Cannot cancel analysis
### MRVA results view
This requires running a MRVA query and seeing the results view.
<!-- markdownlint-disable-next-line MD024 -->
#### Test case 1: When variant analysis state is "pending"
1. Can open a results view
2. Results view opens automatically
- When starting variant analysis run
- When VSCode opens (if view was open when VSCode was closed)
3. Results view is empty
#### Test case 2: When variant analysis state is not "pending"
1. Can open a results view
2. Results view opens automatically
1. When starting variant analysis run
2. When VSCode opens (if view was open when VSCode was closed)
3. Can copy repository list
1. Text is copied to clipboard
2. Text is a valid repository list
4. Can export results
1. Only includes repos that you have selected (also see section from query history)
5. Can cancel analysis
6. Can open query file
1. When the file still exists and has not moved
2. When the file does not exist
7. Can open query text
8. Can sort repos
1. Alphabetically
2. By number of results
3. By popularity
9. Can filter repos
10. Shows correct statistics
1. Total number of results
2. Total number of repositories
3. Duration
11. Can see live results
1. Results appear in extension as soon as each query is completed
12. Can view interpreted results (i.e. for a "problem" query)
1. Can view non-path results
2. Can view code paths for "path-problem" queries
13. Can view raw results (i.e. for a non "problem" query)
1. Renders a table
14. Can see skipped repositories
1. Can see repos with no db in a tab
1. Shown warning that explains the tab
2. Can see repos with no access in a tab
1. Shown warning that explains the tab
3. Only shows tab when there are skipped repos
15. Result downloads
1. All results are downloaded automatically
2. Download status is indicated by a spinner (Not currently any indication of progress beyond "downloading" and "not downloading")
3. Only 3 items are downloaded at a time
4. Results for completed queries are still downloaded when
1. Some but not all queries failed
2. The variant analysis was canceled after some queries completed
#### Test case 3: When variant analysis state is in "succeeded" state
1. Can view logs
2. All results are downloaded
#### Test case 4: When variant analysis is in "failed" or "canceled" state
1. Can view logs
1. Results for finished queries are still downloaded.
### MRVA repositories panel
1. Add a list
1. Add a database at the top level
1. Add a database to a list
1. Add the same database at the top level and in a list
1. Delete a list
1. Delete a database from the top level
1. Delete a database from a list
1. Add an owner
1. Remove an owner
1. Rename a list
1. Open on GitHub
1. Select a list (via "Select" button and via context menu action)
1. Run MRVA against a user-defined list
1. Run MRVA against a top-N list
1. Run MRVA against an owner
1. Run MRVA against a database
1. Copy repo list
1. Open config file
1. Make changes via config file (ensure JSON schema is helping out)
1. Close and re-open VS Code (ensure lists are there)
1. Collapse/expand tree nodes
1. Create a new list, right click and select "Add repositories with GitHub Code Search". Enter the language 'python' and the query "UserMixin". This should show a rate limiting notification after a while but eventually populate the list with roughly 770 items.
Error cases that trigger an error notification:
1. Try to add a list with a name that already exists
1. Try to add a top-level database that already exists
1. Try to add a database in a list that already exists in the list
Error cases that show an error in the panel (and only the edit button should be visible):
1. Edit the db config file directly and save invalid JSON
1. Edit the db config file directly and save valid JSON but invalid config (e.g. add an unknown property)
1. Edit the db config file directly and save two lists with the same name
Cases where the welcome view is shown:
1. No controller repo is set in the user's settings JSON.
* Unit tests: these live in the `tests/unit-tests/` directory
* View tests: these live in `src/view/variant-analysis/__tests__/`
* VSCode integration tests:
* `test/vscode-tests/activated-extension` tests: These are intended to cover functionality that requires the full extension to be activated but doesn't require the CLI. This suite is not run against multiple versions of the CLI in CI.
* `test/vscode-tests/no-workspace` tests: These are intended to cover functionality around not having a workspace. The extension is not activated in these tests.
* `test/vscode-tests/minimal-workspace` tests: These are intended to cover functionality that needs a workspace but doesn't require the full extension to be activated.
* CLI integration tests: these live in `test/vscode-tests/cli-integration`
* These tests are intended to cover functionality that is related to the integration between the CodeQL CLI and the extension. These tests are run against each supported version of the CLI in CI.
The CLI integration tests require an instance of the CodeQL CLI to run so they will require some extra setup steps. When adding new tests to our test suite, please be mindful of whether they need to be in the cli-integration folder. If the tests don't depend on the CLI, they are better suited to being a VSCode integration test.
Any test data you're using (sample projects, config files, etc.) must go in a `test/vscode-tests/*/data` directory. When you run the tests, the test runner will copy the data directory to `out/vscode-tests/*/data`.
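Because the data directory is copied next to the compiled tests, a test can resolve its fixtures relative to its own location. A minimal sketch (the fixture name is made up for illustration):
```typescript
import { join } from "path";

// After the copy step, the `data` directory sits next to the compiled test file.
const dataDir = join(__dirname, "data");

// Hypothetical fixture path, used only to show how test data is typically resolved.
const sampleProjectPath = join(dataDir, "sample-project");
```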
## Running the tests
Pre-requisites:
1. Run `npm run build`.
2. You will need to have `npm run watch` running in the background.
### 1. From the terminal
Then, from the `extensions/ql-vscode` directory, use the appropriate command to run the tests:
* Unit tests: `npm run test:unit`
* View Tests: `npm run test:view`
* VSCode integration tests: `npm run test:vscode-integration`
#### Running CLI integration tests from the terminal
The CLI integration tests require the CodeQL standard libraries in order to run so you will need to clone a local copy of the `github/codeql` repository.
1. Set the `TEST_CODEQL_PATH` environment variable: running from a terminal, you _must_ set the `TEST_CODEQL_PATH` variable to point to a checkout of the `github/codeql` repository. The appropriate CLI version will be downloaded as part of the test.
2. Run your test command:
```shell
cd extensions/ql-vscode && npm run test:cli-integration
```
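Putting both steps together, a typical invocation might look like this (the checkout path below is only an example; point it at wherever you cloned `github/codeql`):
```shell
# Example path only - adjust to your local checkout of github/codeql
export TEST_CODEQL_PATH="$HOME/git/codeql"
cd extensions/ql-vscode && npm run test:cli-integration
```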
### 2. From VSCode
Alternatively, you can run the tests inside of VSCode. There are several VSCode launch configurations defined that run the unit and integration tests.
You will need to run tests using a task from inside of VS Code, under the "Run and Debug" view:
* Unit tests: run the _Launch Unit Tests_ task
* View Tests: run the _Launch Unit Tests - React_ task
* VSCode integration tests: run the _Launch Unit Tests - No Workspace_ and _Launch Unit Tests - Minimal Workspace_ tasks
#### Running CLI integration tests from VSCode
The CLI integration tests require the CodeQL standard libraries in order to run so you will need to clone a local copy of the `github/codeql` repository.
1. Set the `TEST_CODEQL_PATH` environment variable to point to a checkout of the `github/codeql` repository. The appropriate CLI version will be downloaded as part of the test.
2. Set the codeql path in VSCode's launch configuration: open `launch.json` and under the _Launch Integration Tests - With CLI_ section, uncomment the `"${workspaceRoot}/../codeql"` line. If you've cloned the `github/codeql` repo to a different path, replace the value with the correct path.
3. Run the VSCode task from the "Run and Debug" view called _Launch Integration Tests - With CLI_.
## Running a single test
### 1. Running a single test from the terminal
The easiest way to run a single test is to change the `it` of the test to `it.only` and then run the test command with some additional options
to only run tests for this specific file. For example, to run the test `test/vscode-tests/cli-integration/run-queries.test.ts`:
```shell
npm run test:cli-integration -- --runTestsByPath test/vscode-tests/cli-integration/run-queries.test.ts
```
You can also use the `--testNamePattern` option to run a specific test within a file. For example, to run the `should create a QueryEvaluationInfo` test in `test/vscode-tests/cli-integration/run-queries.test.ts`:
```shell
npm run test:cli-integration -- --runTestsByPath test/vscode-tests/cli-integration/run-queries.test.ts --testNamePattern "should create a QueryEvaluationInfo"
```
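For reference, marking a test with `it.only` looks like the sketch below. The test name matches the example above, but the surrounding code and body are illustrative only:
```typescript
describe("run-queries", () => {
  // Only this test runs while the `.only` marker is present.
  // Remember to remove `.only` before committing.
  it.only("should create a QueryEvaluationInfo", async () => {
    // ... existing test body ...
  });
});
```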
### 2. Running a single test from VSCode
Alternatively, you can run a single test inside VSCode. To do so, install the [Jest Runner](https://marketplace.visualstudio.com/items?itemName=firsttris.vscode-jest-runner) extension. Then,
you will have quicklinks to run a single test from within test files. To run a single unit or integration test, click the "Run" button. Debugging a single test is currently only supported
for unit tests by default. To debug integration tests, open the `.vscode/settings.json` file and uncomment the `jestrunner.debugOptions` lines. This will allow you to debug integration tests.
Please make sure to revert this change before committing; with this setting enabled, it is not possible to debug unit tests.
Without the Jest Runner extension, you can also use the "Launch Selected Unit Test (vscode-codeql)" launch configuration to run a single unit test.
## Using a mock GitHub API server
Multi-Repo Variant Analyses (MRVA) rely on the GitHub API. In order to make development and testing easy, we have functionality that allows us to intercept requests to the GitHub API and provide mock responses.
### Using a pre-recorded test scenario
To run a mock MRVA scenario, follow these steps:
1. Enable the mock GitHub API server by adding the following in your VS Code user settings (which can be found by running the `Preferences: Open User Settings (JSON)` VS Code command):
```json
"codeQL.mockGitHubApiServer":{
"enabled":true
}
```
1. Run the `CodeQL: Mock GitHub API Server: Load Scenario` command from the command palette, and choose one of the scenarios to load.
1. Execute a normal MRVA. At this point you should see the scenario being played out, rather than an actual MRVA running.
1. Once you're done, you can stop using the mock scenario with `CodeQL: Mock GitHub API Server: Unload Scenario`
If you want to replay the same scenario you should unload and reload it so requests are replayed from the start.
### Recording a new test scenario
To record a new mock MRVA scenario, follow these steps:
1. Enable the mock GitHub API server by adding the following in your VS Code user settings (which can be found by running the `Preferences: Open User Settings (JSON)` VS Code command):
```json
"codeQL.mockGitHubApiServer":{
"enabled":true
}
```
1. Run the `CodeQL: Mock GitHub API Server: Start Scenario Recording` command from the command palette.
1. Execute a normal MRVA.
1. Once what you wanted to record is done (e.g. the MRVA has finished), run the `CodeQL: Mock GitHub API Server: Save Scenario` command from the command palette.
1. The scenario should then be available for replaying.
If you want to cancel recording, run the `CodeQL: Mock GitHub API Server: Cancel Scenario Recording` command.
Once the scenario has been recorded, it's often useful to remove some of the requests to speed up the replay, particularly ones that fetch the variant analysis status. Once some of the request files have been manually removed, the [fix-scenario-file-numbering script](../extensions/ql-vscode/scripts/fix-scenario-file-numbering.ts) can be used to update the numbering of the remaining files. See the script file for details on how to use it.
### Scenario data location
Pre-recorded scenarios are stored in `./src/common/mock-gh-api/scenarios`. However, it's possible to configure the location by setting the `codeQL.mockGitHubApiServer.scenariosPath` configuration property in the VS Code user settings.
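For example, to point the extension at a scenarios directory outside the repository, you could add something like the following to your user settings (the path is just an example):
```json
"codeQL.mockGitHubApiServer.scenariosPath": "/path/to/my/scenarios"
```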
The CodeQL for VS Code extension specifies the versions of VS Code that it is compatible with. VS Code will only offer to install and upgrade the extension when this version range is satisfied.
## Where is the VS Code version specified
1. Hard limit in [`package.json`](https://github.com/github/vscode-codeql/blob/606bfd7f877d9fffe4ff83b78015ab15f8840b12/extensions/ql-vscode/package.json#L16)
This is the value that VS Code understands and respects. If a user does not meet this version requirement then VS Code will not offer to install the CodeQL for VS Code extension, and if the extension is already installed then it will silently refuse to upgrade the extension.
1. Soft limit in [`extension.ts`](https://github.com/github/vscode-codeql/blob/606bfd7f877d9fffe4ff83b78015ab15f8840b12/extensions/ql-vscode/src/extension.ts#L307)
This value is used internally by the CodeQL for VS Code extension to provide a warning to users without blocking them from installing or upgrading. If the extension detects that this version range is not met, it will show a warning message prompting the user to upgrade their VS Code version to get the latest features of CodeQL.
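For illustration, the hard limit is expressed through the standard `engines` field in `package.json`; the range below is only an example and may not match the current value:
```json
"engines": {
  "vscode": "^1.82.0"
}
```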
## When to update the VS Code version
Generally, we should aim to support as wide a range of VS Code versions as we can, so we do not update the minimum VS Code version requirement unless there is a good reason to do so.
Reasons for updating the minimum VS Code version include:
- A new feature is included in VS Code. We may want to ensure that it is available to use so we do not have to provide an alternative code path.
- A breaking change has happened in VS Code, and it is not possible to support both new and old versions.
Also consider what percentage of our users are using each VS Code version. This information is available in our telemetry.
## How to update the VS Code version
To provide a good experience to users, it is recommended to update the `MIN_VERSION` in `extension.ts` first and release, and then update the `vscode` version in `package.json` and release again. Staging this update across two releases gives users on older VS Code versions a chance to upgrade before VS Code silently stops offering them extension updates.
## VS Code version used in tests
Our integration tests are currently pinned to use an older version of VS Code due to <https://github.com/github/vscode-codeql/issues/2402>.
This version is specified in [`jest-runner-vscode.config.base.js`](https://github.com/github/vscode-codeql/blob/d93f2b67c84e79737b0ce4bb74e31558b5f5166e/extensions/ql-vscode/test/vscode-tests/jest-runner-vscode.config.base.js#L17).
Until this is resolved, it will limit how far we can update our minimum supported version of VS Code.
- Add new CodeQL views for managing databases and queries:
1. A queries panel that shows all queries in your workspace. It allows you to view, create, and run queries in one place.
2. A language selector, which allows you to quickly filter databases and queries by language.
For more information, see the [documentation](https://codeql.github.com/docs/codeql-for-visual-studio-code/analyzing-your-projects/#filtering-databases-and-queries-by-language).
- When adding a CodeQL database, we no longer add the database source folder to the workspace by default (since this caused bugs in single-folder workspaces). [#3047](https://github.com/github/vscode-codeql/pull/3047)
- You can manually add individual database source folders to the workspace with the "Add Database Source to Workspace" right-click command in the databases view.
- To restore the old behavior of adding all database source folders by default, set the `codeQL.addingDatabases.addDatabaseSourceToWorkspace` setting to `true`.
- Rename the `codeQL.databaseDownload.allowHttp` setting to `codeQL.addingDatabases.allowHttp`, so that database-related settings are grouped together in the Settings UI. [#3047](https://github.com/github/vscode-codeql/pull/3047) & [#3069](https://github.com/github/vscode-codeql/pull/3069)
- The "Sort by Language" action in the databases view now sorts by name within each language. [#3055](https://github.com/github/vscode-codeql/pull/3055)
## 1.9.4 - 6 November 2023
No user facing changes.
## 1.9.3 - 26 October 2023
- Sorted result set filenames now include a hash of the result set name instead of the full name. [#2955](https://github.com/github/vscode-codeql/pull/2955)
- The "Install Pack Dependencies" will now only list CodeQL packs located in the workspace. [#2960](https://github.com/github/vscode-codeql/pull/2960)
- Fix a bug where the "View Query Log" action for a query history item was not working. [#2984](https://github.com/github/vscode-codeql/pull/2984)
- Add a command to sort items in the databases view by language. [#2993](https://github.com/github/vscode-codeql/pull/2993)
- Fix not being able to open the results directory or evaluator log for a cancelled local query run. [#2996](https://github.com/github/vscode-codeql/pull/2996)
- Fix empty row in alert path when the SARIF location was empty. [#3018](https://github.com/github/vscode-codeql/pull/3018)
## 1.9.2 - 12 October 2023
- Fix a bug where the query to Find Definitions in database source files would not be cancelled appropriately. [#2885](https://github.com/github/vscode-codeql/pull/2885)
- It is now possible to show the language of query history items using the `%l` specifier in the `codeQL.queryHistory.format` setting. Note that this only works for queries run after this upgrade, and older items will show `unknown` as a language. [#2892](https://github.com/github/vscode-codeql/pull/2892)
- Increase the required version of VS Code to 1.82.0. [#2877](https://github.com/github/vscode-codeql/pull/2877)
- Fix a bug where the query server was restarted twice after configuration changes. [#2884](https://github.com/github/vscode-codeql/pull/2884).
- Add support for the `telemetry.telemetryLevel` setting. For more information, see the [telemetry documentation](https://codeql.github.com/docs/codeql-for-visual-studio-code/about-telemetry-in-codeql-for-visual-studio-code). [#2824](https://github.com/github/vscode-codeql/pull/2824).
- Add a "CodeQL: Trim Cache" command that clears the evaluation cache of a database except for predicates annotated with the `cached` keyword. Its purpose is to get accurate performance measurements when tuning the final stage of a query, like a data-flow configuration. This is equivalent to the `codeql database cleanup --mode=normal` CLI command. In contrast, the existing "CodeQL: Clear Cache" command clears the entire cache. CodeQL CLI v2.15.1 or later is required. [#2928](https://github.com/github/vscode-codeql/pull/2928)
- Fix syntax highlighting directly after import statements with instantiation arguments. [#2792](https://github.com/github/vscode-codeql/pull/2792)
- The `debug.saveBeforeStart` setting is now respected when running variant analyses. [#2950](https://github.com/github/vscode-codeql/pull/2950)
- The 'open database' button of the model editor was renamed to 'open source'. Also, it's now only available if the source archive is available as a workspace folder. [#2945](https://github.com/github/vscode-codeql/pull/2945)
## 1.9.1 - 29 September 2023
- Add warning when using a VS Code version older than 1.82.0. [#2854](https://github.com/github/vscode-codeql/pull/2854)
- Fix a bug when parsing large evaluation log summaries. [#2858](https://github.com/github/vscode-codeql/pull/2858)
- Right-align and format numbers in raw result tables. [#2864](https://github.com/github/vscode-codeql/pull/2864)
- Remove rate limit warning notifications when using Code Search to add repositories to a variant analysis list. [#2812](https://github.com/github/vscode-codeql/pull/2812)
## 1.9.0 - 19 September 2023
- Release the [CodeQL model editor](https://codeql.github.com/docs/codeql/codeql-for-visual-studio-code/using-the-codeql-model-editor) to create CodeQL model packs for Java frameworks. Open the editor using the "CodeQL: Open CodeQL Model Editor (Beta)" command. [#2823](https://github.com/github/vscode-codeql/pull/2823)
## 1.8.12 - 11 September 2023
- Fix a bug where variant analysis queries would fail for queries in the `codeql/java-queries` query pack. [#2786](https://github.com/github/vscode-codeql/pull/2786)
## 1.8.11 - 7 September 2023
- Update how variant analysis results are displayed. For queries with ["path-problem" or "problem" `@kind`](https://codeql.github.com/docs/writing-codeql-queries/metadata-for-codeql-queries/#metadata-properties), you can choose to display the results as rendered alerts or as a table of raw results. For queries with any other `@kind`, the results are displayed as a table. [#2745](https://github.com/github/vscode-codeql/pull/2745) & [#2749](https://github.com/github/vscode-codeql/pull/2749)
- When running variant analyses, don't download artifacts for repositories with no results. [#2736](https://github.com/github/vscode-codeql/pull/2736)
- Group the extension settings, so that they're easier to find in the Settings UI. [#2706](https://github.com/github/vscode-codeql/pull/2706)
## 1.8.10 - 15 August 2023
- Add a code lens to make the `CodeQL: Open Referenced File` command more discoverable. Click the "Open referenced file" prompt in a `.qlref` file to jump to the referenced `.ql` file. [#2704](https://github.com/github/vscode-codeql/pull/2704)
## 1.8.9 - 3 August 2023
- Remove "last updated" information and sorting from variant analysis results view. [#2637](https://github.com/github/vscode-codeql/pull/2637)
- Links to code on GitHub now include column numbers as well as line numbers. [#2406](https://github.com/github/vscode-codeql/pull/2406)
- No longer highlight trailing commas for jump to definition. [#2615](https://github.com/github/vscode-codeql/pull/2615)
- Fix a bug where the QHelp preview page was not being refreshed after changes to the underlying `.qhelp` file. [#2660](https://github.com/github/vscode-codeql/pull/2660)
## 1.8.8 - 17 July 2023
- Remove support for CodeQL CLI versions older than 2.9.4. [#2610](https://github.com/github/vscode-codeql/pull/2610)
- Implement syntax highlighting for the `additional` and `default` keywords. [#2609](https://github.com/github/vscode-codeql/pull/2609)
## 1.8.7 - 29 June 2023
- Show a run button on the file tab for query files, that will start a local query. This button will only show when a local database is selected in the extension. [#2544](https://github.com/github/vscode-codeql/pull/2544)
- Add a `CodeQL: Quick Evaluation Count` command to generate the count summary statistics of the results set
without spending the time to compute locations and strings. [#2475](https://github.com/github/vscode-codeql/pull/2475)
## 1.8.6 - 14 June 2023
- Add repositories to a variant analysis list with GitHub Code Search. [#2439](https://github.com/github/vscode-codeql/pull/2439) and [#2476](https://github.com/github/vscode-codeql/pull/2476)
## 1.8.5 - 6 June 2023
- Add settings `codeQL.variantAnalysis.defaultResultsFilter` and `codeQL.variantAnalysis.defaultResultsSort` for configuring how variant analysis results are filtered and sorted in the results view. The default is to show all repositories, and to sort by the number of results. [#2392](https://github.com/github/vscode-codeql/pull/2392)
- Fix bug to ensure error messages have complete stack trace in message logs. [#2425](https://github.com/github/vscode-codeql/pull/2425)
- Fix bug where the `CodeQL: Compare Query` command did not work for comparing quick-eval queries. [#2422](https://github.com/github/vscode-codeql/pull/2422)
- Update text of copy and export buttons in variant analysis results view to clarify that they only copy/export the selected/filtered results. [#2427](https://github.com/github/vscode-codeql/pull/2427)
- Add warning when using unsupported CodeQL CLI version. [#2428](https://github.com/github/vscode-codeql/pull/2428)
- Retry variant analysis results download if connection times out. [#2440](https://github.com/github/vscode-codeql/pull/2440)
## 1.8.4 - 3 May 2023
- Avoid repeated error messages when unable to monitor a variant analysis. [#2396](https://github.com/github/vscode-codeql/pull/2396)
- Fix bug where a variant analysis didn't display the `#select` results set correctly when the [query metadata](https://codeql.github.com/docs/writing-codeql-queries/about-codeql-queries/#query-metadata) didn't exactly match the query results. [#2395](https://github.com/github/vscode-codeql/pull/2395)
- On the variant analysis results page, show the count of successful analyses instead of completed analyses, and indicate the reason why analyses were not successful. [#2349](https://github.com/github/vscode-codeql/pull/2349)
- Fix bug where the "CodeQL: Set Current Database" command didn't always select the database. [#2384](https://github.com/github/vscode-codeql/pull/2384)
## 1.8.3 - 26 April 2023
- Added ability to filter repositories for a variant analysis to only those that have results [#2343](https://github.com/github/vscode-codeql/pull/2343)
- Add new configuration option to allow downloading databases from http, non-secure servers. [#2332](https://github.com/github/vscode-codeql/pull/2332)
- Remove title actions from the query history panel that depended on history items being selected. [#2350](https://github.com/github/vscode-codeql/pull/2350)
## 1.8.2 - 12 April 2023
- Fix bug where users could end up with the managed CodeQL CLI getting uninstalled during upgrades and not reinstalled. [#2294](https://github.com/github/vscode-codeql/pull/2294)
- Fix bug that was causing code flows to not get updated when switching between results. [#2288](https://github.com/github/vscode-codeql/pull/2288)
- Restart the CodeQL language server whenever the _CodeQL: Restart Query Server_ command is invoked. This avoids bugs where the CLI version changes to support new language features, but the language server is not updated. [#2238](https://github.com/github/vscode-codeql/pull/2238)
- Avoid requiring a manual restart of the query server when the [external CLI config file](https://docs.github.com/en/code-security/codeql-cli/using-the-codeql-cli/specifying-command-options-in-a-codeql-configuration-file#using-a-codeql-configuration-file) changes. [#2289](https://github.com/github/vscode-codeql/pull/2289)
## 1.8.1 - 23 March 2023
- Show data flow paths of a variant analysis in a new tab. [#2172](https://github.com/github/vscode-codeql/pull/2172) & [#2182](https://github.com/github/vscode-codeql/pull/2182)
- Show labels of entities in exported CSV results. [#2170](https://github.com/github/vscode-codeql/pull/2170)
## 1.8.0 - 9 March 2023
- Send telemetry about unhandled errors happening within the extension. [#2125](https://github.com/github/vscode-codeql/pull/2125)
- Enable collection of telemetry concerning interactions with UI elements, including buttons, links, and other inputs. [#2114](https://github.com/github/vscode-codeql/pull/2114)
- Prevent the automatic installation of CodeQL CLI version 2.12.3 to avoid a bug in the language server. CodeQL CLI 2.12.2 will be used instead. [#2126](https://github.com/github/vscode-codeql/pull/2126)
## 1.7.10 - 23 February 2023
- Fix bug that was causing unwanted error notifications.
## 1.7.9 - 20 February 2023
No user facing changes.
## 1.7.8 - 2 February 2023
- Renamed command "CodeQL: Run Query" to "CodeQL: Run Query on Selected Database". [#1962](https://github.com/github/vscode-codeql/pull/1962)
- Remove support for CodeQL CLI versions older than 2.7.6. [#1788](https://github.com/github/vscode-codeql/pull/1788)
## 1.7.7 - 13 December 2022
- Increase the required version of VS Code to 1.67.0. [#1662](https://github.com/github/vscode-codeql/pull/1662)
## 1.7.6 - 21 November 2022
- Warn users when their VS Code version is too old to support all features in the vscode-codeql extension. [#1674](https://github.com/github/vscode-codeql/pull/1674)
## 1.7.5 - 8 November 2022
- Fix a bug where the AST Viewer was not working unless the associated CodeQL library pack is in the workspace. [#1735](https://github.com/github/vscode-codeql/pull/1735)
## 1.7.4 - 29 October 2022
No user facing changes.
## 1.7.3 - 28 October 2022
- Fix a bug where databases may be lost if VS Code is restarted while the extension is being started up. [#1638](https://github.com/github/vscode-codeql/pull/1638)
- Add commands for navigating up, down, left, or right in the result viewer. Previously there were only commands for moving up and down the currently-selected path. We suggest binding keyboard shortcuts to these commands, for navigating the result viewer using the keyboard. [#1568](https://github.com/github/vscode-codeql/pull/1568)
## 1.7.2 - 14 October 2022
- Fix a bug where results created in older versions were thought to be unsuccessful. [#1605](https://github.com/github/vscode-codeql/pull/1605)
## 1.7.1 - 12 October 2022
- Fix a bug where it was not possible to add a database folder if the folder name starts with `db-`. [#1565](https://github.com/github/vscode-codeql/pull/1565)
- Ensure the results view opens in an editor column beside the currently active editor. [#1557](https://github.com/github/vscode-codeql/pull/1557)
## 1.7.0 - 20 September 2022
- Remove ability to download databases from LGTM. [#1467](https://github.com/github/vscode-codeql/pull/1467)
- Remove the ability to manually upgrade databases from the context menu on databases. Databases are non-destructively upgraded automatically so for most users this was not needed. For advanced users this is still available in the Command Palette. [#1501](https://github.com/github/vscode-codeql/pull/1501)
- Always restart the query server after a manual database upgrade. This avoids a bug in the query server where an invalid dbscheme was being retained in memory after an upgrade. [#1519](https://github.com/github/vscode-codeql/pull/1519)
### Quick start: Installing and configuring the extension
1. [Install the extension](#installing-the-extension).
*Note: vscode-codeql installs the following dependencies for required functionality: [Test Adapter Converter](https://marketplace.visualstudio.com/items?itemName=ms-vscode.test-adapter-converter), [Test Explorer UI](https://marketplace.visualstudio.com/items?itemName=hbenl.vscode-test-explorer).*
1. [Check access to the CodeQL CLI](#checking-access-to-the-codeql-cli).
1. [Clone the CodeQL starter workspace](#cloning-the-codeql-starter-workspace).
### Quick start: Using CodeQL
1. [Import a database from GitHub](#importing-a-database-from-github).
1. [Run a query](#running-a-query).
---
<!-- markdownlint-disable-next-line MD024 -->
## Quick start: Installing and configuring the extension
### Installing the extension
If you're using your own clone of the CodeQL standard libraries, you can do a `git pull` from where you have the libraries checked out.
<!-- markdownlint-disable-next-line MD024 -->
## Quick start: Using CodeQL
You can find all the commands contributed by the extension in the Command Palette (**Ctrl+Shift+P** or **Cmd+Shift+P**) by typing `CodeQL`. Many of them are also accessible through the interface and via keyboard shortcuts.
### Importing a database from GitHub
While you can use the [CodeQL CLI to create your own databases](https://codeql.github.com/docs/codeql-cli/creating-codeql-databases/), the simplest way to start is by downloading a database from GitHub.com.
1. Find a project that you're interested in on GitHub.com, for example [Apache Kafka](https://github.com/apache/kafka).
1. Copy the link to that project, for example `https://github.com/apache/kafka`.
1. In VS Code, open the Command Palette and choose the **CodeQL: Download Database from GitHub** command.
1. Paste the link you copied earlier.
1. Select the language for the database you want to download (only required if the project has databases for multiple languages).
1. Once the CodeQL database has been imported, it is displayed in the Databases view.
For more information, see [Choosing a database](https://codeql.github.com/docs/codeql-for-visual-studio-code/analyzing-your-projects/#choosing-a-database) on codeql.github.com.
### Running a query
The instructions below assume that you're using the CodeQL starter workspace, or that you've added the CodeQL libraries and queries repository to your workspace.
1. Expand the `ql` folder and locate a query to run. The standard queries are grouped by target language and then type, for example: `ql/java/ql/src/Likely Bugs`.
1. Open a query (`.ql`) file.
1. Right-click in the query window and select **CodeQL: Run Query on Selected Database**. Alternatively, open the Command Palette (**Ctrl+Shift+P** or **Cmd+Shift+P**), type `Run Query`, then select **CodeQL: Run Query on Selected Database**.
The CodeQL extension runs the query on the current database using the CLI and reports progress in the bottom right corner of the application.
When the results are ready, they're displayed in the CodeQL Query Results view. Use the dropdown menu to choose between different forms of result output.
If there are any problems running a query, a notification is displayed in the bottom right corner of the application. In addition to the error message, the notification includes details of how to fix the problem.
### Keyboard navigation
If you wish to navigate the query results from your keyboard, you can bind shortcuts to the **CodeQL: Navigate Up/Down/Left/Right in Result Viewer** commands.
## What next?
For more information about the CodeQL extension, [see the documentation](https://codeql.github.com/docs/codeql-for-visual-studio-code/). Otherwise, you could:
// Automatically clear mock calls, instances, contexts and results before every test
// clearMocks: true,
// Indicates whether the coverage information should be collected while executing the test
// collectCoverage: false,
// An array of glob patterns indicating a set of files for which coverage information should be collected
// collectCoverageFrom: undefined,
// The directory where Jest should output its coverage files
// coverageDirectory: undefined,
// An array of regexp pattern strings used to skip coverage collection
// coveragePathIgnorePatterns: [
// "/node_modules/"
// ],
// Indicates which provider should be used to instrument code for coverage
coverageProvider: 'v8',
// A list of reporter names that Jest uses when writing coverage reports
// coverageReporters: [
// "json",
// "text",
// "lcov",
// "clover"
// ],
// An object that configures minimum threshold enforcement for coverage results
// coverageThreshold: undefined,
// A path to a custom dependency extractor
// dependencyExtractor: undefined,
// Make calling deprecated APIs throw helpful error messages
// errorOnDeprecated: false,
// The default configuration for fake timers
// fakeTimers: {
// "enableGlobally": false
// },
// Force coverage collection from ignored files using an array of glob patterns
// forceCoverageMatch: [],
// A path to a module which exports an async function that is triggered once before all test suites
// globalSetup: undefined,
// A path to a module which exports an async function that is triggered once after all test suites
// globalTeardown: undefined,
// A set of global variables that need to be available in all test environments
// globals: {},
// The maximum amount of workers used to run your tests. Can be specified as % or a number. E.g. maxWorkers: 10% will use 10% of your CPU amount + 1 as the maximum worker number. maxWorkers: 2 will use a maximum of 2 workers.
// maxWorkers: "50%",
// An array of directory names to be searched recursively up from the requiring module's location
<pathd="M 35.300905,316.97546 H 93.308718 V 116.76062 L 30.203249,129.41687 V 97.07312 L 92.957155,84.41687 h 35.507815 v 232.55859 h 58.00781 v 29.88282 H 35.300905 Z"fill="#C5C5C5"/>
<pathd="M 35.300905,316.97546 H 93.308718 V 116.76062 L 30.203249,129.41687 V 97.07312 L 92.957155,84.41687 h 35.507815 v 232.55859 h 58.00781 v 29.88282 H 35.300905 Z"/>