Compare commits

...

55 Commits

Author SHA1 Message Date
Dave Bartolomeo
ac7211c117 Merge pull request #1475 from dbartol/dbartol/extension-release/work
Prepare for release 1.6.11
2022-08-25 16:36:44 -04:00
Dave Bartolomeo
d1d13fbd2e Update changelog for release 2022-08-25 13:11:50 -04:00
Dave Bartolomeo
f99166d26c Update Node version to match vscode 2022-08-25 13:01:35 -04:00
dependabot[bot]
9cd6f9a768 Bump d3 and @types/d3 in /extensions/ql-vscode (#1461)
Bumps [d3](https://github.com/d3/d3) and [@types/d3](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/d3). These dependencies needed to be updated together.

Updates `d3` from 6.7.0 to 7.6.1
- [Release notes](https://github.com/d3/d3/releases)
- [Changelog](https://github.com/d3/d3/blob/main/CHANGES.md)
- [Commits](https://github.com/d3/d3/compare/v6.7.0...v7.6.1)

Updates `@types/d3` from 6.7.5 to 7.4.0
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/d3)

---
updated-dependencies:
- dependency-name: d3
  dependency-type: direct:production
  update-type: version-update:semver-major
- dependency-name: "@types/d3"
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-24 08:13:04 -07:00
Koen Vlaswinkel
4dd16f4611 Merge pull request #1472 from github/koesie10/fix-data-not-loaded-in-mrva-results
Fix data not being loaded in MRVA results panel
2022-08-24 15:14:26 +02:00
Koen Vlaswinkel
2113d08545 Fix data not being loaded in MRVA results panel
When the MRVA results panel is closed (so the panel gets disposed) and
opened again, it would not load the MRVA data (such as whether a query
has already been downloaded). This fixes it by also resetting the
internal state of whether the panel is loaded when the panel is
disposed.
2022-08-24 13:20:01 +02:00
Koen Vlaswinkel
5b5ef26864 Merge pull request #1471 from github/revert-1466-koesie10/add-github-download-button
Revert "Remove canary requirement for GitHub database download"
2022-08-24 12:12:44 +02:00
Koen Vlaswinkel
c5a6e64df8 Revert "Remove canary requirement for GitHub database download" 2022-08-24 11:51:44 +02:00
Charis Kyriakou
178d626062 Upgrade webview-ui-toolkit version with long link fix (#1469) 2022-08-24 10:17:41 +01:00
Dave Bartolomeo
d1d48b3506 Merge pull request #1468 from dbartol/dbartol/pmod-highlight/work
Add `implements` and `signature` to syntax highlighting
2022-08-23 17:02:03 -04:00
Dave Bartolomeo
9180d1d9fc Fix comment 2022-08-23 16:42:57 -04:00
Dave Bartolomeo
674c5ecbff Add implements and signature to syntax highlighting 2022-08-23 14:56:47 -04:00
Koen Vlaswinkel
edcac6925c Merge pull request #1466 from github/koesie10/add-github-download-button
Remove canary requirement for GitHub database download
2022-08-23 16:24:12 +02:00
Koen Vlaswinkel
c10500c5ea Update CHANGELOG 2022-08-23 14:58:36 +02:00
Koen Vlaswinkel
0832850009 Remove canary requirement for GitHub database download 2022-08-23 14:45:48 +02:00
Alexander Eyers-Taylor
b352830674 Improve startup time (#1465)
* ArchiveFileSystem: Only parse zips once

* CLIServer: Only get version once
2022-08-23 11:05:10 +01:00
Andrew Eisenberg
e913165249 Merge pull request #1463 from github/aeisenberg/bump-timeout 2022-08-17 17:21:31 -07:00
Andrew Eisenberg
ef94bb3d38 Bump telemetry test timeout
This test is failing occasionally on our CI system. Let's see if this
change prevents the failures.
2022-08-17 15:47:30 -07:00
Shati Patel
4d6076c4ea Escape HTML characters when rendering MRVA results as markdown (#1462) 2022-08-17 10:52:36 +01:00
Dave Bartolomeo
43650fde00 Merge pull request #1454 from github/dbartol/join-order
Report suspicious join orders
2022-08-15 14:13:35 -04:00
Angela P Wen
f2c72a67f6 Bump CLI version to 2.10.3 for integration tests (#1460) 2022-08-15 16:41:26 +00:00
Dave Bartolomeo
2b1f3227ce Fix computation of result sizes in IN_LAYER events 2022-08-12 17:00:26 -04:00
Dave Bartolomeo
841f1d3310 Replace console logging to route through problem reporter 2022-08-12 16:43:21 -04:00
Dave Bartolomeo
99756ae63b Fix PR feedback 2022-08-12 16:25:52 -04:00
Dave Bartolomeo
9a2bea39e6 Better handling of missing log data 2022-08-12 16:14:24 -04:00
Dave Bartolomeo
1aab49c719 Specify return type 2022-08-12 16:01:58 -04:00
Dave Bartolomeo
cf925c256f Update extensions/ql-vscode/src/log-insights/log-scanner-service.ts
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
2022-08-12 15:50:28 -04:00
Dave Bartolomeo
8383a76e43 Merge branch 'dbartol/join-order' of https://github.com/github/vscode-codeql into dbartol/join-order 2022-08-12 15:41:52 -04:00
Dave Bartolomeo
c6d792f41e Fix PR feedback
Better handling of malformed RA
2022-08-12 15:39:32 -04:00
Dave Bartolomeo
277192e7d3 Update extensions/ql-vscode/src/log-insights/join-order.ts
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
2022-08-12 14:59:20 -04:00
Dave Bartolomeo
85988ecf34 Update extensions/ql-vscode/src/log-insights/join-order.ts
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
2022-08-12 14:50:10 -04:00
Dave Bartolomeo
49d12674b7 Cache regexprs 2022-08-12 14:47:50 -04:00
Dave Bartolomeo
beeb19dc05 Fix typo 2022-08-12 12:58:46 -04:00
Dave Bartolomeo
de88d27057 Update extensions/ql-vscode/src/log-insights/join-order.ts
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
2022-08-12 12:49:29 -04:00
Dave Bartolomeo
eb2d00e999 Update extensions/ql-vscode/src/log-insights/join-order.ts
Co-authored-by: Andrew Eisenberg <aeisenberg@github.com>
2022-08-12 12:48:28 -04:00
Dave Bartolomeo
d58fb54928 Better formatting of metrics 2022-08-11 13:51:11 -04:00
Dave Bartolomeo
fdc209ca08 Test for log scanning 2022-08-10 18:07:59 -04:00
Dave Bartolomeo
28092f2b86 Move more of log scanning into pure code 2022-08-10 17:33:55 -04:00
Dave Bartolomeo
8970ad78ae Remove code added via bad merge 2022-08-10 13:51:08 -04:00
Dave Bartolomeo
e7a0c58940 Fix CodeQL alert 2022-08-10 13:18:00 -04:00
Dave Bartolomeo
02270aaeee Fix lint 2022-08-10 13:13:59 -04:00
Dave Bartolomeo
51fb03b4b1 Fix tests to match code changes 2022-08-10 13:11:34 -04:00
Dave Bartolomeo
838a2b71ac Scan logs on change in current query 2022-08-09 18:02:27 -04:00
Charis Kyriakou
f01c421d42 Merge pull request #1458 from github/version/bump-to-v1.6.11
Bump version to v1.6.11
2022-08-09 16:59:14 +01:00
charisk
561bc6f53c Bump version to v1.6.11 2022-08-09 15:21:26 +00:00
Dave Bartolomeo
3c57597a19 Share code for splitting records from pseudo-JSONL 2022-08-05 17:36:45 -04:00
Dave Bartolomeo
e8d5029912 Merge remote-tracking branch 'origin/main' into dbartol/join-order-temp 2022-08-05 17:34:52 -04:00
Dave Bartolomeo
cb514f5c78 Pre-cleanup to avoid merge conflicts 2022-08-05 14:59:40 -04:00
Dave Bartolomeo
57bb8cee41 Update regexes to match new summary text 2022-08-04 16:17:27 -04:00
Dave Bartolomeo
1219ef4a8c Remove unnecessary command 2022-08-04 16:17:09 -04:00
Dave Bartolomeo
677a0f7940 Fix lint 2022-08-04 14:42:47 -04:00
Dave Bartolomeo
cff235c420 Auto-format 2022-05-03 18:14:03 -04:00
Dave Bartolomeo
1089a052ec Initial implementation of join order metric scanning 2022-05-03 13:20:30 -04:00
Dave Bartolomeo
1d195cb347 Merge remote-tracking branch 'origin/main' into dbartol/join-order 2022-04-27 17:50:50 -04:00
Dave Bartolomeo
8d8ed28aea Add necessary dependencies 2022-04-27 17:50:46 -04:00
35 changed files with 16401 additions and 2015 deletions

View File

@@ -22,7 +22,7 @@ jobs:
- uses: actions/setup-node@v1
with:
node-version: '16.13.0'
node-version: '16.14.0'
- name: Install dependencies
working-directory: extensions/ql-vscode
@@ -82,7 +82,7 @@ jobs:
- uses: actions/setup-node@v1
with:
node-version: '16.13.0'
node-version: '16.14.0'
- name: Install dependencies
working-directory: extensions/ql-vscode
@@ -139,7 +139,7 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest, windows-latest]
version: ['v2.6.3', 'v2.7.6', 'v2.8.5', 'v2.9.4', 'v2.10.2', 'nightly']
version: ['v2.6.3', 'v2.7.6', 'v2.8.5', 'v2.9.4', 'v2.10.3', 'nightly']
env:
CLI_VERSION: ${{ matrix.version }}
NIGHTLY_URL: ${{ needs.find-nightly.outputs.url }}
@@ -151,7 +151,7 @@ jobs:
- uses: actions/setup-node@v1
with:
node-version: '16.13.0'
node-version: '16.14.0'
- name: Install dependencies
working-directory: extensions/ql-vscode

View File

@@ -22,7 +22,7 @@ jobs:
- uses: actions/setup-node@v1
with:
node-version: '16.13.0'
node-version: '16.14.0'
- name: Install dependencies
run: |

View File

@@ -1 +1 @@
v16.13.0
v16.14.0

View File

@@ -1,5 +1,9 @@
# CodeQL for Visual Studio Code: Changelog
## 1.6.11 - 25 August 2022
No user facing changes.
## 1.6.10 - 9 August 2022
No user facing changes.

File diff suppressed because it is too large.

View File

@@ -4,7 +4,7 @@
"description": "CodeQL for Visual Studio Code",
"author": "GitHub",
"private": true,
"version": "1.6.10",
"version": "1.6.11",
"publisher": "GitHub",
"license": "MIT",
"icon": "media/VS-marketplace-CodeQL-icon.png",
@@ -1204,13 +1204,14 @@
"@primer/octicons-react": "^16.3.0",
"@primer/react": "^35.0.0",
"@vscode/codicons": "^0.0.31",
"@vscode/webview-ui-toolkit": "^1.0.0",
"@vscode/webview-ui-toolkit": "^1.0.1",
"child-process-promise": "^2.2.1",
"classnames": "~2.2.6",
"d3": "^6.3.1",
"d3": "^7.6.1",
"d3-graphviz": "^2.6.1",
"fs-extra": "^10.0.1",
"glob-promise": "^4.2.2",
"immutable": "^4.0.0",
"js-yaml": "^4.1.0",
"minimist": "~1.2.6",
"nanoid": "^3.2.0",
@@ -1241,7 +1242,7 @@
"@types/chai-as-promised": "~7.1.2",
"@types/child-process-promise": "^2.2.1",
"@types/classnames": "~2.2.9",
"@types/d3": "^6.2.0",
"@types/d3": "^7.4.0",
"@types/d3-graphviz": "^2.6.6",
"@types/del": "^4.0.0",
"@types/fs-extra": "^9.0.6",

View File

@@ -167,21 +167,26 @@ type Archive = {
dirMap: DirectoryHierarchyMap;
};
async function parse_zip(zipPath: string): Promise<Archive> {
if (!await fs.pathExists(zipPath))
throw vscode.FileSystemError.FileNotFound(zipPath);
const archive: Archive = { unzipped: await unzipper.Open.file(zipPath), dirMap: new Map };
archive.unzipped.files.forEach(f => { ensureFile(archive.dirMap, path.resolve('/', f.path)); });
return archive;
}
export class ArchiveFileSystemProvider implements vscode.FileSystemProvider {
private readOnlyError = vscode.FileSystemError.NoPermissions('write operation attempted, but source archive filesystem is readonly');
private archives: Map<string, Archive> = new Map;
private archives: Map<string, Promise<Archive>> = new Map;
private async getArchive(zipPath: string): Promise<Archive> {
if (!this.archives.has(zipPath)) {
if (!await fs.pathExists(zipPath))
throw vscode.FileSystemError.FileNotFound(zipPath);
const archive: Archive = { unzipped: await unzipper.Open.file(zipPath), dirMap: new Map };
archive.unzipped.files.forEach(f => { ensureFile(archive.dirMap, path.resolve('/', f.path)); });
this.archives.set(zipPath, archive);
this.archives.set(zipPath, parse_zip(zipPath));
}
return this.archives.get(zipPath)!;
return await this.archives.get(zipPath)!;
}
root = new Directory('');
// metadata
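
This hunk is the core of the "Only parse zips once" startup-time change: the cache now stores Promise<Archive> values keyed by zip path, so concurrent requests for the same archive await a single parse_zip call instead of re-reading the zip. Below is a minimal sketch of the same promise-memoization pattern in isolation; the class name and the loadArchive stand-in are hypothetical, not part of the change.

// Sketch: memoize an async computation by caching the Promise itself, so the
// work runs at most once per key even under concurrent callers.
class PromiseCache<T> {
  private readonly cache = new Map<string, Promise<T>>();

  constructor(private readonly compute: (key: string) => Promise<T>) {}

  get(key: string): Promise<T> {
    if (!this.cache.has(key)) {
      // Store the in-flight promise immediately; later callers share it.
      this.cache.set(key, this.compute(key));
    }
    return this.cache.get(key)!;
  }
}

// Hypothetical usage mirroring getArchive():
//   const archives = new PromiseCache(loadArchive);
//   const archive = await archives.get('/path/to/src.zip');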

View File

@@ -168,7 +168,7 @@ export class CodeQLCliServer implements Disposable {
nullBuffer: Buffer;
/** Version of current cli, lazily computed by the `getVersion()` method */
private _version: SemVer | undefined;
private _version: Promise<SemVer> | undefined;
/**
* The languages supported by the current version of the CLI, computed by `getSupportedLanguages()`.
@@ -695,7 +695,7 @@ export class CodeQLCliServer implements Disposable {
* @param inputPath The path of an evaluation event log.
* @param outputPath The path to write a JSON summary of it to.
*/
async generateJsonLogSummary(
async generateJsonLogSummary(
inputPath: string,
outputPath: string,
): Promise<string> {
@@ -985,13 +985,13 @@ export class CodeQLCliServer implements Disposable {
public async getVersion() {
if (!this._version) {
this._version = await this.refreshVersion();
this._version = this.refreshVersion();
// this._version is only undefined upon config change, so we reset CLI-based context key only when necessary.
await commands.executeCommand(
'setContext', 'codeql.supportsEvalLog', await this.cliConstraints.supportsPerQueryEvalLog()
);
}
return this._version;
return await this._version;
}
private async refreshVersion() {

View File

@@ -100,6 +100,8 @@ import { exportRemoteQueryResults } from './remote-queries/export-results';
import { RemoteQuery } from './remote-queries/remote-query';
import { EvalLogViewer } from './eval-log-viewer';
import { SummaryLanguageSupport } from './log-insights/summary-language-support';
import { JoinOrderScannerProvider } from './log-insights/join-order';
import { LogScannerService } from './log-insights/log-scanner-service';
/**
* extension.ts
@@ -483,6 +485,11 @@ async function activateWithInstalledDistribution(
ctx.subscriptions.push(qhm);
void logger.log('Initializing evaluation log scanners.');
const logScannerService = new LogScannerService(qhm);
ctx.subscriptions.push(logScannerService);
ctx.subscriptions.push(logScannerService.scanners.registerLogScannerProvider(new JoinOrderScannerProvider()));
void logger.log('Reading query history');
await qhm.readQueryHistory();
@@ -556,7 +563,7 @@ async function activateWithInstalledDistribution(
undefined,
item,
);
item.completeThisQuery(completedQueryInfo);
qhm.completeQuery(item, completedQueryInfo);
await showResultsForCompletedQuery(item as CompletedLocalQueryInfo, WebviewReveal.NotForced);
// Note we must update the query history view after showing results as the
// display and sorting might depend on the number of results

View File

@@ -0,0 +1,460 @@
import * as I from 'immutable';
import { EvaluationLogProblemReporter, EvaluationLogScanner, EvaluationLogScannerProvider } from './log-scanner';
import { InLayer, ComputeRecursive, SummaryEvent, PipelineRun, ComputeSimple } from './log-summary';
const DEFAULT_WARNING_THRESHOLD = 50;
/**
* Like `max`, but returns 0 if no meaningful maximum can be computed.
*/
function safeMax(it: Iterable<number>) {
const m = Math.max(...it);
return Number.isFinite(m) ? m : 0;
}
/**
* Compute a key for the maps that are sent to report generation.
* Should only be used on events that are known to define queryCausingWork.
*/
function makeKey(
queryCausingWork: string | undefined,
predicate: string,
suffix = ''
): string {
if (queryCausingWork === undefined) {
throw new Error(
'queryCausingWork was not defined on an event we expected it to be defined for!'
);
}
return `${queryCausingWork}:${predicate}${suffix ? ' ' + suffix : ''}`;
}
const DEPENDENT_PREDICATES_REGEXP = (() => {
const regexps = [
// SCAN id
String.raw`SCAN\s+([0-9a-zA-Z:#_]+)\s`,
// JOIN id WITH id
String.raw`JOIN\s+([0-9a-zA-Z:#_]+)\s+WITH\s+([0-9a-zA-Z:#_]+)\s`,
// AGGREGATE id, id
String.raw`AGGREGATE\s+([0-9a-zA-Z:#_]+)\s*,\s+([0-9a-zA-Z:#_]+)`,
// id AND NOT id
String.raw`([0-9a-zA-Z:#_]+)\s+AND\s+NOT\s+([0-9a-zA-Z:#_]+)`,
// INVOKE HIGHER-ORDER RELATION rel ON <id, ..., id>
String.raw`INVOKE\s+HIGHER-ORDER\s+RELATION\s[^\s]+\sON\s+<([0-9a-zA-Z:#_<>]+)((?:,[0-9a-zA-Z:#_<>]+)*)>`,
// SELECT id
String.raw`SELECT\s+([0-9a-zA-Z:#_]+)`
];
return new RegExp(
`${String.raw`\{[0-9]+\}\s+[0-9a-zA-Z]+\s=\s(?:` + regexps.join('|')})`
);
})();
function getDependentPredicates(operations: string[]): I.List<string> {
return I.List(operations).flatMap(operation => {
const matches = DEPENDENT_PREDICATES_REGEXP.exec(operation.trim());
if (matches !== null) {
return I.List(matches)
.rest() // Skip the first group as it's just the entire string
.filter(x => !!x && !x.match('r[0-9]+|PRIMITIVE')) // Only keep the references to predicates.
.flatMap(x => x.split(',')) // Group 2 in the INVOKE HIGHER_ORDER RELATION case is a comma-separated list of identifiers.
.filter(x => !!x); // Remove empty strings
} else {
return I.List();
}
});
}
function getMainHash(event: InLayer | ComputeRecursive): string {
switch (event.evaluationStrategy) {
case 'IN_LAYER':
return event.mainHash;
case 'COMPUTE_RECURSIVE':
return event.raHash;
}
}
/**
* Sum arrays a and b element-wise. The shorter array is padded with 0s if the arrays are not the same length.
*/
function pointwiseSum(a: Int32Array, b: Int32Array, problemReporter: EvaluationLogProblemReporter): Int32Array {
function reportIfInconsistent(ai: number, bi: number) {
if (ai === -1 && bi !== -1) {
problemReporter.log(
`Operation was not evaluated in the first pipeline, but it was evaluated in the accumulated pipeline (with tuple count ${bi}).`
);
}
if (ai !== -1 && bi === -1) {
problemReporter.log(
`Operation was evaluated in the first pipeline (with tuple count ${ai}), but it was not evaluated in the accumulated pipeline.`
);
}
}
const length = Math.max(a.length, b.length);
const result = new Int32Array(length);
for (let i = 0; i < length; i++) {
const ai = a[i] || 0;
const bi = b[i] || 0;
// -1 is used to represent the absence of a tuple count for a line in the pretty-printed RA (e.g. an empty line), so we ignore those.
if (i < a.length && i < b.length && (ai === -1 || bi === -1)) {
result[i] = -1;
reportIfInconsistent(ai, bi);
} else {
result[i] = ai + bi;
}
}
return result;
}
function pushValue<K, V>(m: Map<K, V[]>, k: K, v: V) {
if (!m.has(k)) {
m.set(k, []);
}
m.get(k)!.push(v);
return m;
}
function computeJoinOrderBadness(
maxTupleCount: number,
maxDependentPredicateSize: number,
resultSize: number
): number {
return maxTupleCount / Math.max(maxDependentPredicateSize, resultSize);
}
/**
* A bucket contains the pointwise sum of the tuple counts, result sizes and dependent predicate sizes.
* For each (predicate, order) in an SCC, we will compute a bucket.
*/
interface Bucket {
tupleCounts: Int32Array;
resultSize: number;
dependentPredicateSizes: I.Map<string, number>;
}
class JoinOrderScanner implements EvaluationLogScanner {
// Map a predicate hash to its result size
private readonly predicateSizes = new Map<string, number>();
private readonly layerEvents = new Map<string, (ComputeRecursive | InLayer)[]>();
// Map a key of the form 'query-with-demand : predicate name' to its badness input.
private readonly maxTupleCountMap = new Map<string, number[]>();
private readonly resultSizeMap = new Map<string, number[]>();
private readonly maxDependentPredicateSizeMap = new Map<string, number[]>();
private readonly joinOrderMetricMap = new Map<string, number>();
constructor(
private readonly problemReporter: EvaluationLogProblemReporter,
private readonly warningThreshold: number) {
}
public onEvent(event: SummaryEvent): void {
if (
event.completionType !== undefined &&
event.completionType !== 'SUCCESS'
) {
return; // Skip any evaluation that wasn't successful
}
this.recordPredicateSizes(event);
this.computeBadnessMetric(event);
}
public onDone(): void {
void this;
}
private recordPredicateSizes(event: SummaryEvent): void {
switch (event.evaluationStrategy) {
case 'EXTENSIONAL':
case 'COMPUTED_EXTENSIONAL':
case 'COMPUTE_SIMPLE':
case 'CACHACA':
case 'CACHE_HIT': {
this.predicateSizes.set(event.raHash, event.resultSize);
break;
}
case 'SENTINEL_EMPTY': {
this.predicateSizes.set(event.raHash, 0);
break;
}
case 'COMPUTE_RECURSIVE':
case 'IN_LAYER': {
this.predicateSizes.set(event.raHash, event.resultSize);
// layerEvents are indexed by the mainHash.
const hash = getMainHash(event);
if (!this.layerEvents.has(hash)) {
this.layerEvents.set(hash, []);
}
this.layerEvents.get(hash)!.push(event);
break;
}
}
}
private reportProblemIfNecessary(event: SummaryEvent, iteration: number, metric: number): void {
if (metric >= this.warningThreshold) {
this.problemReporter.reportProblem(event.predicateName, event.raHash, iteration,
`Relation '${event.predicateName}' has an inefficient join order. Its join order metric is ${metric.toFixed(2)}, which is larger than the threshold of ${this.warningThreshold.toFixed(2)}.`);
}
}
private computeBadnessMetric(event: SummaryEvent): void {
if (
event.completionType !== undefined &&
event.completionType !== 'SUCCESS'
) {
return; // Skip any evaluation that wasn't successful
}
switch (event.evaluationStrategy) {
case 'COMPUTE_SIMPLE': {
if (!event.pipelineRuns) {
// skip if the optional pipelineRuns field is not present.
break;
}
// Compute the badness metric for a non-recursive predicate. The metric in this case is defined as:
// badness = (max tuple count in the pipeline) / (largest predicate this pipeline depends on)
const key = makeKey(event.queryCausingWork, event.predicateName);
const resultSize = event.resultSize;
// There is only one entry in `pipelineRuns` if it's a non-recursive predicate.
const { maxTupleCount, maxDependentPredicateSize } =
this.badnessInputsForNonRecursiveDelta(event.pipelineRuns[0], event);
if (maxDependentPredicateSize > 0) {
pushValue(this.maxTupleCountMap, key, maxTupleCount);
pushValue(this.resultSizeMap, key, resultSize);
pushValue(
this.maxDependentPredicateSizeMap,
key,
maxDependentPredicateSize
);
const metric = computeJoinOrderBadness(maxTupleCount, maxDependentPredicateSize, resultSize!);
this.joinOrderMetricMap.set(key, metric);
this.reportProblemIfNecessary(event, 0, metric);
}
break;
}
case 'COMPUTE_RECURSIVE': {
// Compute the badness metric for a recursive predicate for each ordering.
const sccMetricInput = this.badnessInputsForRecursiveDelta(event);
// Loop through each predicate in the SCC
sccMetricInput.forEach((buckets, predicate) => {
// Loop through each ordering of the predicate
buckets.forEach((bucket, raReference) => {
// Format the key as demanding-query:name (ordering)
const key = makeKey(
event.queryCausingWork,
predicate,
`(${raReference})`
);
const maxTupleCount = Math.max(...bucket.tupleCounts);
const resultSize = bucket.resultSize;
const maxDependentPredicateSize = Math.max(
...bucket.dependentPredicateSizes.values()
);
if (maxDependentPredicateSize > 0) {
pushValue(this.maxTupleCountMap, key, maxTupleCount);
pushValue(this.resultSizeMap, key, resultSize);
pushValue(
this.maxDependentPredicateSizeMap,
key,
maxDependentPredicateSize
);
const metric = computeJoinOrderBadness(maxTupleCount, maxDependentPredicateSize, resultSize);
const oldMetric = this.joinOrderMetricMap.get(key);
if ((oldMetric === undefined) || (metric > oldMetric)) {
this.joinOrderMetricMap.set(key, metric);
}
}
});
});
break;
}
}
}
/**
* Iterate through an SCC with main node `event`.
*/
private iterateSCC(
event: ComputeRecursive,
func: (
inLayerEvent: ComputeRecursive | InLayer,
run: PipelineRun,
iteration: number
) => void
): void {
const sccEvents = this.layerEvents.get(event.raHash)!;
const nextPipeline: number[] = new Array(sccEvents.length).fill(0);
const maxIteration = Math.max(
...sccEvents.map(e => e.predicateIterationMillis.length)
);
for (let iteration = 0; iteration < maxIteration; ++iteration) {
// Loop through each predicate in this iteration
for (let predicate = 0; predicate < sccEvents.length; ++predicate) {
const inLayerEvent = sccEvents[predicate];
const iterationTime =
inLayerEvent.predicateIterationMillis.length <= iteration
? -1
: inLayerEvent.predicateIterationMillis[iteration];
if (iterationTime != -1) {
const run: PipelineRun =
inLayerEvent.pipelineRuns[nextPipeline[predicate]++];
func(inLayerEvent, run, iteration);
}
}
}
}
/**
* Compute the maximum tuple count and maximum dependent predicate size for a non-recursive pipeline
*/
private badnessInputsForNonRecursiveDelta(
pipelineRun: PipelineRun,
event: ComputeSimple
): { maxTupleCount: number; maxDependentPredicateSize: number } {
const dependentPredicateSizes = Object.values(event.dependencies).map(hash =>
this.predicateSizes.get(hash) ?? 0 // Should always be present, but zero is a safe default.
);
const maxDependentPredicateSize = safeMax(dependentPredicateSizes);
return {
maxTupleCount: safeMax(pipelineRun.counts),
maxDependentPredicateSize: maxDependentPredicateSize
};
}
private prevDeltaSizes(event: ComputeRecursive, predicate: string, i: number) {
// If an iteration isn't present in the map it means it was skipped because the optimizer
// inferred that it was empty. So its size is 0.
return this.curDeltaSizes(event, predicate, i - 1);
}
private curDeltaSizes(event: ComputeRecursive, predicate: string, i: number) {
// If an iteration isn't present in the map it means it was skipped because the optimizer
// inferred that it was empty. So its size is 0.
return (
this.layerEvents.get(event.raHash)?.find(x => x.predicateName === predicate)?.deltaSizes[i] ?? 0
);
}
/**
* Compute the metric dependent predicate sizes and the result size for a predicate in an SCC.
*/
private badnessInputsForLayer(
event: ComputeRecursive,
inLayerEvent: InLayer | ComputeRecursive,
raReference: string,
iteration: number
) {
const dependentPredicates = getDependentPredicates(
inLayerEvent.ra[raReference]
);
let dependentPredicateSizes: I.Map<string, number>;
// We treat the base case as a non-recursive pipeline. In that case, the dependent predicates are
// the dependencies of the base case and the cur_deltas.
if (raReference === 'base') {
dependentPredicateSizes = I.Map(
dependentPredicates.map((pred): [string, number] => {
// A base case cannot contain a `prev_delta`, but it can contain a `cur_delta`.
let size = 0;
if (pred.endsWith('#cur_delta')) {
size = this.curDeltaSizes(
event,
pred.slice(0, -'#cur_delta'.length),
iteration
);
} else {
const hash = event.dependencies[pred];
size = this.predicateSizes.get(hash)!;
}
return [pred, size];
})
);
} else {
// It's a non-base case in a recursive pipeline. In that case, the dependent predicates are
// only the prev_deltas.
dependentPredicateSizes = I.Map(
dependentPredicates
.flatMap(pred => {
// If it's actually a prev_delta
if (pred.endsWith('#prev_delta')) {
// Return the predicate without the #prev_delta suffix.
return [pred.slice(0, -'#prev_delta'.length)];
} else {
// Not a recursive delta. Skip it.
return [];
}
})
.map((prev): [string, number] => {
const size = this.prevDeltaSizes(event, prev, iteration);
return [prev, size];
})
);
}
const deltaSize = inLayerEvent.deltaSizes[iteration];
return { dependentPredicateSizes, deltaSize };
}
/**
* Compute the metric input for all the events in a SCC that starts with main node `event`
*/
private badnessInputsForRecursiveDelta(event: ComputeRecursive): Map<string, Map<string, Bucket>> {
// nameToOrderToBucket : predicate name -> ordering (i.e., standard, order_500000, etc.) -> bucket
const nameToOrderToBucket = new Map<string, Map<string, Bucket>>();
// Iterate through the SCC and compute the metric inputs
this.iterateSCC(event, (inLayerEvent, run, iteration) => {
const raReference = run.raReference;
const predicateName = inLayerEvent.predicateName;
if (!nameToOrderToBucket.has(predicateName)) {
nameToOrderToBucket.set(predicateName, new Map());
}
const orderTobucket = nameToOrderToBucket.get(predicateName)!;
if (!orderTobucket.has(raReference)) {
orderTobucket.set(raReference, {
tupleCounts: new Int32Array(0),
resultSize: 0,
dependentPredicateSizes: I.Map()
});
}
const { dependentPredicateSizes, deltaSize } = this.badnessInputsForLayer(
event,
inLayerEvent,
raReference,
iteration
);
const bucket = orderTobucket.get(raReference)!;
// Pointwise sum the tuple counts
const newTupleCounts = pointwiseSum(
bucket.tupleCounts,
new Int32Array(run.counts),
this.problemReporter
);
const resultSize = bucket.resultSize + deltaSize;
// Pointwise sum the deltas.
const newDependentPredicateSizes = bucket.dependentPredicateSizes.mergeWith(
(oldSize, newSize) => oldSize + newSize,
dependentPredicateSizes
);
orderTobucket.set(raReference, {
tupleCounts: newTupleCounts,
resultSize: resultSize,
dependentPredicateSizes: newDependentPredicateSizes
});
});
return nameToOrderToBucket;
}
}
export class JoinOrderScannerProvider implements EvaluationLogScannerProvider {
public createScanner(problemReporter: EvaluationLogProblemReporter): EvaluationLogScanner {
return new JoinOrderScanner(problemReporter, DEFAULT_WARNING_THRESHOLD);
}
}
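
To make the metric concrete, here is a rough worked example using the definitions above; the numbers are invented for illustration only.

// Illustrative only: the same arithmetic as computeJoinOrderBadness.
const maxTupleCount = 1_000_000;
const maxDependentPredicateSize = 10_000;
const resultSize = 5_000;
const metric = maxTupleCount / Math.max(maxDependentPredicateSize, resultSize); // 100
// 100 >= DEFAULT_WARNING_THRESHOLD (50), so reportProblemIfNecessary would flag
// this predicate with a join-order warning.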

View File

@@ -0,0 +1,23 @@
import * as fs from 'fs-extra';
/**
* Read a file consisting of multiple JSON objects. Each object is separated from the previous one
* by a double newline sequence. This is basically a more human-readable form of JSONL.
*
* The current implementation reads the entire text of the document into memory, but in the future
* it will stream the document to improve the performance with large documents.
*
* @param path The path to the file.
* @param handler Callback to be invoked for each top-level JSON object in order.
*/
export async function readJsonlFile(path: string, handler: (value: any) => Promise<void>): Promise<void> {
const logSummary = await fs.readFile(path, 'utf-8');
// Remove newline delimiters because summary is in .jsonl format.
const jsonSummaryObjects: string[] = logSummary.split(/\r?\n\r?\n/g);
for (const obj of jsonSummaryObjects) {
const jsonObj = JSON.parse(obj);
await handler(jsonObj);
}
}
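
A small usage sketch for readJsonlFile, assuming only the signature above; countEvents is a hypothetical helper, not part of the change.

import { readJsonlFile } from './jsonl-reader';

// Sketch: count the top-level JSON objects in a log summary.
async function countEvents(summaryPath: string): Promise<number> {
  let count = 0;
  await readJsonlFile(summaryPath, async () => {
    count++;
  });
  return count;
}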

View File

@@ -0,0 +1,109 @@
import { Diagnostic, DiagnosticSeverity, languages, Range, Uri } from 'vscode';
import { DisposableObject } from '../pure/disposable-object';
import { QueryHistoryManager } from '../query-history';
import { QueryHistoryInfo } from '../query-results';
import { EvaluationLogProblemReporter, EvaluationLogScannerSet } from './log-scanner';
import { PipelineInfo, SummarySymbols } from './summary-parser';
import * as fs from 'fs-extra';
import { logger } from '../logging';
/**
* Compute the key used to find a predicate in the summary symbols.
* @param name The name of the predicate.
* @param raHash The RA hash of the predicate.
* @returns The key of the predicate, consisting of `name@shortHash`, where `shortHash` is the first
* eight characters of `raHash`.
*/
function predicateSymbolKey(name: string, raHash: string): string {
return `${name}@${raHash.substring(0, 8)}`;
}
/**
* Implementation of `EvaluationLogProblemReporter` that generates `Diagnostic` objects to display
* in the VS Code "Problems" view.
*/
class ProblemReporter implements EvaluationLogProblemReporter {
public readonly diagnostics: Diagnostic[] = [];
constructor(private readonly symbols: SummarySymbols | undefined) {
}
public reportProblem(predicateName: string, raHash: string, iteration: number, message: string): void {
const nameWithHash = predicateSymbolKey(predicateName, raHash);
const predicateSymbol = this.symbols?.predicates[nameWithHash];
let predicateInfo: PipelineInfo | undefined = undefined;
if (predicateSymbol !== undefined) {
predicateInfo = predicateSymbol.iterations[iteration];
}
if (predicateInfo !== undefined) {
const range = new Range(predicateInfo.raStartLine, 0, predicateInfo.raEndLine + 1, 0);
this.diagnostics.push(new Diagnostic(range, message, DiagnosticSeverity.Error));
}
}
public log(message: string): void {
void logger.log(message);
}
}
export class LogScannerService extends DisposableObject {
public readonly scanners = new EvaluationLogScannerSet();
private readonly diagnosticCollection = this.push(languages.createDiagnosticCollection('ql-eval-log'));
private currentItem: QueryHistoryInfo | undefined = undefined;
constructor(qhm: QueryHistoryManager) {
super();
this.push(qhm.onDidChangeCurrentQueryItem(async (item) => {
if (item !== this.currentItem) {
this.currentItem = item;
await this.scanEvalLog(item);
}
}));
this.push(qhm.onDidCompleteQuery(async (item) => {
if (item === this.currentItem) {
await this.scanEvalLog(item);
}
}));
}
/**
* Scan the evaluation log for a query, and report any diagnostics.
*
* @param query The query whose log is to be scanned.
*/
public async scanEvalLog(
query: QueryHistoryInfo | undefined
): Promise<void> {
this.diagnosticCollection.clear();
if ((query?.t !== 'local')
|| (query.evalLogSummaryLocation === undefined)
|| (query.jsonEvalLogSummaryLocation === undefined)) {
return;
}
const diagnostics = await this.scanLog(query.jsonEvalLogSummaryLocation, query.evalLogSummarySymbolsLocation);
const uri = Uri.file(query.evalLogSummaryLocation);
this.diagnosticCollection.set(uri, diagnostics);
}
/**
* Scan the evaluator summary log for problems, using the scanners for all registered providers.
* @param jsonSummaryLocation The file path of the JSON summary log.
* @param symbolsLocation The file path of the symbols file for the human-readable log summary.
* @returns An array of `Diagnostic`s representing the problems found by scanners.
*/
private async scanLog(jsonSummaryLocation: string, symbolsLocation: string | undefined): Promise<Diagnostic[]> {
let symbols: SummarySymbols | undefined = undefined;
if (symbolsLocation !== undefined) {
symbols = JSON.parse(await fs.readFile(symbolsLocation, { encoding: 'utf-8' }));
}
const problemReporter = new ProblemReporter(symbols);
await this.scanners.scanLog(jsonSummaryLocation, problemReporter);
return problemReporter.diagnostics;
}
}
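
As a concrete illustration of the key format used by predicateSymbolKey above (the function is private to this module, and the inputs here are made up):

// 'raHash' is truncated to its first eight characters and appended after '@'.
predicateSymbolKey('Expr::getParent#ff', '0123456789abcdef');
// => 'Expr::getParent#ff@01234567'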

View File

@@ -0,0 +1,103 @@
import { SummaryEvent } from './log-summary';
import { readJsonlFile } from './jsonl-reader';
/**
* Callback interface used to report diagnostics from a log scanner.
*/
export interface EvaluationLogProblemReporter {
/**
* Report a potential problem detected in the evaluation log.
*
* @param predicateName The mangled name of the predicate with the problem.
* @param raHash The RA hash of the predicate with the problem.
* @param iteration The iteration number with the problem. For a non-recursive predicate, this
* must be zero.
* @param message The problem message.
*/
reportProblem(predicateName: string, raHash: string, iteration: number, message: string): void;
/**
* Log a message about a problem in the implementation of the scanner. These will typically be
* displayed separate from any problems reported via `reportProblem()`.
*/
log(message: string): void;
}
/**
* Interface implemented by a log scanner. Instances are created via
* `EvaluationLogScannerProvider.createScanner()`.
*/
export interface EvaluationLogScanner {
/**
* Called for each event in the log summary, in order. The implementation can report problems via
* the `EvaluationLogProblemReporter` interface that was supplied to `createScanner()`.
* @param event The log summary event.
*/
onEvent(event: SummaryEvent): void;
/**
* Called after all events in the log summary have been processed. The implementation can report
* problems via the `EvaluationLogProblemReporter` interface that was supplied to
* `createScanner()`.
*/
onDone(): void;
}
/**
* A factory for log scanners. When a log is to be scanned, all registered
* `EvaluationLogScannerProviders` will be asked to create a new instance of `EvaluationLogScanner`
* to do the scanning.
*/
export interface EvaluationLogScannerProvider {
/**
* Create a new instance of `EvaluationLogScanner` to scan a single summary log.
* @param problemReporter Callback interface for reporting any problems discovered.
*/
createScanner(problemReporter: EvaluationLogProblemReporter): EvaluationLogScanner;
}
/**
* Same as VSCode's `Disposable`, but avoids a dependency on VS Code.
*/
export interface Disposable {
dispose(): void;
}
export class EvaluationLogScannerSet {
private readonly scannerProviders = new Map<number, EvaluationLogScannerProvider>();
private nextScannerProviderId = 0;
/**
* Register a provider that can create instances of `EvaluationLogScanner` to scan evaluation logs
* for problems.
* @param provider The provider.
* @returns A `Disposable` that, when disposed, will unregister the provider.
*/
public registerLogScannerProvider(provider: EvaluationLogScannerProvider): Disposable {
const id = this.nextScannerProviderId;
this.nextScannerProviderId++;
this.scannerProviders.set(id, provider);
return {
dispose: () => {
this.scannerProviders.delete(id);
}
};
}
/**
* Scan the evaluator summary log for problems, using the scanners for all registered providers.
* @param jsonSummaryLocation The file path of the JSON summary log.
* @param problemReporter Callback interface for reporting any problems discovered.
*/
public async scanLog(jsonSummaryLocation: string, problemReporter: EvaluationLogProblemReporter): Promise<void> {
const scanners = [...this.scannerProviders.values()].map(p => p.createScanner(problemReporter));
await readJsonlFile(jsonSummaryLocation, async obj => {
scanners.forEach(scanner => {
scanner.onEvent(obj);
});
});
scanners.forEach(scanner => scanner.onDone());
}
}
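
To illustrate how the pieces above fit together, the sketch below registers a hypothetical scanner that flags COMPUTE_SIMPLE predicates with an empty result; the scanner and its message are invented for illustration and are not part of this change.

import { SummaryEvent } from './log-summary';
import {
  EvaluationLogProblemReporter,
  EvaluationLogScanner,
  EvaluationLogScannerSet
} from './log-scanner';

// Sketch: a custom scanner registered through EvaluationLogScannerSet.
class EmptyResultScanner implements EvaluationLogScanner {
  constructor(private readonly problemReporter: EvaluationLogProblemReporter) {}

  public onEvent(event: SummaryEvent): void {
    if (event.evaluationStrategy === 'COMPUTE_SIMPLE' && event.resultSize === 0) {
      this.problemReporter.reportProblem(event.predicateName, event.raHash, 0,
        `Predicate '${event.predicateName}' produced no tuples.`);
    }
  }

  public onDone(): void {
    // Nothing to finalize for this scanner.
  }
}

const scannerSet = new EvaluationLogScannerSet();
const registration = scannerSet.registerLogScannerProvider({
  createScanner: problemReporter => new EmptyResultScanner(problemReporter)
});
// Later, call registration.dispose() to unregister the provider.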

View File

@@ -0,0 +1,93 @@
export interface PipelineRun {
raReference: string;
counts: number[];
duplicationPercentages: number[];
}
export interface Ra {
[key: string]: string[];
}
export type EvaluationStrategy =
'COMPUTE_SIMPLE' |
'COMPUTE_RECURSIVE' |
'IN_LAYER' |
'COMPUTED_EXTENSIONAL' |
'EXTENSIONAL' |
'SENTINEL_EMPTY' |
'CACHACA' |
'CACHE_HIT';
interface SummaryEventBase {
evaluationStrategy: EvaluationStrategy;
predicateName: string;
raHash: string;
appearsAs: { [key: string]: { [key: string]: number[] } };
completionType?: string;
}
interface ResultEventBase extends SummaryEventBase {
resultSize: number;
}
export interface ComputeSimple extends ResultEventBase {
evaluationStrategy: 'COMPUTE_SIMPLE';
ra: Ra;
pipelineRuns?: [PipelineRun];
queryCausingWork?: string;
dependencies: { [key: string]: string };
}
export interface ComputeRecursive extends ResultEventBase {
evaluationStrategy: 'COMPUTE_RECURSIVE';
deltaSizes: number[];
ra: Ra;
pipelineRuns: PipelineRun[];
queryCausingWork?: string;
dependencies: { [key: string]: string };
predicateIterationMillis: number[];
}
export interface InLayer extends ResultEventBase {
evaluationStrategy: 'IN_LAYER';
deltaSizes: number[];
ra: Ra;
pipelineRuns: PipelineRun[];
queryCausingWork?: string;
mainHash: string;
predicateIterationMillis: number[];
}
export interface ComputedExtensional extends ResultEventBase {
evaluationStrategy: 'COMPUTED_EXTENSIONAL';
queryCausingWork?: string;
}
export interface NonComputedExtensional extends ResultEventBase {
evaluationStrategy: 'EXTENSIONAL';
queryCausingWork?: string;
}
export interface SentinelEmpty extends SummaryEventBase {
evaluationStrategy: 'SENTINEL_EMPTY';
sentinelRaHash: string;
}
export interface Cachaca extends ResultEventBase {
evaluationStrategy: 'CACHACA';
}
export interface CacheHit extends ResultEventBase {
evaluationStrategy: 'CACHE_HIT';
}
export type Extensional = ComputedExtensional | NonComputedExtensional;
export type SummaryEvent =
| ComputeSimple
| ComputeRecursive
| InLayer
| Extensional
| SentinelEmpty
| Cachaca
| CacheHit;

View File

@@ -35,11 +35,11 @@ export class SummaryLanguageSupport extends DisposableObject {
* The last `TextDocument` (with language `ql-summary`) for which we tried to find a sourcemap, or
* `undefined` if we have not seen such a document yet.
*/
private lastDocument : TextDocument | undefined = undefined;
private lastDocument: TextDocument | undefined = undefined;
/**
* The sourcemap for `lastDocument`, or `undefined` if there was no such sourcemap or document.
*/
private sourceMap : SourceMapConsumer | undefined = undefined;
private sourceMap: SourceMapConsumer | undefined = undefined;
constructor() {
super();

View File

@@ -0,0 +1,113 @@
import * as fs from 'fs-extra';
/**
* Location information for a single pipeline invocation in the RA.
*/
export interface PipelineInfo {
startLine: number;
raStartLine: number;
raEndLine: number;
}
/**
* Location information for a single predicate in the RA.
*/
export interface PredicateSymbol {
/**
* `PipelineInfo` for each iteration. A non-recursive predicate will have a single iteration `0`.
*/
iterations: Record<number, PipelineInfo>;
}
/**
* Location information for the RA from an evaluation log. Line numbers point into the
* human-readable log summary.
*/
export interface SummarySymbols {
predicates: Record<string, PredicateSymbol>;
}
// Tuple counts for Expr::Expr::getParent#dispred#f0820431#ff@76d6745o:
const NON_RECURSIVE_TUPLE_COUNT_REGEXP = /^Evaluated relational algebra for predicate (?<predicateName>\S+) with tuple counts:$/;
// Tuple counts for Expr::Expr::getEnclosingStmt#f0820431#bf@923ddwj9 on iteration 0 running pipeline base:
const RECURSIVE_TUPLE_COUNT_REGEXP = /^Evaluated relational algebra for predicate (?<predicateName>\S+) on iteration (?<iteration>\d+) running pipeline (?<pipeline>\S+) with tuple counts:$/;
const RETURN_REGEXP = /^\s*return /;
/**
* Parse a human-readable evaluation log summary to find the location of the RA for each pipeline
* run.
*
* TODO: Once we're more certain about the symbol format, we should have the CLI generate this as it
* generates the human-readable summary to avoid having to rely on regular expression matching of the
* human-readable text.
*
* @param summaryPath The path to the summary file.
* @param symbolsPath The path to the symbols file to generate.
*/
export async function generateSummarySymbolsFile(summaryPath: string, symbolsPath: string): Promise<void> {
const symbols = await generateSummarySymbols(summaryPath);
await fs.writeFile(symbolsPath, JSON.stringify(symbols));
}
/**
* Parse a human-readable evaluation log summary to find the location of the RA for each pipeline
* run.
*
* @param fileLocation The path to the summary file.
* @returns Symbol information for the summary file.
*/
async function generateSummarySymbols(summaryPath: string): Promise<SummarySymbols> {
const summary = await fs.promises.readFile(summaryPath, { encoding: 'utf-8' });
const symbols: SummarySymbols = {
predicates: {}
};
const lines = summary.split(/\r?\n/);
let lineNumber = 0;
while (lineNumber < lines.length) {
const startLineNumber = lineNumber;
lineNumber++;
const startLine = lines[startLineNumber];
const nonRecursiveMatch = startLine.match(NON_RECURSIVE_TUPLE_COUNT_REGEXP);
let predicateName: string | undefined = undefined;
let iteration = 0;
if (nonRecursiveMatch) {
predicateName = nonRecursiveMatch.groups!.predicateName;
} else {
const recursiveMatch = startLine.match(RECURSIVE_TUPLE_COUNT_REGEXP);
if (recursiveMatch?.groups) {
predicateName = recursiveMatch.groups.predicateName;
iteration = parseInt(recursiveMatch.groups.iteration);
}
}
if (predicateName !== undefined) {
const raStartLine = lineNumber;
let raEndLine: number | undefined = undefined;
while ((lineNumber < lines.length) && (raEndLine === undefined)) {
const raLine = lines[lineNumber];
const returnMatch = raLine.match(RETURN_REGEXP);
if (returnMatch) {
raEndLine = lineNumber;
}
lineNumber++;
}
if (raEndLine !== undefined) {
let symbol = symbols.predicates[predicateName];
if (symbol === undefined) {
symbol = {
iterations: {}
};
symbols.predicates[predicateName] = symbol;
}
symbol.iterations[iteration] = {
startLine: lineNumber,
raStartLine: raStartLine,
raEndLine: raEndLine
};
}
}
}
return symbols;
}
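
For orientation, the symbols file written by generateSummarySymbolsFile follows the SummarySymbols shape defined above. A hand-written example is sketched below; the predicate key and line numbers are invented.

import { SummarySymbols } from './summary-parser';

// Hypothetical symbols object illustrating the on-disk shape.
const exampleSymbols: SummarySymbols = {
  predicates: {
    'Expr::Expr::getParent#dispred#f0820431#ff@76d6745o': {
      iterations: {
        // A non-recursive predicate has a single iteration, numbered 0.
        0: { startLine: 212, raStartLine: 205, raEndLine: 211 }
      }
    }
  }
};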

View File

@@ -1,3 +1,5 @@
import { readJsonlFile } from '../log-insights/jsonl-reader';
// TODO(angelapwen): Only load in necessary information and
// location in bytes for this log to save memory.
export interface EvalLogData {
@@ -11,16 +13,11 @@ export interface EvalLogData {
/**
* A pure method that parses a string of evaluator log summaries into
* an array of EvalLogData objects.
*
*/
export function parseViewerData(logSummary: string): EvalLogData[] {
// Remove newline delimiters because summary is in .jsonl format.
const jsonSummaryObjects: string[] = logSummary.split(/\r?\n\r?\n/g);
export async function parseViewerData(jsonSummaryPath: string): Promise<EvalLogData[]> {
const viewerData: EvalLogData[] = [];
for (const obj of jsonSummaryObjects) {
const jsonObj = JSON.parse(obj);
await readJsonlFile(jsonSummaryPath, async jsonObj => {
// Only convert log items that have an RA and millis field
if (jsonObj.ra !== undefined && jsonObj.millis !== undefined) {
const newLogData: EvalLogData = {
@@ -31,6 +28,7 @@ export function parseViewerData(logSummary: string): EvalLogData[] {
};
viewerData.push(newLogData);
}
}
});
return viewerData;
}

View File

@@ -654,7 +654,7 @@ export interface ClearCacheParams {
/**
* Parameters to start a new structured log
*/
export interface StartLogParams {
export interface StartLogParams {
/**
* The dataset for which we want to start a new structured log
*/
@@ -668,7 +668,7 @@ export interface ClearCacheParams {
/**
* Parameters to terminate a structured log
*/
export interface EndLogParams {
export interface EndLogParams {
/**
* The dataset for which we want to terminate the log
*/
@@ -1074,12 +1074,12 @@ export const compileUpgradeSequence = new rpc.RequestType<WithProgressId<Compile
/**
* Start a new structured log in the evaluator, terminating the previous one if it exists
*/
export const startLog = new rpc.RequestType<WithProgressId<StartLogParams>, StartLogResult, void, void>('evaluation/startLog');
export const startLog = new rpc.RequestType<WithProgressId<StartLogParams>, StartLogResult, void, void>('evaluation/startLog');
/**
* Terminate a structured log in the evaluator. Is a no-op if we aren't logging to the given location
*/
export const endLog = new rpc.RequestType<WithProgressId<EndLogParams>, EndLogResult, void, void>('evaluation/endLog');
export const endLog = new rpc.RequestType<WithProgressId<EndLogParams>, EndLogResult, void, void>('evaluation/endLog');
/**
* Clear the cache of a dataset

View File

@@ -9,6 +9,7 @@ import {
ProviderResult,
Range,
ThemeIcon,
TreeDataProvider,
TreeItem,
TreeView,
Uri,
@@ -47,6 +48,7 @@ import { WebviewReveal } from './interface-utils';
import { EvalLogViewer } from './eval-log-viewer';
import EvalLogTreeBuilder from './eval-log-tree-builder';
import { EvalLogData, parseViewerData } from './pure/log-summary-parser';
import { QueryWithResults } from './run-queries';
/**
* query-history.ts
@@ -114,7 +116,7 @@ const WORKSPACE_QUERY_HISTORY_FILE = 'workspace-query-history.json';
/**
* Tree data provider for the query history view.
*/
export class HistoryTreeDataProvider extends DisposableObject {
export class HistoryTreeDataProvider extends DisposableObject implements TreeDataProvider<QueryHistoryInfo> {
private _sortOrder = SortOrder.DateAsc;
private _onDidChangeTreeData = super.push(new EventEmitter<QueryHistoryInfo | undefined>());
@@ -122,6 +124,10 @@ export class HistoryTreeDataProvider extends DisposableObject {
readonly onDidChangeTreeData: Event<QueryHistoryInfo | undefined> = this
._onDidChangeTreeData.event;
private _onDidChangeCurrentQueryItem = super.push(new EventEmitter<QueryHistoryInfo | undefined>());
public readonly onDidChangeCurrentQueryItem = this._onDidChangeCurrentQueryItem.event;
private history: QueryHistoryInfo[] = [];
private failedIconPath: string;
@@ -260,7 +266,10 @@ export class HistoryTreeDataProvider extends DisposableObject {
}
setCurrentItem(item?: QueryHistoryInfo) {
this.current = item;
if (item !== this.current) {
this.current = item;
this._onDidChangeCurrentQueryItem.fire(item);
}
}
remove(item: QueryHistoryInfo) {
@@ -286,7 +295,7 @@ export class HistoryTreeDataProvider extends DisposableObject {
set allHistory(history: QueryHistoryInfo[]) {
this.history = history;
this.current = history[0];
this.setCurrentItem(history[0]);
this.refresh();
}
@@ -313,6 +322,12 @@ export class QueryHistoryManager extends DisposableObject {
queryHistoryScrubber: Disposable | undefined;
private queryMetadataStorageLocation;
private readonly _onDidChangeCurrentQueryItem = super.push(new EventEmitter<QueryHistoryInfo | undefined>());
readonly onDidChangeCurrentQueryItem = this._onDidChangeCurrentQueryItem.event;
private readonly _onDidCompleteQuery = super.push(new EventEmitter<LocalQueryInfo>());
readonly onDidCompleteQuery = this._onDidCompleteQuery.event;
constructor(
private readonly qs: QueryServerClient,
private readonly dbm: DatabaseManager,
@@ -345,6 +360,11 @@ export class QueryHistoryManager extends DisposableObject {
canSelectMany: true,
}));
// Forward any change of current history item from the tree data.
this.push(this.treeDataProvider.onDidChangeCurrentQueryItem((item) => {
this._onDidChangeCurrentQueryItem.fire(item);
}));
// Lazily update the tree view selection due to limitations of TreeView API (see
// `updateTreeViewSelectionIfVisible` doc for details)
this.push(
@@ -537,6 +557,11 @@ export class QueryHistoryManager extends DisposableObject {
this.registerToRemoteQueriesEvents();
}
public completeQuery(info: LocalQueryInfo, results: QueryWithResults): void {
info.completeThisQuery(results);
this._onDidCompleteQuery.fire(info);
}
private getCredentials() {
return Credentials.initialize(this.ctx);
}
@@ -926,7 +951,7 @@ export class QueryHistoryManager extends DisposableObject {
}
// Summary log file doesn't exist.
if (finalSingleItem.evalLogLocation && fs.pathExists(finalSingleItem.evalLogLocation)) {
if (finalSingleItem.evalLogLocation && await fs.pathExists(finalSingleItem.evalLogLocation)) {
// If raw log does exist, then the summary log is still being generated.
this.warnInProgressEvalLogSummary();
} else {
@@ -950,15 +975,14 @@ export class QueryHistoryManager extends DisposableObject {
return;
}
// TODO(angelapwen): Stream the file in.
void fs.readFile(finalSingleItem.jsonEvalLogSummaryLocation, async (err, buffer) => {
if (err) {
throw new Error(`Could not read evaluator log summary JSON file to generate viewer data at ${finalSingleItem.jsonEvalLogSummaryLocation}.`);
}
const evalLogData: EvalLogData[] = parseViewerData(buffer.toString());
// TODO(angelapwen): Stream the file in.
try {
const evalLogData: EvalLogData[] = await parseViewerData(finalSingleItem.jsonEvalLogSummaryLocation);
const evalLogTreeBuilder = new EvalLogTreeBuilder(finalSingleItem.getQueryName(), evalLogData);
this.evalLogViewer.updateRoots(await evalLogTreeBuilder.getRoots());
});
} catch (e) {
throw new Error(`Could not read evaluator log summary JSON file to generate viewer data at ${finalSingleItem.jsonEvalLogSummaryLocation}.`);
}
}
async handleCancel(

View File

@@ -218,6 +218,7 @@ export class LocalQueryInfo {
public evalLogLocation: string | undefined;
public evalLogSummaryLocation: string | undefined;
public jsonEvalLogSummaryLocation: string | undefined;
public evalLogSummarySymbolsLocation: string | undefined;
/**
* Note that in the {@link slurpQueryHistory} method, we create a FullQueryInfo instance
@@ -282,7 +283,7 @@ export class LocalQueryInfo {
return !!this.completedQuery;
}
completeThisQuery(info: QueryWithResults) {
completeThisQuery(info: QueryWithResults): void {
this.completedQuery = new CompletedQueryInfo(info);
// dispose of the cancellation token source and also ensure the source is not serialized as JSON

View File

@@ -271,6 +271,10 @@ export function findJsonQueryEvalLogSummaryFile(resultPath: string): string {
return path.join(resultPath, 'evaluator-log.summary.jsonl');
}
export function findQueryEvalLogSummarySymbolsFile(resultPath: string): string {
return path.join(resultPath, 'evaluator-log.summary.symbols.json');
}
export function findQueryEvalLogEndSummaryFile(resultPath: string): string {
return path.join(resultPath, 'evaluator-log-end.summary');
}
}

View File

@@ -125,6 +125,7 @@ export class RemoteQueriesInterfaceManager {
() => {
this.panel = undefined;
this.currentQueryId = undefined;
this.panelLoaded = false;
},
null,
ctx.subscriptions

View File

@@ -138,7 +138,7 @@ function generateMarkdownForCodeSnippet(
const codeLines = codeSnippet.text
.split('\n')
.map((line, index) =>
highlightCodeLines(line, index + snippetStartLine, highlightedRegion)
highlightAndEscapeCodeLines(line, index + snippetStartLine, highlightedRegion)
);
// Make sure there are no extra newlines before or after the <code> block:
@@ -153,20 +153,25 @@ function generateMarkdownForCodeSnippet(
return lines;
}
function highlightCodeLines(
function highlightAndEscapeCodeLines(
line: string,
lineNumber: number,
highlightedRegion?: HighlightedRegion
): string {
if (!highlightedRegion || !shouldHighlightLine(lineNumber, highlightedRegion)) {
return line;
return escapeHtmlCharacters(line);
}
const partiallyHighlightedLine = parseHighlightedLine(
line,
lineNumber,
highlightedRegion
);
return `${partiallyHighlightedLine.plainSection1}<strong>${partiallyHighlightedLine.highlightedSection}</strong>${partiallyHighlightedLine.plainSection2}`;
const plainSection1 = escapeHtmlCharacters(partiallyHighlightedLine.plainSection1);
const highlightedSection = escapeHtmlCharacters(partiallyHighlightedLine.highlightedSection);
const plainSection2 = escapeHtmlCharacters(partiallyHighlightedLine.plainSection2);
return `${plainSection1}<strong>${highlightedSection}</strong>${plainSection2}`;
}
function generateMarkdownForAlertMessage(
@@ -330,3 +335,10 @@ function createFileName(nwo: string) {
const [owner, repo] = nwo.split('/');
return `${owner}-${repo}`;
}
/**
* Escape characters that could be interpreted as HTML instead of raw code.
*/
function escapeHtmlCharacters(text: string): string {
return text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}
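
A quick illustration of the escaping behaviour added above (escapeHtmlCharacters is private to this module, and the input line is made up):

escapeHtmlCharacters('if (a < b && b > c) { emit("<ok>"); }');
// => 'if (a &lt; b &amp;&amp; b &gt; c) { emit("&lt;ok&gt;"); }'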

View File

@@ -37,6 +37,7 @@ import { ensureMetadataIsComplete } from './query-results';
import { SELECT_QUERY_NAME } from './contextual/locationFinder';
import { DecodedBqrsChunk } from './pure/bqrs-cli-types';
import { getErrorMessage } from './pure/helpers-pure';
import { generateSummarySymbolsFile } from './log-insights/summary-parser';
/**
* run-queries.ts
@@ -107,6 +108,10 @@ export class QueryEvaluationInfo {
return qsClient.findJsonQueryEvalLogSummaryFile(this.querySaveDir);
}
get evalLogSummarySymbolsPath() {
return qsClient.findQueryEvalLogSummarySymbolsFile(this.querySaveDir);
}
get evalLogEndSummaryPath() {
return qsClient.findQueryEvalLogEndSummaryFile(this.querySaveDir);
}
@@ -206,6 +211,8 @@ export class QueryEvaluationInfo {
if (config.isCanary()) { // Generate JSON summary for viewer.
await qs.cliServer.generateJsonLogSummary(this.evalLogPath, this.jsonEvalLogSummaryPath);
queryInfo.jsonEvalLogSummaryLocation = this.jsonEvalLogSummaryPath;
await generateSummarySymbolsFile(this.evalLogSummaryPath, this.evalLogSummarySymbolsPath);
queryInfo.evalLogSummarySymbolsLocation = this.evalLogSummarySymbolsPath;
}
} else {
void showAndLogWarningMessage(`Failed to write structured evaluator log to ${this.evalLogPath}.`);
@@ -333,8 +340,8 @@ export class QueryEvaluationInfo {
}
/**
* Calls the appropriate CLI command to generate a human-readable log summary
* and logs to the Query Server console and query log file.
* Calls the appropriate CLI command to generate a human-readable log summary
* and logs to the Query Server console and query log file.
*/
displayHumanReadableLogSummary(queryInfo: LocalQueryInfo, qs: qsClient.QueryServerClient): void {
queryInfo.evalLogLocation = this.evalLogPath;

View File

@@ -44,7 +44,7 @@ const _10MB = _1MB * 10;
// CLI version to test. Hard code the latest as default. And be sure
// to update the env if it is not otherwise set.
const CLI_VERSION = process.env.CLI_VERSION || 'v2.10.2';
const CLI_VERSION = process.env.CLI_VERSION || 'v2.10.3';
process.env.CLI_VERSION = CLI_VERSION;
// Base dir where CLIs will be downloaded into

View File

@@ -2,108 +2,108 @@ import { expect } from 'chai';
import EvalLogTreeBuilder from '../../eval-log-tree-builder';
import { EvalLogData } from '../../pure/log-summary-parser';
describe('EvalLogTreeBuilder', () => {
  it('should build the log tree roots', async () => {
    const evalLogDataItems: EvalLogData[] = [
      {
        predicateName: 'quick_eval#query#ffffffff',
        millis: 1,
        resultSize: 596,
        ra: {
          pipeline: [
            '{1} r1',
            '{2} r2',
            'return r2'
          ]
        },
      }
    ];

    const expectedRoots = [
      {
        label: 'test-query.ql',
        children: undefined
      }
    ];

    const expectedPredicate = [
      {
        label: 'quick_eval#query#ffffffff (596 tuples, 1 ms)',
        children: undefined,
        parent: undefined
      },
    ];

    const expectedRA = [
      {
        label: 'Pipeline: pipeline',
        children: undefined,
        parent: undefined
      }
    ];

    const expectedPipelineSteps = [{
      label: '{1} r1',
      children: [],
      parent: undefined
    },
    {
      label: '{2} r2',
      children: [],
      parent: undefined
    },
    {
      label: 'return r2',
      children: [],
      parent: undefined
    }];

    const builder = new EvalLogTreeBuilder('test-query.ql', evalLogDataItems);
    const roots = await builder.getRoots();

    // Force children, parent to be undefined for ease of testing.
    expect(roots.map(
      r => ({ ...r, children: undefined })
    )).to.deep.eq(expectedRoots);

    expect((roots[0].children.map(
      pred => ({ ...pred, children: undefined, parent: undefined })
    ))).to.deep.eq(expectedPredicate);

    expect((roots[0].children[0].children.map(
      ra => ({ ...ra, children: undefined, parent: undefined })
    ))).to.deep.eq(expectedRA);

    // Pipeline steps' children should be empty so do not force undefined children here.
    expect(roots[0].children[0].children[0].children.map(
      step => ({ ...step, parent: undefined })
    )).to.deep.eq(expectedPipelineSteps);
  });

  it('should build the tree with descriptive message when no data exists', async () => {
    // Force children, parent to be undefined for ease of testing.
    const expectedRoots = [
      {
        label: 'test-query-cached.ql',
        children: undefined
      }
    ];
    const expectedNoPredicates = [
      {
        label: 'No predicates evaluated in this query run.',
        children: [], // Should be empty so do not force empty here.
        parent: undefined
      }
    ];

    const builder = new EvalLogTreeBuilder('test-query-cached.ql', []);
    const roots = await builder.getRoots();

    expect(roots.map(
      r => ({ ...r, children: undefined })
    )).to.deep.eq(expectedRoots);

    expect(roots[0].children.map(
      noPreds => ({ ...noPreds, parent: undefined })
    )).to.deep.eq(expectedNoPredicates);
  });
});
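For orientation, the assertions above encode a four-level tree: query file → predicate → RA pipeline → pipeline steps. A sketch of that shape, using only the fields the test exercises (the extension's real tree items carry more state than this):

```typescript
// Illustrative only; field names beyond label/children/parent are not claimed
// to match the extension's actual tree item type.
interface EvalLogTreeItemSketch {
  label: string;
  children: EvalLogTreeItemSketch[];
  parent?: EvalLogTreeItemSketch;
}

// test-query.ql                                    (root: one per query file)
// └─ quick_eval#query#ffffffff (596 tuples, 1 ms)  (one per evaluated predicate)
//    └─ Pipeline: pipeline                         (one per RA pipeline)
//       ├─ {1} r1                                  (one per pipeline step)
//       ├─ {2} r2
//       └─ return r2
```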


@@ -243,15 +243,16 @@ describe('telemetry reporting', function() {
});
});
it('should request permission if popup has never been seen before', async () => {
it('should request permission if popup has never been seen before', async function() {
this.timeout(3000);
sandbox.stub(window, 'showInformationMessage').resolvesArg(3 /* "yes" item */);
await ctx.globalState.update('telemetry-request-viewed', false);
await enableTelemetry('codeQL.telemetry', false);
await telemetryListener.initialize();
// Wait 50 ms for user's selection to propagate in settings.
await wait(50);
// Wait for user's selection to propagate in settings.
await wait(500);
// Dialog opened, user clicks "yes" and telemetry enabled
expect(window.showInformationMessage).to.have.been.calledOnce;
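The longer delay gives the configuration change time to propagate before the assertion. Assuming `wait` is the usual promise-wrapped `setTimeout` helper (its definition is not part of this diff), it amounts to:

```typescript
// Assumed implementation of the wait(ms) helper used above: resolve after ms milliseconds.
function wait(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```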


@@ -564,6 +564,11 @@ repository:
keyword: 'query'
name: storage.modifier.query.ql
signature:
match:
keyword: 'signature'
name: storage.modifier.signature.ql
transient:
match:
keyword: 'transient'
@@ -583,8 +588,14 @@ repository:
- include: '#pragma'
- include: '#private'
- include: '#query'
- include: '#signature'
- include: '#transient'
implements:
match:
keyword: 'implements'
name: keyword.other.implements.ql
# A QL comment, regardless of form.
comment:
patterns:
@@ -668,6 +679,19 @@ repository:
- match: '(?#simple-id)'
name: entity.name.type.namespace.ql
# The `implements` clause of a module definition.
implements-clause:
beginPattern: '#implements'
# Ends at the first `{`.
# REVIEW: This could be any token not allowed in a type.
end: '(?= \{ )'
name: meta.block.implements-clause.ql
patterns:
- include: '#non-context-sensitive'
- match: '(?#simple-id)|(?#at-lower-id)'
name: entity.name.type.ql
# A `module` declaration, whether a module definition or an alias declaration.
module-declaration:
# Starts with the `module` keyword.
@@ -677,6 +701,7 @@ repository:
name: meta.block.module-declaration.ql
patterns:
- include: '#module-body'
- include: '#implements-clause'
- include: '#non-context-sensitive'
# Any simple-id within a module head is part of a module name.
- match: '(?#simple-id)'
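To see what the new rules target: in a QL module declaration such as `module Impl implements MySignature { ... }`, the `implements` keyword gets its own scope and everything between it and the first `{` is scoped as a type name. A small TypeScript check of that "ends at the first `{`" boundary, with an illustrative regex rather than the grammar's actual pattern:

```typescript
// Illustrative only: extract the text the implements-clause rule would cover,
// i.e. everything after `implements` up to (but not including) the first `{`.
const qlSource = 'module Impl implements MySignature { predicate p() { any() } }';
const clauseMatch = /implements\s+([^{]*)(?=\{)/.exec(qlSource);
console.log(clauseMatch?.[1].trim()); // -> "MySignature"
```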

File diff suppressed because it is too large.


@@ -0,0 +1,45 @@
import { expect } from 'chai';
import 'mocha';
import { EvaluationLogProblemReporter, EvaluationLogScannerSet } from '../../src/log-insights/log-scanner';
import { JoinOrderScannerProvider } from '../../src/log-insights/join-order';
import * as path from 'path';
interface TestProblem {
predicateName: string;
raHash: string;
iteration: number;
message: string;
}
class TestProblemReporter implements EvaluationLogProblemReporter {
public readonly problems: TestProblem[] = [];
public reportProblem(predicateName: string, raHash: string, iteration: number, message: string): void {
this.problems.push({
predicateName,
raHash,
iteration,
message
});
}
public log(message: string): void {
console.log(message);
}
}
describe('log scanners', function() {
it('should detect bad join orders', async function() {
const scanners = new EvaluationLogScannerSet();
scanners.registerLogScannerProvider(new JoinOrderScannerProvider());
const summaryPath = path.join(__dirname, 'evaluator-log-summaries/bad-join-order.jsonl');
const problemReporter = new TestProblemReporter();
await scanners.scanLog(summaryPath, problemReporter);
expect(problemReporter.problems.length).to.equal(1);
expect(problemReporter.problems[0].predicateName).to.equal('#select#ff');
expect(problemReporter.problems[0].raHash).to.equal('1bb43c97jpmuh8r2v0f9hktim63');
expect(problemReporter.problems[0].iteration).to.equal(0);
expect(problemReporter.problems[0].message).to.equal('Relation \'#select#ff\' has an inefficient join order. Its join order metric is 4961.83, which is larger than the threshold of 50.00.');
});
});
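The `EvaluationLogProblemReporter` interface exercised here is small, so alternative reporters are easy to write. As a hypothetical example (not part of the extension), a reporter that records problems and also echoes them through `log()` with a uniform prefix:

```typescript
// Reuses the EvaluationLogProblemReporter import and TestProblem interface from the test above.
class LoggingProblemReporter implements EvaluationLogProblemReporter {
  public readonly problems: TestProblem[] = [];

  public reportProblem(predicateName: string, raHash: string, iteration: number, message: string): void {
    this.problems.push({ predicateName, raHash, iteration, message });
    this.log(`[evaluator-log] ${predicateName}@${raHash} (iteration ${iteration}): ${message}`);
  }

  public log(message: string): void {
    console.log(message);
  }
}
```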


@@ -1,5 +1,4 @@
import { expect } from 'chai';
import * as fs from 'fs-extra';
import * as path from 'path';
import 'mocha';
@@ -8,8 +7,8 @@ import { parseViewerData } from '../../src/pure/log-summary-parser';
describe('Evaluator log summary tests', async function() {
describe('for a valid summary text', async function() {
it('should return only valid EvalLogData objects', async function() {
const validSummaryText = await fs.readFile(path.join(__dirname, 'evaluator-log-summaries/valid-summary.jsonl'), 'utf8');
const logDataItems = parseViewerData(validSummaryText.toString());
const validSummaryPath = path.join(__dirname, 'evaluator-log-summaries/valid-summary.jsonl');
const logDataItems = await parseViewerData(validSummaryPath);
expect(logDataItems).to.not.be.undefined;
expect(logDataItems.length).to.eq(3);
for (const item of logDataItems) {
@@ -27,14 +26,14 @@ describe('Evaluator log summary tests', async function() {
});
it('should not parse a summary header object', async function() {
const invalidHeaderText = await fs.readFile(path.join(__dirname, 'evaluator-log-summaries/invalid-header.jsonl'), 'utf8');
const logDataItems = parseViewerData(invalidHeaderText);
const invalidHeaderPath = path.join(__dirname, 'evaluator-log-summaries/invalid-header.jsonl');
const logDataItems = await parseViewerData(invalidHeaderPath);
expect(logDataItems.length).to.eq(0);
});
it('should not parse a log event missing RA or millis fields', async function() {
const invalidSummaryText = await fs.readFile(path.join(__dirname, 'evaluator-log-summaries/invalid-summary.jsonl'), 'utf8');
const logDataItems = parseViewerData(invalidSummaryText);
const invalidSummaryPath = path.join(__dirname, 'evaluator-log-summaries/invalid-summary.jsonl');
const logDataItems = await parseViewerData(invalidSummaryPath);
expect(logDataItems.length).to.eq(0);
});
});
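The shape of this change is that `parseViewerData` now takes the path of the `.jsonl` summary and reads it itself (asynchronously), instead of callers slurping the file first. A rough sketch of a parser in that style, under the assumptions that events are JSON objects separated by blank lines and that anything missing `ra` or `millis` (including the header object) is skipped, as the tests above expect; the splitting strategy is a guess, not the extension's actual code:

```typescript
import { readFile } from 'fs-extra';
// Assumes EvalLogData is imported from '../../src/pure/log-summary-parser' as in the tests above.

async function parseViewerDataSketch(summaryPath: string): Promise<EvalLogData[]> {
  const text = await readFile(summaryPath, 'utf8');
  const items: EvalLogData[] = [];
  // Assumption: events in the .jsonl summary are JSON objects separated by blank lines.
  for (const chunk of text.split(/\r?\n\r?\n/)) {
    if (!chunk.trim()) {
      continue;
    }
    const event = JSON.parse(chunk);
    // Skip the header object and any event missing RA or timing information.
    if (event.ra === undefined || event.millis === undefined) {
      continue;
    }
    items.push({
      predicateName: event.predicateName,
      millis: event.millis,
      resultSize: event.resultSize,
      ra: event.ra
    });
  }
  return items;
}
```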


@@ -148,6 +148,34 @@
"endColumn": 57
},
"codeFlows": []
},
{
"message": {
"tokens": [
{
"t": "text",
"text": "This component is implicitly exported."
}
]
},
"shortDescription": "This component is implicitly exported.",
"fileLink": {
"fileLinkPrefix": "https://github.com/AlexRogalskiy/android-nrf-toolbox/blob/034cf3aa7d2a3a4145177de32546ca518a462a66",
"filePath": "app/src/main/AndroidManifest.xml"
},
"severity": "Warning",
"codeSnippet": {
"startLine": 237,
"endLine": 251,
"text": "\t\t</service>\n\n\t\t<activity\n\t\t\tandroid:name=\"no.nordicsemi.android.nrftoolbox.dfu.DfuInitiatorActivity\"\n\t\t\tandroid:label=\"@string/dfu_service_title\"\n\t\t\tandroid:noHistory=\"true\"\n\t\t\tandroid:theme=\"@style/AppTheme.Translucent\" >\n\t\t\t<intent-filter>\n\t\t\t\t<action android:name=\"no.nordicsemi.android.action.DFU_UPLOAD\" />\n\n\t\t\t\t<category android:name=\"android.intent.category.DEFAULT\" />\n\t\t\t</intent-filter>\n\t\t</activity>\n\n\t\t<service\n"
},
"highlightedRegion": {
"startLine": 239,
"startColumn": 3,
"endLine": 249,
"endColumn": 15
},
"codeFlows": []
}
]
}


@@ -41,4 +41,4 @@ select t,
| Repository | Results |
| --- | --- |
| github/codeql | [1 result(s)](#file-github-codeql-md) |
| meteor/meteor | [4 result(s)](#file-meteor-meteor-md) |
| meteor/meteor | [5 result(s)](#file-meteor-meteor-md) |


@@ -4,9 +4,9 @@
<pre><code class="javascript"> /g,hashElement);
*/
text = text.replace(/(\n\n[ ]{0,3}<!(--<strong>[^\r]*?</strong>--\s*)+>[ \t]*(?=\n{2,}))/g,hashElement);
text = text.replace(/(\n\n[ ]{0,3}&lt;!(--<strong>[^\r]*?</strong>--\s*)+&gt;[ \t]*(?=\n{2,}))/g,hashElement);
// PHP and ASP-style processor instructions (<?...?> and <%...%>)
// PHP and ASP-style processor instructions (&lt;?...?&gt; and &lt;%...%&gt;)
</code></pre>
*This part of the regular expression may cause exponential backtracking on strings containing many repetitions of '----'.*
@@ -17,7 +17,7 @@
<pre><code class="javascript"> // Build a regex to find HTML tags and comments. See Friedl's
// "Mastering Regular Expressions", 2nd Ed., pp. 200-201.
var regex = /(<[a-z\/!$]("[^"]*"|'[^']*'|[^'">])*>|<!(--<strong>.*?</strong>--\s*)+>)/gi;
var regex = /(&lt;[a-z\/!$]("[^"]*"|'[^']*'|[^'"&gt;])*&gt;|&lt;!(--<strong>.*?</strong>--\s*)+&gt;)/gi;
text = text.replace(regex, function(wholeMatch) {
</code></pre>
@@ -46,3 +46,26 @@ pp.strictDirective = function(start) {
*This part of the regular expression may cause exponential backtracking on strings starting with '"' and containing many repetitions of '\!'.*
----------------------------------------
[app/src/main/AndroidManifest.xml](https://github.com/AlexRogalskiy/android-nrf-toolbox/blob/034cf3aa7d2a3a4145177de32546ca518a462a66/app/src/main/AndroidManifest.xml#L239-L249)
<pre><code class="javascript"> &lt;/service&gt;
<strong>&lt;activity</strong>
<strong> android:name="no.nordicsemi.android.nrftoolbox.dfu.DfuInitiatorActivity"</strong>
<strong> android:label="@string/dfu_service_title"</strong>
<strong> android:noHistory="true"</strong>
<strong> android:theme="@style/AppTheme.Translucent" &gt;</strong>
<strong> &lt;intent-filter&gt;</strong>
<strong> &lt;action android:name="no.nordicsemi.android.action.DFU_UPLOAD" /&gt;</strong>
<strong></strong>
<strong> &lt;category android:name="android.intent.category.DEFAULT" /&gt;</strong>
<strong> &lt;/intent-filter&gt;</strong>
<strong> &lt;/activity&gt;</strong>
&lt;service
</code></pre>
*This component is implicitly exported.*
----------------------------------------

File diff suppressed because it is too large.