Merge branch 'main' into fix-CWE-119-tests

This commit is contained in:
Geoffrey White
2025-11-27 09:57:23 +00:00
committed by GitHub
2302 changed files with 170246 additions and 65192 deletions

.gitattributes vendored
View File

@@ -82,9 +82,6 @@
/csharp/paket.main.bzl linguist-generated=true
/csharp/paket.main_extension.bzl linguist-generated=true
# ripunzip tool
/misc/ripunzip/ripunzip-* filter=lfs diff=lfs merge=lfs -text
# swift prebuilt resources
/swift/third_party/resources/*.zip filter=lfs diff=lfs merge=lfs -text
/swift/third_party/resources/*.tar.zst filter=lfs diff=lfs merge=lfs -text

View File

@@ -1,4 +0,0 @@
When reviewing code:
* do not review changes in files with `.expected` extension (they are automatically ensured to be correct).
* in `.ql` and `.qll` files, do not try to review the code itself as you don't understand the programming language
well enough to make comments in these languages. You can still check for typos or comment improvements.

View File

@@ -0,0 +1,4 @@
---
applyTo: "**/*.expected"
---
Ignore these files. These are generated to be correct by running the tests, and should not be edited manually.

View File

@@ -0,0 +1,6 @@
---
applyTo: "**/*.ql,**/*.qll"
---
When reviewing these QL files, do not make any comment on the code itself. You don't understand the QL programming language well enough yet. You can only make comments about the following aspects:
* comments (lines starting with `//` or block comments enclosed in `/* ... */`): you can suggest improvements to the clarity of comments, or point out spelling mistakes
* typos in identifiers

View File

@@ -1,74 +0,0 @@
name: Build ripunzip
on:
workflow_dispatch:
inputs:
ripunzip-version:
description: "what reference to checktout from google/runzip"
required: false
default: v2.0.2
openssl-version:
description: "what reference to checkout from openssl/openssl for Linux"
required: false
default: openssl-3.5.0
jobs:
build:
strategy:
fail-fast: false
matrix:
os: [ubuntu-22.04, macos-13, windows-2022]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v5
with:
repository: google/ripunzip
ref: ${{ inputs.ripunzip-version }}
# we need to avoid ripunzip dynamically linking into libssl
# see https://github.com/sfackler/rust-openssl/issues/183
- if: runner.os == 'Linux'
name: checkout openssl
uses: actions/checkout@v5
with:
repository: openssl/openssl
path: openssl
ref: ${{ inputs.openssl-version }}
- if: runner.os == 'Linux'
name: build and install openssl with fPIC
shell: bash
working-directory: openssl
run: |
./config -fPIC --prefix=$HOME/.local --openssldir=$HOME/.local/ssl
make -j $(nproc)
make install_sw -j $(nproc)
- if: runner.os == 'Linux'
name: build (linux)
shell: bash
run: |
env OPENSSL_LIB_DIR=$HOME/.local/lib64 OPENSSL_INCLUDE_DIR=$HOME/.local/include OPENSSL_STATIC=yes cargo build --release
mv target/release/ripunzip ripunzip-linux
- if: runner.os == 'Windows'
name: build (windows)
shell: bash
run: |
cargo build --release
mv target/release/ripunzip ripunzip-windows
- name: build (macOS)
if: runner.os == 'macOS'
shell: bash
run: |
rustup target install x86_64-apple-darwin
rustup target install aarch64-apple-darwin
cargo build --target x86_64-apple-darwin --release
cargo build --target aarch64-apple-darwin --release
lipo -create -output ripunzip-macos \
-arch x86_64 target/x86_64-apple-darwin/release/ripunzip \
-arch arm64 target/aarch64-apple-darwin/release/ripunzip
- uses: actions/upload-artifact@v4
with:
name: ripunzip-${{ runner.os }}
path: ripunzip-*
- name: Check built binary
shell: bash
run: |
./ripunzip-* --version

View File

@@ -5,19 +5,29 @@
/actions/ @github/codeql-dynamic
/cpp/ @github/codeql-c-analysis
/csharp/ @github/codeql-csharp
/csharp/autobuilder/Semmle.Autobuild.Cpp @github/codeql-c-extractor
/csharp/autobuilder/Semmle.Autobuild.Cpp.Tests @github/codeql-c-extractor
/csharp/autobuilder/Semmle.Autobuild.Cpp @github/codeql-c-extractor @github/code-scanning-language-coverage
/csharp/autobuilder/Semmle.Autobuild.Cpp.Tests @github/codeql-c-extractor @github/code-scanning-language-coverage
/go/ @github/codeql-go
/go/codeql-tools/ @github/codeql-go @github/code-scanning-language-coverage
/go/downgrades/ @github/codeql-go @github/code-scanning-language-coverage
/go/extractor/ @github/codeql-go @github/code-scanning-language-coverage
/go/extractor-smoke-test/ @github/codeql-go @github/code-scanning-language-coverage
/go/ql/test/extractor-tests/ @github/codeql-go @github/code-scanning-language-coverage
/java/ @github/codeql-java
/javascript/ @github/codeql-javascript
/javascript/extractor/ @github/codeql-javascript @github/code-scanning-language-coverage
/python/ @github/codeql-python
/python/extractor/ @github/codeql-python @github/code-scanning-language-coverage
/ql/ @github/codeql-ql-for-ql-reviewers
/ruby/ @github/codeql-ruby
/ruby/extractor/ @github/codeql-ruby @github/code-scanning-language-coverage
/rust/ @github/codeql-rust
/rust/extractor/ @github/codeql-rust @github/code-scanning-language-coverage
/shared/ @github/codeql-shared-libraries-reviewers
/swift/ @github/codeql-swift
/swift/extractor/ @github/codeql-swift @github/code-scanning-language-coverage
/misc/codegen/ @github/codeql-swift
/java/kotlin-extractor/ @github/codeql-kotlin
/java/kotlin-extractor/ @github/codeql-kotlin @github/code-scanning-language-coverage
/java/ql/test-kotlin1/ @github/codeql-kotlin
/java/ql/test-kotlin2/ @github/codeql-kotlin

View File

@@ -10,4 +10,3 @@ members = [
"rust/ast-generator",
"rust/autobuild",
]
exclude = ["mad-generation-build"]

View File

@@ -19,8 +19,8 @@ bazel_dep(name = "rules_go", version = "0.56.1")
bazel_dep(name = "rules_pkg", version = "1.0.1")
bazel_dep(name = "rules_nodejs", version = "6.2.0-codeql.1")
bazel_dep(name = "rules_python", version = "0.40.0")
bazel_dep(name = "rules_shell", version = "0.3.0")
bazel_dep(name = "bazel_skylib", version = "1.7.1")
bazel_dep(name = "rules_shell", version = "0.5.0")
bazel_dep(name = "bazel_skylib", version = "1.8.1")
bazel_dep(name = "abseil-cpp", version = "20240116.1", repo_name = "absl")
bazel_dep(name = "nlohmann_json", version = "3.11.3", repo_name = "json")
bazel_dep(name = "fmt", version = "10.0.0")
@@ -28,7 +28,7 @@ bazel_dep(name = "rules_kotlin", version = "2.1.3-codeql.1")
bazel_dep(name = "gazelle", version = "0.40.0")
bazel_dep(name = "rules_dotnet", version = "0.19.2-codeql.1")
bazel_dep(name = "googletest", version = "1.14.0.bcr.1")
bazel_dep(name = "rules_rust", version = "0.63.0")
bazel_dep(name = "rules_rust", version = "0.66.0")
bazel_dep(name = "zstd", version = "1.5.5.bcr.1")
bazel_dep(name = "buildifier_prebuilt", version = "6.4.0", dev_dependency = True)
@@ -269,24 +269,16 @@ go_deps = use_extension("@gazelle//:extensions.bzl", "go_deps")
go_deps.from_file(go_mod = "//go/extractor:go.mod")
use_repo(go_deps, "org_golang_x_mod", "org_golang_x_tools")
lfs_archive = use_repo_rule("//misc/bazel:lfs.bzl", "lfs_archive")
ripunzip_archive = use_repo_rule("//misc/ripunzip:ripunzip.bzl", "ripunzip_archive")
lfs_archive(
name = "ripunzip-linux",
src = "//misc/ripunzip:ripunzip-Linux.zip",
build_file = "//misc/ripunzip:BUILD.ripunzip.bazel",
)
lfs_archive(
name = "ripunzip-windows",
src = "//misc/ripunzip:ripunzip-Windows.zip",
build_file = "//misc/ripunzip:BUILD.ripunzip.bazel",
)
lfs_archive(
name = "ripunzip-macos",
src = "//misc/ripunzip:ripunzip-macOS.zip",
build_file = "//misc/ripunzip:BUILD.ripunzip.bazel",
# go to https://github.com/google/ripunzip/releases to find the latest version and corresponding sha256s
ripunzip_archive(
name = "ripunzip",
sha256_linux = "ee0e8a957687a5dc3a66b2a4b25883bf762df4c9c07f0651af527a32a405054b",
sha256_macos_arm = "8a88eea54eac232d162a72a42065e0429b82dbf4f05e9642915dff9d7a81f846",
sha256_macos_intel = "4457a18bfcc5feabe09f5ea3d1157128e07b4873392cb404a870e611924abf64",
sha256_windows = "66d0c1375301bf5ab815348048f43b110631d3fa7200acd50d50a8ed8655ca62",
version = "2.0.3",
)
register_toolchains(

View File

@@ -1,3 +1,11 @@
## 0.4.21
No user-facing changes.
## 0.4.20
No user-facing changes.
## 0.4.19
No user-facing changes.

View File

@@ -0,0 +1,3 @@
## 0.4.20
No user-facing changes.

View File

@@ -0,0 +1,3 @@
## 0.4.21
No user-facing changes.

View File

@@ -1,2 +1,2 @@
---
lastReleaseVersion: 0.4.19
lastReleaseVersion: 0.4.21

View File

@@ -100,8 +100,6 @@ private module ArgumentInjectionConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) { none() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
result = sink.getLocation()
or

View File

@@ -333,8 +333,6 @@ private module ArtifactPoisoningConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) { none() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
result = sink.getLocation()
or

View File

@@ -80,8 +80,6 @@ private module CodeInjectionConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) { none() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
result = sink.getLocation()
or

View File

@@ -130,8 +130,6 @@ private module EnvPathInjectionConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) { none() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
result = sink.getLocation()
or

View File

@@ -184,8 +184,6 @@ private module EnvVarInjectionConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) { none() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
result = sink.getLocation()
or

View File

@@ -212,8 +212,6 @@ private module OutputClobberingConfig implements DataFlow::ConfigSig {
}
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
/** Tracks flow of unsafe user input that is used to construct and evaluate an environment variable. */

View File

@@ -18,8 +18,6 @@ private module RequestForgeryConfig implements DataFlow::ConfigSig {
predicate isSink(DataFlow::Node sink) { sink instanceof RequestForgerySink }
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
/** Tracks flow of unsafe user input that is used to construct and evaluate a system command. */

View File

@@ -17,8 +17,6 @@ private module SecretExfiltrationConfig implements DataFlow::ConfigSig {
predicate isSink(DataFlow::Node sink) { sink instanceof SecretExfiltrationSink }
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
/** Tracks flow of unsafe user input that is used in a context where it may lead to a secret exfiltration. */

View File

@@ -1,5 +1,5 @@
name: codeql/actions-all
version: 0.4.19
version: 0.4.22-dev
library: true
warnOnImplicitThis: true
dependencies:

View File

@@ -1,3 +1,11 @@
## 0.6.13
No user-facing changes.
## 0.6.12
No user-facing changes.
## 0.6.11
No user-facing changes.

View File

@@ -26,8 +26,6 @@ private module MyConfig implements DataFlow::ConfigSig {
}
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
module MyFlow = TaintTracking::Global<MyConfig>;

View File

@@ -36,8 +36,6 @@ private module MyConfig implements DataFlow::ConfigSig {
}
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
module MyFlow = TaintTracking::Global<MyConfig>;

View File

@@ -27,8 +27,6 @@ private module MyConfig implements DataFlow::ConfigSig {
}
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
module MyFlow = TaintTracking::Global<MyConfig>;

View File

@@ -26,8 +26,6 @@ private module MyConfig implements DataFlow::ConfigSig {
}
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
module MyFlow = TaintTracking::Global<MyConfig>;

View File

@@ -36,8 +36,6 @@ private module MyConfig implements DataFlow::ConfigSig {
}
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
module MyFlow = TaintTracking::Global<MyConfig>;

View File

@@ -27,8 +27,6 @@ private module MyConfig implements DataFlow::ConfigSig {
}
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sink) { none() }
}
module MyFlow = TaintTracking::Global<MyConfig>;

View File

@@ -0,0 +1,3 @@
## 0.6.12
No user-facing changes.

View File

@@ -0,0 +1,3 @@
## 0.6.13
No user-facing changes.

View File

@@ -1,2 +1,2 @@
---
lastReleaseVersion: 0.6.11
lastReleaseVersion: 0.6.13

View File

@@ -1,5 +1,5 @@
/**
* @name Artifact Poisoning (Path Traversal).
* @name Artifact Poisoning (Path Traversal)
* @description An attacker may be able to poison the workflow's artifacts and influence subsequent steps.
* @kind problem
* @problem.severity error

View File

@@ -1,5 +1,5 @@
name: codeql/actions-queries
version: 0.6.11
version: 0.6.14-dev
library: false
warnOnImplicitThis: true
groups: [actions, queries]

View File

@@ -9,6 +9,7 @@
"fragments": [
"/*- Compilations -*/",
"/*- External data -*/",
"/*- Overlay support -*/",
"/*- Files and folders -*/",
"/*- Diagnostic messages -*/",
"/*- Diagnostic messages: severity -*/",

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,3 @@
description: Support expanded compilation argument lists
compatibility: full
compilation_expanded_args.rel: delete

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,2 @@
description: Fix decltype qualifier issue
compatibility: full

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,4 @@
description: Add databaseMetadata and overlayChangedFiles relations
compatibility: full
databaseMetadata.rel: delete
overlayChangedFiles.rel: delete

View File

@@ -1,3 +1,17 @@
## 6.1.0
### New Features
* New predicates `getAnExpandedArgument` and `getExpandedArgument` were added to the `Compilation` class, yielding compilation arguments after expansion of response files.
### Bug Fixes
* Improve performance of the range analysis in cases where it would otherwise take an exorbitant amount of time.
## 6.0.1
No user-facing changes.
## 6.0.0
### Breaking Changes

View File

@@ -0,0 +1,4 @@
---
category: minorAnalysis
---
* The class `DataFlow::FieldContent` now covers both `union` and `struct`/`class` types. A new predicate `FieldContent.getAField` has been added to access the union members associated with the `FieldContent`. The old `FieldContent` has been renamed to `NonUnionFieldContent`.
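
A minimal C++ sketch (hypothetical names) of the kind of union flow the unified `FieldContent` now represents: a write through one union member is observable through another member of the same union.

```
#include <cstring>

union Packet {
  char raw[8];
  long value;
};

long roundtrip(const char *untrusted) {
  Packet p;
  // Writing through the `raw` member...
  std::memcpy(p.raw, untrusted, sizeof p.raw);
  // ...can be observed through the `value` member; both accesses map to
  // the same union content in the data-flow library.
  return p.value;
}
```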

View File

@@ -0,0 +1,3 @@
## 6.0.1
No user-facing changes.

View File

@@ -0,0 +1,9 @@
## 6.1.0
### New Features
* New predicates `getAnExpandedArgument` and `getExpandedArgument` were added to the `Compilation` class, yielding compilation arguments after expansion of response files.
### Bug Fixes
* Improve performance of the range analysis in cases where it would otherwise take an exorbitant amount of time.
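
For background, the new expanded-argument predicates concern `@file` response-file arguments, whose contents are further command-line arguments. A rough, self-contained C++ sketch of that expansion (an illustration of the concept only, not the extractor's implementation):

```
#include <fstream>
#include <string>
#include <vector>

// Expand any `@file` argument into the whitespace-separated arguments
// stored in that file; every other argument is kept literally.
std::vector<std::string> expandArgs(const std::vector<std::string> &args) {
  std::vector<std::string> out;
  for (const auto &arg : args) {
    if (!arg.empty() && arg[0] == '@') {
      std::ifstream rsp(arg.substr(1));
      std::string word;
      while (rsp >> word)
        out.push_back(word);
    } else {
      out.push_back(arg);
    }
  }
  return out;
}
```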

View File

@@ -1,2 +1,2 @@
---
lastReleaseVersion: 6.0.0
lastReleaseVersion: 6.1.0

View File

@@ -40,7 +40,7 @@ class KnownOpenSslEllipticCurveConstantAlgorithmInstance extends OpenSslAlgorith
result = this.(Call).getTarget().getName()
}
override Crypto::EllipticCurveFamilyType getEllipticCurveFamilyType() {
override Crypto::EllipticCurveType getEllipticCurveType() {
if
Crypto::ellipticCurveNameToKnownKeySizeAndFamilyMapping(this.getParsedEllipticCurveName(), _,
_)

View File

@@ -72,7 +72,7 @@ class KnownOpenSslHashConstantAlgorithmInstance extends OpenSslAlgorithmInstance
override OpenSslAlgorithmValueConsumer getAvc() { result = getterCall }
override Crypto::THashType getHashFamily() {
override Crypto::THashType getHashType() {
knownOpenSslConstantToHashFamilyType(this, result)
or
not knownOpenSslConstantToHashFamilyType(this, _) and result = Crypto::OtherHashType()

View File

@@ -0,0 +1,9 @@
extensions:
- addsTo:
pack: codeql/cpp-all
extensible: summaryModel
data: # namespace, type, subtypes, name, signature, ext, input, output, kind, provenance
- ["", "", False, "tolower", "", "", "Argument[0]", "ReturnValue", "taint", "manual"]
- ["std", "", False, "tolower", "", "", "Argument[0]", "ReturnValue", "taint", "manual"]
- ["", "", False, "toupper", "", "", "Argument[0]", "ReturnValue", "taint", "manual"]
- ["std", "", False, "toupper", "", "", "Argument[0]", "ReturnValue", "taint", "manual"]

View File

@@ -0,0 +1,7 @@
extensions:
- addsTo:
pack: codeql/cpp-all
extensible: summaryModel
data: # namespace, type, subtypes, name, signature, ext, input, output, kind, provenance
- ["", "", False, "iconv", "", "", "Argument[**1]", "Argument[**3]", "value", "manual"]

View File

@@ -1,5 +1,5 @@
name: codeql/cpp-all
version: 6.0.0
version: 6.1.1-dev
groups: cpp
dbscheme: semmlecode.cpp.dbscheme
extractor: cpp
@@ -21,3 +21,4 @@ dataExtensions:
- ext/deallocation/*.model.yml
- ext/allocation/*.model.yml
warnOnImplicitThis: true
compileForOverlayEval: true

View File

@@ -94,6 +94,25 @@ class Compilation extends @compilation {
*/
string getArgument(int i) { compilation_args(this, i, result) }
/**
* Gets an expanded argument passed to the extractor on this invocation.
*/
string getAnExpandedArgument() { result = this.getExpandedArgument(_) }
/**
* Gets the `i`th expanded argument passed to the extractor on this
* invocation.
*
* This is similar to `getArgument`, but for a `@someFile` argument, it
* includes the arguments from that file, rather than just taking the
* argument literally.
*/
string getExpandedArgument(int i) {
if exists(string arg | compilation_expanded_args(this, _, arg))
then compilation_expanded_args(this, i, result)
else result = this.getArgument(i)
}
/**
* Gets the total amount of CPU time spent processing all the files in the
* front-end and extractor.

View File

@@ -171,12 +171,14 @@ class Function extends Declaration, ControlFlowNode, AccessHolder, @function {
* Gets the nth parameter of this function. There is no result for the
* implicit `this` parameter, and there is no `...` varargs pseudo-parameter.
*/
pragma[nomagic]
Parameter getParameter(int n) { params(unresolveElement(result), underlyingElement(this), n, _) }
/**
* Gets a parameter of this function. There is no result for the implicit
* `this` parameter, and there is no `...` varargs pseudo-parameter.
*/
pragma[nomagic]
Parameter getAParameter() { params(unresolveElement(result), underlyingElement(this), _, _) }
/**

View File

@@ -144,14 +144,14 @@ class NameQualifiableElement extends Element, @namequalifiableelement {
class NameQualifyingElement extends Element, @namequalifyingelement {
/**
* Gets a name qualifier for which this is the qualifying namespace or
* user-defined type. For example: class `X` is the
* user-defined type, or decltype. For example: class `X` is the
* `NameQualifyingElement` and `X::` is the `NameQualifier`.
*/
NameQualifier getANameQualifier() {
namequalifiers(unresolveElement(result), _, underlyingElement(this), _)
}
/** Gets the name of this namespace or user-defined type. */
/** Gets the name of this namespace, user-defined type, or decltype. */
string getName() { none() }
}
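
For context, a short C++ example of the case this documentation update describes, with `decltype` acting as the qualifying element of a qualified name:

```
#include <vector>

std::vector<int> make();

void f() {
  auto v = make();
  // `decltype(v)` is the name-qualifying element here: it qualifies the
  // nested names `value_type` and `size_type`.
  decltype(v)::value_type first = 0;
  decltype(v)::size_type count = v.size();
  (void)first;
  (void)count;
}
```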

View File

@@ -1146,7 +1146,7 @@ class DerivedType extends Type, @derivedtype {
* decltype(a) b;
* ```
*/
class Decltype extends Type {
class Decltype extends Type, NameQualifyingElement {
Decltype() { decltypes(underlyingElement(this), _, 0, _, _) }
override string getAPrimaryQlClass() { result = "Decltype" }
@@ -1187,7 +1187,7 @@ class Decltype extends Type {
override string toString() { result = "decltype(...)" }
override string getName() { none() }
override string getName() { result = "decltype(...)" }
override int getSize() { result = this.getBaseType().getSize() }
@@ -1247,7 +1247,7 @@ class TypeofType extends Type {
override string toString() { result = "typeof(...)" }
override string getName() { none() }
override string getName() { result = "typeof(...)" }
override int getSize() { result = this.getBaseType().getSize() }
@@ -1311,8 +1311,6 @@ class TypeofTypeType extends TypeofType {
Type getType() { type_operators(underlyingElement(this), unresolveElement(result), _, _) }
override string getAPrimaryQlClass() { result = "TypeofTypeType" }
override string toString() { result = "typeof(...)" }
}
/**
@@ -1394,7 +1392,7 @@ class IntrinsicTransformedType extends Type {
override Type resolveTypedefs() { result = this.getBaseType().resolveTypedefs() }
override string getName() { none() }
override string getName() { result = this.getIntrinsicName() + "(...)" }
override int getSize() { result = this.getBaseType().getSize() }

View File

@@ -380,18 +380,20 @@ private module LogicInput_v1 implements GuardsImpl::LogicInputSig {
GuardsInput::Expr getARead() { result = this.getAUse().getDef() }
}
class SsaWriteDefinition extends SsaDefinition instanceof ExplicitDefinition {
GuardsInput::Expr getDefinition() { result = super.getAssignedInstruction() }
class SsaExplicitWrite extends SsaDefinition instanceof ExplicitDefinition {
GuardsInput::Expr getValue() { result = super.getAssignedInstruction() }
}
class SsaPhiNode extends SsaDefinition instanceof PhiNode {
class SsaPhiDefinition extends SsaDefinition instanceof PhiNode {
predicate hasInputFromBlock(SsaDefinition inp, BasicBlock bb) {
super.hasInputFromBlock(inp, bb)
}
}
predicate parameterDefinition(GuardsInput::Parameter p, SsaDefinition def) {
def.isParameterDefinition(p)
class SsaParameterInit extends SsaDefinition {
SsaParameterInit() { this.isParameterDefinition(_) }
GuardsInput::Parameter getParameter() { this.isParameterDefinition(result) }
}
predicate additionalImpliesStep(
@@ -701,6 +703,7 @@ private class GuardConditionFromBinaryLogicalOperator extends GuardConditionImpl
)
}
pragma[nomagic]
override predicate comparesLt(
Cpp::Expr left, Cpp::Expr right, int k, boolean isLessThan, boolean testIsTrue
) {
@@ -711,6 +714,7 @@ private class GuardConditionFromBinaryLogicalOperator extends GuardConditionImpl
)
}
pragma[nomagic]
override predicate comparesLt(Cpp::Expr e, int k, boolean isLessThan, GuardValue value) {
exists(GuardValue partValue, GuardCondition part |
this.(Cpp::BinaryLogicalOperation)
@@ -736,6 +740,7 @@ private class GuardConditionFromBinaryLogicalOperator extends GuardConditionImpl
)
}
pragma[nomagic]
override predicate comparesEq(
Cpp::Expr left, Cpp::Expr right, int k, boolean areEqual, boolean testIsTrue
) {
@@ -755,6 +760,7 @@ private class GuardConditionFromBinaryLogicalOperator extends GuardConditionImpl
)
}
pragma[nomagic]
override predicate comparesEq(Cpp::Expr e, int k, boolean areEqual, GuardValue value) {
exists(GuardValue partValue, GuardCondition part |
this.(Cpp::BinaryLogicalOperation)

View File

@@ -15,16 +15,17 @@
* reading.
* 1. The `namespace` column selects a namespace.
* 2. The `type` column selects a type within that namespace. This column can
* introduce template names that can be mentioned in the `signature` column.
* introduce template type names that can be mentioned in the `signature` column.
* For example, `vector<T,Allocator>` introduces the template names `T` and
* `Allocator`.
* `Allocator`. Non-type template parameters cannot be specified.
* 3. The `subtypes` is a boolean that indicates whether to jump to an
* arbitrary subtype of that type. Set this to `false` if leaving the `type`
* blank (for example, a free function).
* 4. The `name` column optionally selects a specific named member of the type.
* Like the `type` column, this column can introduce template names that can
* be mentioned in the `signature` column. For example, `insert<InputIt>`
* introduces the template name `InputIt`.
* Like the `type` column, this column can introduce template type names
* that can be mentioned in the `signature` column. For example,
* `insert<InputIt>` introduces the template name `InputIt`. Non-type
* template parameters cannot be specified.
* 5. The `signature` column optionally restricts the named member. If
* `signature` is blank then no such filtering is done. The format of the
* signature is a comma-separated list of types enclosed in parentheses. The
@@ -633,6 +634,28 @@ string getParameterTypeWithoutTemplateArguments(Function f, int n, boolean canon
canonical = true
}
/**
* Gets the largest index of a template parameter of `templateFunction` that
* is a type template parameter.
*/
private int getLastTypeTemplateFunctionParameterIndex(Function templateFunction) {
result =
max(int index | templateFunction.getTemplateArgument(index) instanceof TypeTemplateParameter)
}
/** Gets the number of supported template parameters for `templateFunction`. */
private int getNumberOfSupportedFunctionTemplateArguments(Function templateFunction) {
result = count(int i | exists(getSupportedFunctionTemplateArgument(templateFunction, i)) | i)
}
/** Gets the `i`'th supported template parameter for `templateFunction`. */
private Locatable getSupportedFunctionTemplateArgument(Function templateFunction, int i) {
result = templateFunction.getTemplateArgument(i) and
// We don't yet support non-type template parameters in the middle of a
// template parameter list
i <= getLastTypeTemplateFunctionParameterIndex(templateFunction)
}
/**
* Normalize the `n`'th parameter of `f` by replacing template names
* with `func:N` (where `N` is the index of the template).
@@ -640,27 +663,51 @@ string getParameterTypeWithoutTemplateArguments(Function f, int n, boolean canon
private string getTypeNameWithoutFunctionTemplates(Function f, int n, int remaining) {
exists(Function templateFunction |
templateFunction = getFullyTemplatedFunction(f) and
remaining = templateFunction.getNumberOfTemplateArguments() and
remaining = getNumberOfSupportedFunctionTemplateArguments(templateFunction) and
result = getParameterTypeWithoutTemplateArguments(templateFunction, n, _)
)
or
exists(string mid, TypeTemplateParameter tp, Function templateFunction |
mid = getTypeNameWithoutFunctionTemplates(f, n, remaining + 1) and
templateFunction = getFullyTemplatedFunction(f) and
tp = templateFunction.getTemplateArgument(remaining) and
tp = getSupportedFunctionTemplateArgument(templateFunction, remaining)
|
result = mid.replaceAll(tp.getName(), "func:" + remaining.toString())
)
}
/**
* Gets the largest index of a template parameter of `templateClass` that
* is a type template parameter.
*/
private int getLastTypeTemplateClassParameterIndex(Class templateClass) {
result =
max(int index | templateClass.getTemplateArgument(index) instanceof TypeTemplateParameter)
}
/** Gets the `i`'th supported template parameter for `templateClass`. */
private Locatable getSupportedClassTemplateArgument(Class templateClass, int i) {
result = templateClass.getTemplateArgument(i) and
// We don't yet support non-type template parameters in the middle of a
// template parameter list
i <= getLastTypeTemplateClassParameterIndex(templateClass)
}
/** Gets the number of supported template parameters for `templateClass`. */
private int getNumberOfSupportedClassTemplateArguments(Class templateClass) {
result = count(int i | exists(getSupportedClassTemplateArgument(templateClass, i)) | i)
}
/**
* Normalize the `n`'th parameter of `f` by replacing template names
* with `class:N` (where `N` is the index of the template).
*/
pragma[nomagic]
private string getTypeNameWithoutClassTemplates(Function f, int n, int remaining) {
// If there is a declaring type then we start by expanding the function templates
exists(Class template |
isClassConstructedFrom(f.getDeclaringType(), template) and
remaining = template.getNumberOfTemplateArguments() and
remaining = getNumberOfSupportedClassTemplateArguments(template) and
result = getTypeNameWithoutFunctionTemplates(f, n, 0)
)
or
@@ -672,7 +719,8 @@ private string getTypeNameWithoutClassTemplates(Function f, int n, int remaining
exists(string mid, TypeTemplateParameter tp, Class template |
mid = getTypeNameWithoutClassTemplates(f, n, remaining + 1) and
isClassConstructedFrom(f.getDeclaringType(), template) and
tp = template.getTemplateArgument(remaining) and
tp = getSupportedClassTemplateArgument(template, remaining)
|
result = mid.replaceAll(tp.getName(), "class:" + remaining.toString())
)
}
@@ -727,6 +775,7 @@ private string getSignatureWithoutClassTemplateNames(
* - The `remaining` number of template arguments in `partiallyNormalizedSignature`
* with their index in `nameArgs`.
*/
pragma[nomagic]
private string getSignatureWithoutFunctionTemplateNames(
string partiallyNormalizedSignature, string typeArgs, string nameArgs, int remaining
) {
@@ -770,6 +819,7 @@ private string getSignatureWithoutFunctionTemplateNames(
* ```
* In this case, `normalizedSignature` will be `"(const func:0 &,int,class:1,class:0 *)"`.
*/
pragma[nomagic]
private predicate elementSpecWithArguments(
string signature, string type, string name, string normalizedSignature, string typeArgs,
string nameArgs
@@ -789,6 +839,35 @@ private string getSignatureParameterName(string signature, string type, string n
)
}
/**
* Gets a `Function` identified by the `(namespace, type, name)` components.
*
* If `subtypes` is `true` then the result may be an override of the function
* identified by the components.
*/
pragma[nomagic]
private Function getFunction(string namespace, string type, boolean subtypes, string name) {
elementSpec(namespace, type, subtypes, name, _, _) and
(
funcHasQualifiedName(result, namespace, name) and
subtypes = false and
type = ""
or
exists(Class namedClass, Class classWithMethod |
hasClassAndName(classWithMethod, result, name) and
classHasQualifiedName(namedClass, namespace, type)
|
// member declared in the named type or a subtype of it
subtypes = true and
classWithMethod = namedClass.getADerivedClass*()
or
// member declared directly in the named type
subtypes = false and
classWithMethod = namedClass
)
)
}
/**
* Holds if the suffix containing the entries in `signature` starting at entry
* `i` matches the suffix containing the parameters of `func` starting at entry `i`.
@@ -812,13 +891,17 @@ private string getSignatureParameterName(string signature, string type, string n
* is `func:n` then the signature name is compared with the `n`'th name
* in `name`.
*/
private predicate signatureMatches(Function func, string signature, string type, string name, int i) {
pragma[nomagic]
private predicate signatureMatches(
Function func, string namespace, string signature, string type, string name, int i
) {
func = getFunction(namespace, type, _, name) and
exists(string s |
s = getSignatureParameterName(signature, type, name, i) and
s = getParameterTypeName(func, i)
) and
if exists(getParameterTypeName(func, i + 1))
then signatureMatches(func, signature, type, name, i + 1)
then signatureMatches(func, namespace, signature, type, name, i + 1)
else i = count(signature.indexOf(","))
}
@@ -833,7 +916,7 @@ module ExternalFlowDebug {
*
* Exposed for testing purposes.
*/
predicate signatureMatches_debug = signatureMatches/5;
predicate signatureMatches_debug = signatureMatches/6;
/**
* INTERNAL: Do not use.
@@ -883,6 +966,7 @@ private predicate parseParens(string s, string betweenParens) { s = "(" + betwee
* - `signatureWithoutParens` equals `signature`, but with the surrounding
* parentheses removed.
*/
pragma[nomagic]
private predicate elementSpecWithArguments0(
string signature, string type, string name, string signatureWithoutParens, string typeArgs,
string nameArgs
@@ -909,7 +993,7 @@ private predicate elementSpecMatchesSignature(
) {
elementSpec(namespace, pragma[only_bind_into](type), subtypes, pragma[only_bind_into](name),
pragma[only_bind_into](signature), _) and
signatureMatches(func, signature, type, name, 0)
signatureMatches(func, namespace, signature, type, name, 0)
}
/**
@@ -953,7 +1037,7 @@ private predicate funcHasQualifiedName(Function func, string namespace, string n
* Holds if `namedClass` is in namespace `namespace` and has
* name `type` (excluding any template parameters).
*/
bindingset[type, namespace]
bindingset[type]
pragma[inline_late]
private predicate classHasQualifiedName(Class namedClass, string namespace, string type) {
exists(string typeWithoutArgs |
@@ -969,17 +1053,14 @@ private predicate classHasQualifiedName(Class namedClass, string namespace, stri
* are also returned.
* 3. The element has name `name`
* 4. If `signature` is non-empty, then the element has a list of parameter types described by `signature`.
*
* NOTE: `namespace` is currently not used (since we don't properly extract modules yet).
*/
pragma[nomagic]
private Element interpretElement0(
string namespace, string type, boolean subtypes, string name, string signature
) {
result = getFunction(namespace, type, subtypes, name) and
(
// Non-member functions
funcHasQualifiedName(result, namespace, name) and
subtypes = false and
type = "" and
(
elementSpecMatchesSignature(result, namespace, type, subtypes, name, signature)
@@ -989,52 +1070,36 @@ private Element interpretElement0(
)
or
// Member functions
exists(Class namedClass, Class classWithMethod |
hasClassAndName(classWithMethod, result, name) and
classHasQualifiedName(namedClass, namespace, type)
|
(
elementSpecMatchesSignature(result, namespace, type, subtypes, name, signature)
or
signature = "" and
elementSpec(namespace, type, subtypes, name, "", _)
) and
(
// member declared in the named type or a subtype of it
subtypes = true and
classWithMethod = namedClass.getADerivedClass*()
or
// member declared directly in the named type
subtypes = false and
classWithMethod = namedClass
)
)
elementSpecMatchesSignature(result, namespace, type, subtypes, name, signature)
or
elementSpec(namespace, type, subtypes, name, signature, _) and
// Member variables
signature = "" and
exists(Class namedClass, Class classWithMember, MemberVariable member |
member.getName() = name and
member = classWithMember.getAMember() and
namedClass.hasQualifiedName(namespace, type) and
result = member
|
// field declared in the named type or a subtype of it (or an extension of any)
subtypes = true and
classWithMember = namedClass.getADerivedClass*()
or
// field declared directly in the named type (or an extension of it)
subtypes = false and
classWithMember = namedClass
)
or
// Global or namespace variables
elementSpec(namespace, type, subtypes, name, signature, _) and
signature = "" and
type = "" and
subtypes = false and
result = any(GlobalOrNamespaceVariable v | v.hasQualifiedName(namespace, name))
elementSpec(namespace, type, subtypes, name, signature, _)
)
or
// Member variables
elementSpec(namespace, type, subtypes, name, signature, _) and
signature = "" and
exists(Class namedClass, Class classWithMember, MemberVariable member |
member.getName() = name and
member = classWithMember.getAMember() and
namedClass.hasQualifiedName(namespace, type) and
result = member
|
// field declared in the named type or a subtype of it (or an extension of any)
subtypes = true and
classWithMember = namedClass.getADerivedClass*()
or
// field declared directly in the named type (or an extension of it)
subtypes = false and
classWithMember = namedClass
)
or
// Global or namespace variables
elementSpec(namespace, type, subtypes, name, signature, _) and
signature = "" and
type = "" and
subtypes = false and
result = any(GlobalOrNamespaceVariable v | v.hasQualifiedName(namespace, name))
}
cached

View File

@@ -750,6 +750,16 @@ class SizeofPackTypeOperator extends SizeofPackOperator {
*/
class SizeofOperator extends Expr, @runtime_sizeof {
override int getPrecedence() { result = 16 }
/**
* Gets the contained type of this `sizeof`. For example,
* the result is `int` in both cases below:
* ```
* sizeof(int);
* sizeof(42);
* ```
*/
Type getTypeOperand() { none() } // overridden in subclasses
}
/**
@@ -766,6 +776,8 @@ class SizeofExprOperator extends SizeofOperator {
/** Gets the contained expression. */
Expr getExprOperand() { result = this.getChild(0) }
override Type getTypeOperand() { result = this.getExprOperand().getType() }
override string toString() { result = "sizeof(<expr>)" }
override predicate mayBeImpure() { this.getExprOperand().mayBeImpure() }
@@ -784,8 +796,7 @@ class SizeofTypeOperator extends SizeofOperator {
override string getAPrimaryQlClass() { result = "SizeofTypeOperator" }
/** Gets the contained type. */
Type getTypeOperand() { sizeof_bind(underlyingElement(this), unresolveElement(result)) }
override Type getTypeOperand() { sizeof_bind(underlyingElement(this), unresolveElement(result)) }
override string toString() { result = "sizeof(" + this.getTypeOperand().getName() + ")" }
@@ -842,6 +853,16 @@ class AlignofTypeOperator extends AlignofOperator {
*/
class DatasizeofOperator extends Expr, @datasizeof {
override int getPrecedence() { result = 16 }
/**
* Gets the contained type of this `__datasizeof`. For example,
* the result is `int` in both cases below:
* ```
* __datasizeof(int);
* __datasizeof(42);
* ```
*/
Type getTypeOperand() { none() }
}
/**
@@ -855,6 +876,8 @@ class DatasizeofExprOperator extends DatasizeofOperator {
/** Gets the contained expression. */
Expr getExprOperand() { result = this.getChild(0) }
override Type getTypeOperand() { result = this.getExprOperand().getType() }
override string toString() { result = "__datasizeof(<expr>)" }
override predicate mayBeImpure() { this.getExprOperand().mayBeImpure() }
@@ -870,8 +893,7 @@ class DatasizeofTypeOperator extends DatasizeofOperator {
override string getAPrimaryQlClass() { result = "DatasizeofTypeOperator" }
/** Gets the contained type. */
Type getTypeOperand() { sizeof_bind(underlyingElement(this), unresolveElement(result)) }
override Type getTypeOperand() { sizeof_bind(underlyingElement(this), unresolveElement(result)) }
override string toString() { result = "__datasizeof(" + this.getTypeOperand().getName() + ")" }

View File

@@ -861,6 +861,10 @@ predicate jumpStep(Node n1, Node n2) {
n2.(FlowSummaryNode).getSummaryNode())
}
bindingset[c]
pragma[inline_late]
private int getIndirectionIndexLate(Content c) { result = c.getIndirectionIndex() }
/**
* Holds if data can flow from `node1` to `node2` via an assignment to `f`.
* Thus, `node2` references an object with a field `f` that contains the
@@ -873,23 +877,17 @@ predicate jumpStep(Node n1, Node n2) {
predicate storeStepImpl(Node node1, Content c, Node node2, boolean certain) {
exists(
PostFieldUpdateNode postFieldUpdate, int indirectionIndex1, int numberOfLoads,
StoreInstruction store
StoreInstruction store, FieldContent fc
|
postFieldUpdate = node2 and
nodeHasInstruction(node1, store, pragma[only_bind_into](indirectionIndex1)) and
fc = c and
nodeHasInstruction(node1, pragma[only_bind_into](store),
pragma[only_bind_into](indirectionIndex1)) and
postFieldUpdate.getIndirectionIndex() = 1 and
numberOfLoadsFromOperand(postFieldUpdate.getFieldAddress(),
store.getDestinationAddressOperand(), numberOfLoads, certain)
|
exists(FieldContent fc | fc = c |
fc.getField() = postFieldUpdate.getUpdatedField() and
fc.getIndirectionIndex() = 1 + indirectionIndex1 + numberOfLoads
)
or
exists(UnionContent uc | uc = c |
uc.getAField() = postFieldUpdate.getUpdatedField() and
uc.getIndirectionIndex() = 1 + indirectionIndex1 + numberOfLoads
)
store.getDestinationAddressOperand(), numberOfLoads, certain) and
fc.getAField() = postFieldUpdate.getUpdatedField() and
getIndirectionIndexLate(fc) = 1 + indirectionIndex1 + numberOfLoads
)
or
// models-as-data summarized flow
@@ -965,22 +963,17 @@ predicate nodeHasInstruction(Node node, Instruction instr, int indirectionIndex)
* `node2`.
*/
predicate readStep(Node node1, ContentSet c, Node node2) {
exists(FieldAddress fa1, Operand operand, int numberOfLoads, int indirectionIndex2 |
exists(
FieldAddress fa1, Operand operand, int numberOfLoads, int indirectionIndex2, FieldContent fc
|
fc = c and
nodeHasOperand(node2, operand, indirectionIndex2) and
// The `1` here matches the `node2.getIndirectionIndex() = 1` conjunct
// in `storeStep`.
nodeHasOperand(node1, fa1.getObjectAddressOperand(), 1) and
numberOfLoadsFromOperand(fa1, operand, numberOfLoads, _)
|
exists(FieldContent fc | fc = c |
fc.getField() = fa1.getField() and
fc.getIndirectionIndex() = indirectionIndex2 + numberOfLoads
)
or
exists(UnionContent uc | uc = c |
uc.getAField() = fa1.getField() and
uc.getIndirectionIndex() = indirectionIndex2 + numberOfLoads
)
numberOfLoadsFromOperand(fa1, operand, numberOfLoads, _) and
fc.getAField() = fa1.getField() and
getIndirectionIndexLate(fc) = indirectionIndex2 + numberOfLoads
)
or
// models-as-data summarized flow
@@ -1574,7 +1567,7 @@ pragma[inline]
ContentApprox getContentApprox(Content c) {
exists(string prefix, Field f |
prefix = result.(FieldApproxContent).getPrefix() and
f = c.(FieldContent).getField() and
f = c.(NonUnionFieldContent).getField() and
fieldHasApproxName(f, prefix)
)
or

View File

@@ -2078,38 +2078,151 @@ predicate localExprFlow(Expr e1, Expr e2) {
localExprFlowPlus(e1, e2)
}
/**
* A canonical representation of a field.
*
* For performance reasons we want a unique `Content` that represents
* a given field across any template instantiation of a class.
*
* This is possible in _almost_ all cases, but there are cases where it is
* not possible to map between a field in the uninstantiated template to a
* field in the instantiated template. This happens in the case of local class
* definitions (because the local class is not the template that constructs
* the instantiation - it is the enclosing function). So this abstract class
* has two implementations: a non-local case (where we can represent a
* canonical field as the field declaration from an uninstantiated class
* template or a non-templated class), and a local case (where we simply use
* the field from the instantiated class).
*/
abstract private class CanonicalField extends Field {
/** Gets a field represented by this canonical field. */
abstract Field getAField();
/**
* Gets a class that declares a field represented by this canonical field.
*/
abstract Class getADeclaringType();
/**
* Gets a type that this canonical field may have. Note that this may
* not be a unique type. For example, consider this case:
* ```
* template<typename T>
* struct S { T x; };
*
* S<int> s1;
* S<char> s2;
* ```
* In this case the canonical field corresponding to `S::x` has two types:
* `int` and `char`.
*/
Type getAType() { result = this.getAField().getType() }
Type getAnUnspecifiedType() { result = this.getAType().getUnspecifiedType() }
}
private class NonLocalCanonicalField extends CanonicalField {
Class declaringType;
NonLocalCanonicalField() {
declaringType = this.getDeclaringType() and
not declaringType.isFromTemplateInstantiation(_) and
not declaringType.isLocal() // handled in LocalCanonicalField
}
override Field getAField() {
exists(Class c | result.getDeclaringType() = c |
// Either the declaring class of the field is a template instantiation
// that has been constructed from this canonical declaration
c.isConstructedFrom(declaringType) and
pragma[only_bind_out](result.getName()) = pragma[only_bind_out](this.getName())
or
// or this canonical declaration is not a template.
not c.isConstructedFrom(_) and
result = this
)
}
override Class getADeclaringType() {
result = this.getDeclaringType()
or
result.isConstructedFrom(this.getDeclaringType())
}
}
private class LocalCanonicalField extends CanonicalField {
Class declaringType;
LocalCanonicalField() {
declaringType = this.getDeclaringType() and
declaringType.isLocal()
}
override Field getAField() { result = this }
override Class getADeclaringType() { result = declaringType }
}
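A brief C++ illustration (hypothetical names) of the local-class case handled by `LocalCanonicalField`: the local class is not itself a class template (its instantiation is driven by the enclosing function), so its fields cannot be mapped back to a field of an uninstantiated template:

```
template <typename T>
void wrapper() {
  // A local class: `Local` is not a class template, so a field of
  // `Local` inside wrapper<int> has no counterpart in an uninstantiated
  // template; each instantiation keeps its own field.
  struct Local {
    T payload;
  };
  Local l;
  l.payload = T();
}

void callers() {
  wrapper<int>();  // the local `payload` field has type int here
  wrapper<char>(); // ...and type char here
}
```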
/**
A canonical representation of a `Union`. See `CanonicalField` for an explanation of
* why we need a canonical representation.
*/
abstract private class CanonicalUnion extends Union {
/** Gets a union represented by this canonical union. */
abstract Union getAUnion();
/** Gets a canonical field of this canonical union. */
CanonicalField getACanonicalField() { result.getDeclaringType() = this }
}
private class NonLocalCanonicalUnion extends CanonicalUnion {
NonLocalCanonicalUnion() { not this.isFromTemplateInstantiation(_) and not this.isLocal() }
override Union getAUnion() {
result = this
or
result.isConstructedFrom(this)
}
}
private class LocalCanonicalUnion extends CanonicalUnion {
LocalCanonicalUnion() { this.isLocal() }
override Union getAUnion() { result = this }
}
bindingset[f]
pragma[inline_late]
private int getFieldSize(Field f) { result = f.getType().getSize() }
private int getFieldSize(CanonicalField f) { result = max(f.getAType().getSize()) }
/**
* Gets a field in the union `u` whose size
* is `bytes` number of bytes.
*/
private Field getAFieldWithSize(Union u, int bytes) {
result = u.getAField() and
private CanonicalField getAFieldWithSize(CanonicalUnion u, int bytes) {
result = u.getACanonicalField() and
bytes = getFieldSize(result)
}
cached
private newtype TContent =
TFieldContent(Field f, int indirectionIndex) {
// the indirection index for field content starts at 1 (because `TFieldContent` is thought of as
TNonUnionContent(CanonicalField f, int indirectionIndex) {
// the indirection index for field content starts at 1 (because `TNonUnionContent` is thought of as
// the address of the field, `FieldAddress` in the IR).
indirectionIndex = [1 .. SsaImpl::getMaxIndirectionsForType(f.getUnspecifiedType())] and
indirectionIndex = [1 .. max(SsaImpl::getMaxIndirectionsForType(f.getAnUnspecifiedType()))] and
// Reads and writes of union fields are tracked using `UnionContent`.
not f.getDeclaringType() instanceof Union
} or
TUnionContent(Union u, int bytes, int indirectionIndex) {
exists(Field f |
f = u.getAField() and
TUnionContent(CanonicalUnion u, int bytes, int indirectionIndex) {
exists(CanonicalField f |
f = u.getACanonicalField() and
bytes = getFieldSize(f) and
// We key `UnionContent` by the union instead of its fields since a write to one
// field can be read by any read of the union's fields. Again, the indirection index
// is 1-based (because 0 is considered the address).
indirectionIndex =
[1 .. max(SsaImpl::getMaxIndirectionsForType(getAFieldWithSize(u, bytes)
.getUnspecifiedType())
.getAnUnspecifiedType())
)]
)
} or
@@ -2124,14 +2237,14 @@ private newtype TContent =
*/
class Content extends TContent {
/** Gets a textual representation of this element. */
abstract string toString();
string toString() { none() } // overridden in subclasses
predicate hasLocationInfo(string path, int sl, int sc, int el, int ec) {
path = "" and sl = 0 and sc = 0 and el = 0 and ec = 0
}
/** Gets the indirection index of this `Content`. */
abstract int getIndirectionIndex();
int getIndirectionIndex() { none() } // overridden in subclasses
/**
* INTERNAL: Do not use.
@@ -2142,7 +2255,7 @@ class Content extends TContent {
* For example, a write to a field `f` implies that any content of
* the form `*f` is also cleared.
*/
abstract predicate impliesClearOf(Content c);
predicate impliesClearOf(Content c) { none() } // overridden in subclasses
}
/**
@@ -2162,37 +2275,62 @@ private module ContentStars {
private import ContentStars
/** A reference through a non-union instance field. */
private class TFieldContent = TNonUnionContent or TUnionContent;
/**
* A `Content` that references a `Field`. This may be a field of a `struct`,
* `class`, or `union`. In the case of a `union` there may be multiple fields
* associated with the same `Content`.
*/
class FieldContent extends Content, TFieldContent {
private Field f;
/** Gets a `Field` of this `Content`. */
Field getAField() { none() }
/**
* Gets the field associated with this `Content`, if a unique one exists.
*
* For fields from template instantiations this predicate may still return
* more than one field, but all the fields will be constructed from the same
* template.
*/
Field getField() { none() } // overridden in subclasses
override int getIndirectionIndex() { none() } // overridden in subclasses
override string toString() { none() } // overridden in subclasses
override predicate impliesClearOf(Content c) { none() } // overridden in subclasses
}
/** A reference through a non-union instance field. */
class NonUnionFieldContent extends FieldContent, TNonUnionContent {
private CanonicalField f;
private int indirectionIndex;
FieldContent() { this = TFieldContent(f, indirectionIndex) }
NonUnionFieldContent() { this = TNonUnionContent(f, indirectionIndex) }
override string toString() { result = contentStars(this) + f.toString() }
Field getField() { result = f }
final override Field getField() { result = f.getAField() }
override Field getAField() { result = this.getField() }
/** Gets the indirection index of this `FieldContent`. */
pragma[inline]
override int getIndirectionIndex() {
pragma[only_bind_into](result) = pragma[only_bind_out](indirectionIndex)
}
override int getIndirectionIndex() { result = indirectionIndex }
override predicate impliesClearOf(Content c) {
exists(FieldContent fc |
fc = c and
fc.getField() = f and
exists(int i |
c = TNonUnionContent(f, i) and
// If `this` is `f` then `c` is cleared if it's of the
// form `*f`, `**f`, etc.
fc.getIndirectionIndex() >= indirectionIndex
i >= indirectionIndex
)
}
}
/** A reference through an instance field of a union. */
class UnionContent extends Content, TUnionContent {
private Union u;
class UnionContent extends FieldContent, TUnionContent {
private CanonicalUnion u;
private int indirectionIndex;
private int bytes;
@@ -2200,27 +2338,31 @@ class UnionContent extends Content, TUnionContent {
override string toString() { result = contentStars(this) + u.toString() }
final override Field getField() { result = unique( | | u.getACanonicalField()).getAField() }
/** Gets a field of the underlying union of this `UnionContent`, if any. */
Field getAField() { result = u.getAField() and getFieldSize(result) = bytes }
/** Gets the underlying union of this `UnionContent`. */
Union getUnion() { result = u }
/** Gets the indirection index of this `UnionContent`. */
pragma[inline]
override int getIndirectionIndex() {
pragma[only_bind_into](result) = pragma[only_bind_out](indirectionIndex)
override Field getAField() {
exists(CanonicalField cf |
cf = u.getACanonicalField() and
result = cf.getAField() and
getFieldSize(cf) = bytes
)
}
/** Gets the underlying union of this `UnionContent`. */
Union getUnion() { result = u.getAUnion() }
/** Gets the indirection index of this `UnionContent`. */
override int getIndirectionIndex() { result = indirectionIndex }
override predicate impliesClearOf(Content c) {
exists(UnionContent uc |
uc = c and
uc.getUnion() = u and
exists(int i |
c = TUnionContent(u, _, i) and
// If `this` is `u` then `c` is cleared if it's of the
// form `*u`, `**u`, etc. (and we ignore `bytes` because
// we know the entire union is overwritten because it's a
// union).
uc.getIndirectionIndex() >= indirectionIndex
i >= indirectionIndex
)
}
}
@@ -2234,10 +2376,7 @@ class ElementContent extends Content, TElementContent {
ElementContent() { this = TElementContent(indirectionIndex) }
pragma[inline]
override int getIndirectionIndex() {
pragma[only_bind_into](result) = pragma[only_bind_out](indirectionIndex)
}
override int getIndirectionIndex() { result = indirectionIndex }
override predicate impliesClearOf(Content c) { none() }

View File

@@ -12,8 +12,8 @@ import semmle.code.cpp.models.interfaces.Taint
import semmle.code.cpp.models.interfaces.NonThrowing
/**
* The standard functions `memcpy`, `memmove` and `bcopy`; and the gcc variant
* `__builtin___memcpy_chk`.
* The standard functions `memcpy`, `memmove` and `bcopy`; and variants such as
* `__builtin___memcpy_chk` and `__builtin___memmove_chk`.
*/
private class MemcpyFunction extends ArrayFunction, DataFlowFunction, SideEffectFunction,
AliasFunction, NonCppThrowingFunction
@@ -27,7 +27,9 @@ private class MemcpyFunction extends ArrayFunction, DataFlowFunction, SideEffect
// bcopy(src, dest, num)
// mempcpy(dest, src, num)
// memccpy(dest, src, c, n)
this.hasGlobalName(["bcopy", mempcpy(), "memccpy", "__builtin___memcpy_chk"])
this.hasGlobalName([
"bcopy", mempcpy(), "memccpy", "__builtin___memcpy_chk", "__builtin___memmove_chk"
])
}
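As background (an assumption about where these names usually come from, not something stated in this change): `__builtin___memcpy_chk` and `__builtin___memmove_chk` are typically introduced by glibc's `_FORTIFY_SOURCE` headers, which rewrite plain calls into checked builtins, so the model needs to match both spellings. A minimal C++ sketch:

```
#include <cstddef>
#include <cstring>

void forward(char *dst, const char *src, std::size_t n) {
  // When built with -D_FORTIFY_SOURCE=2 and optimization, glibc headers
  // typically lower this call to a checked builtin such as
  //   __builtin___memcpy_chk(dst, src, n, __builtin_object_size(dst, 0));
  std::memcpy(dst, src, n);
}
```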
/**

View File

@@ -19,7 +19,8 @@ private class MemsetFunctionModel extends ArrayFunction, DataFlowFunction, Alias
this.hasGlobalOrStdName("wmemset")
or
this.hasGlobalName([
bzero(), "__builtin_memset", "__builtin_memset_chk", "RtlZeroMemory", "RtlSecureZeroMemory"
bzero(), "__builtin_memset", "__builtin_memset_chk", "__builtin___memset_chk",
"RtlZeroMemory", "RtlSecureZeroMemory"
])
}
@@ -32,7 +33,7 @@ private class MemsetFunctionModel extends ArrayFunction, DataFlowFunction, Alias
or
this.hasGlobalOrStdName("wmemset")
or
this.hasGlobalName(["__builtin_memset", "__builtin_memset_chk"])
this.hasGlobalName(["__builtin_memset", "__builtin_memset_chk", "__builtin___memset_chk"])
) and
result = 1
}

View File

@@ -30,7 +30,9 @@ class StrcatFunction extends TaintFunction, DataFlowFunction, ArrayFunction, Sid
"_mbsncat", // _mbsncat(dst, src, max_amount)
"_mbsncat_l", // _mbsncat_l(dst, src, max_amount, locale)
"_mbsnbcat", // _mbsnbcat(dest, src, count)
"_mbsnbcat_l" // _mbsnbcat_l(dest, src, count, locale)
"_mbsnbcat_l", // _mbsnbcat_l(dest, src, count, locale)
"__builtin___strcat_chk", // __builtin___strcat_chk (dest, src, magic)
"__builtin___strncat_chk" // __builtin___strncat_chk (dest, src, max_amount, magic)
])
}
@@ -56,7 +58,7 @@ class StrcatFunction extends TaintFunction, DataFlowFunction, ArrayFunction, Sid
override predicate hasTaintFlow(FunctionInput input, FunctionOutput output) {
(
this.getName() = ["strncat", "wcsncat", "_mbsncat", "_mbsncat_l"] and
this.getName() = ["strncat", "wcsncat", "_mbsncat", "_mbsncat_l", "__builtin___strncat_chk"] and
input.isParameter(2)
or
this.getName() = ["_mbsncat_l", "_mbsnbcat_l"] and

View File

@@ -36,7 +36,11 @@ class StrcpyFunction extends ArrayFunction, DataFlowFunction, TaintFunction, Sid
"_mbsnbcpy", // _mbsnbcpy(dest, src, max_amount)
"stpcpy", // stpcpy(dest, src)
"stpncpy", // stpncpy(dest, src, max_amount)
"strlcpy" // strlcpy(dst, src, dst_size)
"strlcpy", // strlcpy(dst, src, dst_size)
"__builtin___strcpy_chk", // __builtin___strcpy_chk (dest, src, magic)
"__builtin___stpcpy_chk", // __builtin___stpcpy_chk (dest, src, magic)
"__builtin___stpncpy_chk", // __builtin___stpncpy_chk(dest, src, max_amount, magic)
"__builtin___strncpy_chk" // __builtin___strncpy_chk (dest, src, max_amount, magic)
])
or
(

View File

@@ -1,10 +1,10 @@
import cpp
/**
* Describes whether a relation is 'strict' (that is, a `<` or `>`
* The strictness of a relation. Either 'strict' (that is, a `<` or `>`
* relation) or 'non-strict' (a `<=` or `>=` relation).
*/
newtype RelationStrictness =
newtype TRelationStrictness =
/**
* Represents that a relation is 'strict' (that is, a `<` or `>` relation).
*/
@@ -14,6 +14,19 @@ newtype RelationStrictness =
*/
Nonstrict()
/**
* The strictness of a relation. Either 'strict' (that is, a `<` or `>`
* relation) or 'non-strict' (a `<=` or `>=` relation).
*/
class RelationStrictness extends TRelationStrictness {
/** Gets the string representation of this relation strictness. */
string toString() {
this = Strict() and result = "strict"
or
this = Nonstrict() and result = "non-strict"
}
}
/**
* Describes whether a relation is 'greater' (that is, a `>` or `>=`
* relation) or 'lesser' (a `<` or `<=` relation).
@@ -105,10 +118,10 @@ predicate relOpWithSwap(
*
* This allows for the relation to be either as written, or with its
* arguments reversed; for example, if `rel` is `x < 5` then
* `relOpWithSwapAndNegate(rel, x, 5, Lesser(), Strict(), true)`,
* `relOpWithSwapAndNegate(rel, 5, x, Greater(), Strict(), true)`,
* `relOpWithSwapAndNegate(rel, x, 5, Greater(), Nonstrict(), false)` and
* `relOpWithSwapAndNegate(rel, 5, x, Lesser(), Nonstrict(), false)` hold.
* - `relOpWithSwapAndNegate(rel, x, 5, Lesser(), Strict(), true)`,
* - `relOpWithSwapAndNegate(rel, 5, x, Greater(), Strict(), true)`,
* - `relOpWithSwapAndNegate(rel, x, 5, Greater(), Nonstrict(), false)` and
* - `relOpWithSwapAndNegate(rel, 5, x, Lesser(), Nonstrict(), false)` hold.
*/
predicate relOpWithSwapAndNegate(
RelationalOperation rel, Expr a, Expr b, RelationDirection dir, RelationStrictness strict,

View File

@@ -93,31 +93,42 @@ private float wideningUpperBounds(ArithmeticType t) {
result = 1.0 / 0.0 // +Inf
}
/** Gets the widened lower bound for a given type and lower bound. */
bindingset[type, lb]
float widenLowerBound(Type type, float lb) {
result = max(float widenLB | widenLB = wideningLowerBounds(type) and widenLB <= lb | widenLB)
}
/** Gets the widened upper bound for a given type and upper bound. */
bindingset[type, ub]
float widenUpperBound(Type type, float ub) {
result = min(float widenUB | widenUB = wideningUpperBounds(type) and widenUB >= ub | widenUB)
}
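As a rough, hypothetical C analogue of what `widenLowerBound` computes (the real candidate set comes from `wideningLowerBounds` and depends on the type):

```c
#include <stddef.h>

/* Hypothetical candidate set; the QL predicate derives one per type. */
static const double widening_lbs[] = { -1.0 / 0.0, -2147483648.0, -32768.0, -128.0, 0.0 };

/* Pick the largest candidate that is still <= lb, mirroring
 * max(widenLB | widenLB <= lb | widenLB). */
double widen_lower_bound(double lb) {
    double best = -1.0 / 0.0; /* fall back to -Inf */
    for (size_t i = 0; i < sizeof widening_lbs / sizeof widening_lbs[0]; i++) {
        if (widening_lbs[i] <= lb && widening_lbs[i] > best)
            best = widening_lbs[i];
    }
    return best;
}
```

`widenUpperBound` is the mirror image, taking the smallest candidate upper bound that is still greater than or equal to `ub`.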
/**
* Gets the value of the expression `e`, if it is a constant.
* This predicate also handles the case of constant variables initialized in different
 * compilation units, which don't necessarily have a getValue() result from the extractor.
*/
private string getValue(Expr e) {
if exists(e.getValue())
then result = e.getValue()
else
/*
* It should be safe to propagate the initialization value to a variable if:
* The type of v is const, and
* The type of v is not volatile, and
* Either:
* v is a local/global variable, or
* v is a static member variable
*/
result = e.getValue()
or
not exists(e.getValue()) and
/*
* It should be safe to propagate the initialization value to a variable if:
* The type of v is const, and
* The type of v is not volatile, and
* Either:
* v is a local/global variable, or
* v is a static member variable
*/
exists(VariableAccess access, StaticStorageDurationVariable v |
not v.getUnderlyingType().isVolatile() and
v.getUnderlyingType().isConst() and
e = access and
v = access.getTarget() and
result = getValue(v.getAnAssignedValue())
)
exists(StaticStorageDurationVariable v |
not v.getUnderlyingType().isVolatile() and
v.getUnderlyingType().isConst() and
v = e.(VariableAccess).getTarget() and
result = getValue(v.getAnAssignedValue())
)
}
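A hypothetical C example of the cross-compilation-unit case this predicate handles:

```c
/* limits.c -- the initializer lives in a different compilation unit. */
const int kMaxItems = 64;

/* user.c -- the extractor records no literal value for this access of
 * kMaxItems, but because the variable is const, not volatile, and has
 * static storage duration, getValue can safely propagate the 64 from
 * its initializer. */
extern const int kMaxItems;

int clamp(int n) {
    if (n > kMaxItems)
        return kMaxItems;
    return n;
}
```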
/**
@@ -505,6 +516,336 @@ private predicate isRecursiveExpr(Expr e) {
)
}
/**
* Provides predicates that estimate the number of bounds that the range
* analysis might produce.
*/
private module BoundsEstimate {
/**
* Gets the limit beyond which we enable widening. That is, if the estimated
* number of bounds exceeds this limit, we enable widening such that the limit
* will not be reached.
*/
float getBoundsLimit() {
// This limit is arbitrary, but low enough that it prevents timeouts on
// specific observed customer databases (and in the tests).
result = 2.0.pow(40)
}
/** Gets the maximum number of bounds possible for `t` when widening is used. */
private int getNrOfWideningBounds(ArithmeticType t) {
result = strictcount(wideningLowerBounds(t)).maximum(strictcount(wideningUpperBounds(t)))
}
/**
* Holds if `boundFromGuard(guard, v, _, branch)` holds, but without
* relying on range analysis (which would cause non-monotonic recursion
* elsewhere).
*/
private predicate hasBoundFromGuard(Expr guard, VariableAccess v, boolean branch) {
exists(Expr lhs | linearAccess(lhs, v, _, _) |
relOpWithSwapAndNegate(guard, lhs, _, _, _, branch)
or
eqOpWithSwapAndNegate(guard, lhs, _, true, branch)
or
eqZeroWithNegate(guard, lhs, true, branch)
)
}
/** Holds if `def` is a guard phi node for `v` with a bound from a guard. */
predicate isGuardPhiWithBound(RangeSsaDefinition def, StackVariable v, VariableAccess access) {
exists(Expr guard, boolean branch |
def.isGuardPhi(v, access, guard, branch) and
hasBoundFromGuard(guard, access, branch)
)
}
/**
* Gets the number of bounds for `def` when `def` is a guard phi node for the
* variable `v`.
*/
language[monotonicAggregates]
private float nrOfBoundsPhiGuard(RangeSsaDefinition def, StackVariable v) {
// If we have
//
// if (x < c) { e1 }
// e2
//
// then `e2` is both a guard phi node (guarded by `x < c`) and a normal
// phi node (control is merged after the `if` statement).
//
// Assume `x` has `n` bounds. Then `n` bounds are propagated to the guard
// phi node `{ e1 }` and, since `{ e1 }` is input to `e2` as a normal phi
// node, `n` bounds are propagated to `e2`. If we also propagate the `n`
// bounds to `e2` as a guard phi node, then we square the number of
// bounds.
//
// However in practice `x < c` is going to cut down the number of bounds:
// The tracked bounds can't flow to both branches as that would require
// them to simultaneously be greater and smaller than `c`. To approximate
// this better, the contribution from a guard phi node that is also a
// normal phi node is 1.
exists(def.getAPhiInput(v)) and
isGuardPhiWithBound(def, v, _) and
result = 1
or
not exists(def.getAPhiInput(v)) and
// If there are different `access`es, then they refer to the same variable
// with the same lower bounds. Hence adding these guards makes no sense (the
// implementation will take the union, but they'll be removed by
// deduplication), so we use `max` as an approximation.
result =
max(VariableAccess access | isGuardPhiWithBound(def, v, access) | nrOfBoundsExpr(access))
or
def.isPhiNode(v) and
not isGuardPhiWithBound(def, v, _) and
result = 0
}
/**
* Gets the number of bounds for `def` when `def` is a normal phi node for the
* variable `v`.
*/
language[monotonicAggregates]
private float nrOfBoundsPhiNormal(RangeSsaDefinition def, StackVariable v) {
result =
strictsum(RangeSsaDefinition inputDef |
inputDef = def.getAPhiInput(v)
|
nrOfBoundsDef(inputDef, v)
)
or
def.isPhiNode(v) and
not exists(def.getAPhiInput(v)) and
result = 0
}
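A small, hypothetical C illustration of the normal-phi-node case: the estimate at the merge point is the sum of the estimates of the incoming definitions.

```c
int merged(int cond, int a, int b) {
    int x;
    if (cond)
        x = a; /* first incoming definition: say n1 estimated bounds */
    else
        x = b; /* second incoming definition: say n2 estimated bounds */
    /* phi node for x at this point: estimated n1 + n2 bounds */
    return x;
}
```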
/**
* Gets the number of bounds for `def` when `def` is an NE phi node for the
* variable `v`.
*/
language[monotonicAggregates]
float nrOfBoundsNEPhi(RangeSsaDefinition def, StackVariable v) {
// If there are different `access`es, then they refer to the same variable
// with the same lower bounds. Hence adding these guards makes no sense (the
// implementation will take the union, but they'll be removed by
// deduplication), so we use `max` as an approximation.
result = max(VariableAccess access | isNEPhi(v, def, access, _) | nrOfBoundsExpr(access))
or
def.isPhiNode(v) and
not isNEPhi(v, def, _, _) and
result = 0
}
/**
* Gets the number of bounds for `def` when `def` is an unsupported guard phi
* node for the variable `v`.
*/
language[monotonicAggregates]
private float nrOfBoundsUnsupportedGuardPhi(RangeSsaDefinition def, StackVariable v) {
// If there are different `access`es, then they refer to the same variable
// with the same lower bounds. Hence adding these guards makes no sense (the
// implementation will take the union, but they'll be removed by
// deduplication), so we use `max` as an approximation.
result =
max(VariableAccess access | isUnsupportedGuardPhi(v, def, access) | nrOfBoundsExpr(access))
or
def.isPhiNode(v) and
not isUnsupportedGuardPhi(v, def, _) and
result = 0
}
private float nrOfBoundsPhi(RangeSsaDefinition def, StackVariable v) {
// The cases for phi nodes are not mutually exclusive. For instance a phi
// node can be both a guard phi node and a normal phi node. To handle this
// we sum the contributions from the different cases.
result =
nrOfBoundsPhiGuard(def, v) + nrOfBoundsPhiNormal(def, v) + nrOfBoundsNEPhi(def, v) +
nrOfBoundsUnsupportedGuardPhi(def, v)
}
/** Gets the estimated number of bounds for `def` and `v`. */
float nrOfBoundsDef(RangeSsaDefinition def, StackVariable v) {
// Recursive definitions are already widened, so we simply estimate them as
// having the number of widening bounds available. This is crucial as it
// ensures that we don't follow recursive cycles when calculating the
// estimate. Had that not been the case, the estimate itself would be at risk
// of causing performance issues and being non-functional.
if isRecursiveDef(def, v)
then result = getNrOfWideningBounds(getVariableRangeType(v))
else (
// Definitions with a defining value
exists(Expr defExpr | assignmentDef(def, v, defExpr) and result = nrOfBoundsExpr(defExpr))
or
// Assignment operations with a defining value
exists(AssignOperation assignOp |
def = assignOp and
assignOp.getLValue() = v.getAnAccess() and
result = nrOfBoundsExpr(assignOp)
)
or
// Phi nodes
result = nrOfBoundsPhi(def, v)
or
unanalyzableDefBounds(def, v, _, _) and result = 1
)
}
/**
* Gets a naive estimate of the number of bounds for `e`.
*
* The estimate is like an abstract interpretation of the range analysis,
* where the abstract value is the number of bounds. For instance,
* `nrOfBoundsExpr(12) = 1` and `nrOfBoundsExpr(x + y) = nrOfBoundsExpr(x) *
* nrOfBoundsExpr(y)`.
*
* The estimated number of bounds will usually be greater than the actual
 * number of bounds, as the estimate cannot detect cases where bounds are cut
* down when tracked precisely. For instance, in
* ```c
* int x = 1;
* if (cond) { x = 1; }
* int y = x + x;
* ```
* the actual number of bounds for `y` is 1. However, the estimate will be 4
* as the conditional assignment to `x` gives two bounds for `x` on the last
* line and the addition gives 2 * 2 bounds. There are two sources of inaccuracies:
*
* 1. Without tracking the lower bounds we can't see that `x` is assigned a
* value that is equal to its lower bound.
* 2. Had the conditional assignment been `x = 2` then the estimate of two
* bounds for `x` would have been correct. However, the estimate of 4 for `y`
* would still be incorrect. Summing the actual bounds `{1,2}` with itself
* gives `{2,3,4}` which is only three bounds. Again, we can't realise this
* without tracking the bounds.
*
 * Since these inaccuracies compound, the estimated number of bounds can often
* be _much_ greater than the actual number of bounds. Do note though that the
* estimate is not _guaranteed_ to be an upper bound. In some cases the
* approximations might underestimate the number of bounds.
*
* This predicate is functional. This is crucial as:
*
 * - It ensures that computing the estimate itself is fast.
* - Our use of monotonic aggregates assumes functionality.
*
* Any non-functional case should be considered a bug.
*/
float nrOfBoundsExpr(Expr e) {
// Similarly to what we do for definitions, we do not attempt to measure the
// number of bounds for recursive expressions.
if isRecursiveExpr(e)
then result = getNrOfWideningBounds(e.getUnspecifiedType())
else
if analyzableExpr(e)
then
// The cases here are an abstraction of, and mirror, the cases inside
// `getLowerBoundsImpl`/`getUpperBoundsImpl`.
result = 1 and exists(getValue(e).toFloat())
or
exists(Expr operand | result = nrOfBoundsExpr(operand) |
effectivelyMultipliesByPositive(e, operand, _)
or
effectivelyMultipliesByNegative(e, operand, _)
)
or
exists(ConditionalExpr condExpr |
e = condExpr and
result = nrOfBoundsExpr(condExpr.getThen()) * nrOfBoundsExpr(condExpr.getElse())
)
or
exists(BinaryOperation binop |
e = binop and
result = nrOfBoundsExpr(binop.getLeftOperand()) * nrOfBoundsExpr(binop.getRightOperand())
|
e instanceof MaxExpr or
e instanceof MinExpr or
e instanceof AddExpr or
e instanceof SubExpr or
e instanceof UnsignedMulExpr or
e instanceof UnsignedBitwiseAndExpr
)
or
exists(AssignExpr assign | e = assign and result = nrOfBoundsExpr(assign.getRValue()))
or
exists(AssignArithmeticOperation assignOp |
e = assignOp and
result = nrOfBoundsExpr(assignOp.getLValue()) * nrOfBoundsExpr(assignOp.getRValue())
|
e instanceof AssignAddExpr or
e instanceof AssignSubExpr or
e instanceof UnsignedAssignMulExpr
)
or
// Handles `AssignMulByPositiveConstantExpr` and `AssignMulByNegativeConstantExpr`
exists(AssignMulByConstantExpr mulExpr |
e = mulExpr and
result = nrOfBoundsExpr(mulExpr.getLValue())
)
or
// Handles the prefix and postfix increment and decrement operators.
exists(CrementOperation crementOp |
e = crementOp and result = nrOfBoundsExpr(crementOp.getOperand())
)
or
exists(RemExpr remExpr | e = remExpr | result = nrOfBoundsExpr(remExpr.getRightOperand()))
or
exists(Conversion convExpr |
e = convExpr and
if convExpr.getUnspecifiedType() instanceof BoolType
then result = 1
else result = nrOfBoundsExpr(convExpr.getExpr())
)
or
exists(RangeSsaDefinition def, StackVariable v |
e = def.getAUse(v) and
result = nrOfBoundsDef(def, v) and
// Avoid returning two numbers when `e` is a use with a constant value.
not exists(getValue(e).toFloat())
)
or
exists(RShiftExpr rsExpr |
e = rsExpr and
exists(getValue(rsExpr.getRightOperand().getFullyConverted()).toInt()) and
result = nrOfBoundsExpr(rsExpr.getLeftOperand())
)
else (
exists(exprMinVal(e)) and result = 1
)
}
}
/**
* Holds if `v` is a variable for which widening should be used, as otherwise a
* very large number of bounds might be generated during the range analysis for
* `v`.
*/
private predicate varHasTooManyBounds(StackVariable v) {
exists(RangeSsaDefinition def |
def.getAVariable() = v and
BoundsEstimate::nrOfBoundsDef(def, v) > BoundsEstimate::getBoundsLimit()
)
}
/**
* Holds if `e` is an expression for which widening should be used, as otherwise
* a very large number of bounds might be generated during the range analysis
* for `e`.
*/
private predicate exprHasTooManyBounds(Expr e) {
BoundsEstimate::nrOfBoundsExpr(e) > BoundsEstimate::getBoundsLimit()
or
// A subexpression of an expression with too many bounds may itself not have
// too many bounds. For instance, `x + y` can have too many bounds without `x`
// having them as well. But in these cases, we still want to consider `e` as having
// too many bounds since:
// - The overall result is widened anyway, so widening `e` as well is unlikely
// to cause further precision loss.
// - The number of bounds could be very large but still below the arbitrary
// limit. Hence widening `e` can improve performance.
exists(Expr pe | exprHasTooManyBounds(pe) and e.getParent() = pe)
}
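A hedged sketch of the straight-line growth this limit guards against (hypothetical code; the 2^40 threshold is the `getBoundsLimit` value above):

```c
/* Each conditional contributes two candidate bounds, and each addition
 * multiplies the estimates of its operands, so a long enough chain of
 * these pushes the estimate past 2^40 and widening kicks in. */
int chain(int c0, int c1, int c2) {
    int a = c0 ? 1 : 2;       /* ~2 bounds         */
    int b = a + (c1 ? 1 : 2); /* ~2 * 2 = 4 bounds */
    int c = b + (c2 ? 1 : 2); /* ~8 bounds, etc.   */
    return c;
}
```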
/**
* Holds if `binop` is a binary operation that's likely to be assigned a
* quadratic (or more) number of candidate bounds during the analysis. This can
@@ -655,13 +996,8 @@ private float getTruncatedLowerBounds(Expr expr) {
if exprMinVal(expr) <= newLB and newLB <= exprMaxVal(expr)
then
// Apply widening where we might get a combinatorial explosion.
if isRecursiveBinary(expr)
then
result =
max(float widenLB |
widenLB = wideningLowerBounds(expr.getUnspecifiedType()) and
not widenLB > newLB
)
if isRecursiveBinary(expr) or exprHasTooManyBounds(expr)
then result = widenLowerBound(expr.getUnspecifiedType(), newLB)
else result = newLB
else result = exprMinVal(expr)
) and
@@ -714,13 +1050,8 @@ private float getTruncatedUpperBounds(Expr expr) {
if exprMinVal(expr) <= newUB and newUB <= exprMaxVal(expr)
then
// Apply widening where we might get a combinatorial explosion.
if isRecursiveBinary(expr)
then
result =
min(float widenUB |
widenUB = wideningUpperBounds(expr.getUnspecifiedType()) and
not widenUB < newUB
)
if isRecursiveBinary(expr) or exprHasTooManyBounds(expr)
then result = widenUpperBound(expr.getUnspecifiedType(), newUB)
else result = newUB
else result = exprMaxVal(expr)
)
@@ -890,7 +1221,7 @@ private float getLowerBoundsImpl(Expr expr) {
// equal to `min(-y + 1,y - 1)`.
exists(float childLB |
childLB = getFullyConvertedLowerBounds(remExpr.getAnOperand()) and
not childLB >= 0
childLB < 0
|
result = getFullyConvertedLowerBounds(remExpr.getRightOperand()) - 1
or
@@ -1102,8 +1433,7 @@ private float getUpperBoundsImpl(Expr expr) {
// adding `-rhsLB` to the set of upper bounds.
exists(float rhsLB |
rhsLB = getFullyConvertedLowerBounds(remExpr.getRightOperand()) and
not rhsLB >= 0
|
rhsLB < 0 and
result = -rhsLB + 1
)
)
@@ -1248,8 +1578,7 @@ private float getPhiLowerBounds(StackVariable v, RangeSsaDefinition phi) {
exists(VariableAccess access, Expr guard, boolean branch, float defLB, float guardLB |
phi.isGuardPhi(v, access, guard, branch) and
lowerBoundFromGuard(guard, access, guardLB, branch) and
defLB = getFullyConvertedLowerBounds(access)
|
defLB = getFullyConvertedLowerBounds(access) and
// Compute the maximum of `guardLB` and `defLB`.
if guardLB > defLB then result = guardLB else result = defLB
)
@@ -1273,8 +1602,7 @@ private float getPhiUpperBounds(StackVariable v, RangeSsaDefinition phi) {
exists(VariableAccess access, Expr guard, boolean branch, float defUB, float guardUB |
phi.isGuardPhi(v, access, guard, branch) and
upperBoundFromGuard(guard, access, guardUB, branch) and
defUB = getFullyConvertedUpperBounds(access)
|
defUB = getFullyConvertedUpperBounds(access) and
// Compute the minimum of `guardUB` and `defUB`.
if guardUB < defUB then result = guardUB else result = defUB
)
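For readers unfamiliar with guard phi nodes, a hypothetical C example of what the two predicates above compute:

```c
int guarded(int x) {
    /* Suppose earlier analysis gives x the range [0, 100] (defLB = 0, defUB = 100). */
    if (x > 10) {
        /* On this branch the guard contributes guardLB = 11, so the guard
         * phi's lower bound is the maximum of guardLB and defLB, i.e. 11;
         * getPhiUpperBounds does the symmetric thing with the minimum. */
        return x;
    }
    return 0;
}
```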
@@ -1438,8 +1766,7 @@ private predicate upperBoundFromGuard(Expr guard, VariableAccess v, float ub, bo
}
/**
* This predicate simplifies the results returned by
* `linearBoundFromGuard`.
* This predicate simplifies the results returned by `linearBoundFromGuard`.
*/
private predicate boundFromGuard(
Expr guard, VariableAccess v, float boundValue, boolean isLowerBound,
@@ -1447,22 +1774,10 @@ private predicate boundFromGuard(
) {
exists(float p, float q, float r, boolean isLB |
linearBoundFromGuard(guard, v, p, q, r, isLB, strictness, branch) and
boundValue = (r - q) / p
|
boundValue = (r - q) / p and
// If the multiplier is negative then the direction of the comparison
// needs to be flipped.
p > 0 and isLowerBound = isLB
or
p < 0 and isLowerBound = isLB.booleanNot()
)
or
// When `!e` is true, we know that `0 <= e <= 0`
exists(float p, float q, Expr e |
linearAccess(e, v, p, q) and
eqZeroWithNegate(guard, e, true, branch) and
boundValue = (0.0 - q) / p and
isLowerBound = [false, true] and
strictness = Nonstrict()
if p < 0 then isLowerBound = isLB.booleanNot() else isLowerBound = isLB
)
}
@@ -1472,54 +1787,57 @@ private predicate boundFromGuard(
* lower or upper bound for `v`.
*/
private predicate linearBoundFromGuard(
ComparisonOperation guard, VariableAccess v, float p, float q, float boundValue,
Expr guard, VariableAccess v, float p, float q, float r,
boolean isLowerBound, // Is this a lower or an upper bound?
RelationStrictness strictness, boolean branch // Which control-flow branch is this bound valid on?
) {
// For the comparison x < RHS, we create two bounds:
//
// 1. x < upperbound(RHS)
// 2. x >= typeLowerBound(RHS.getUnspecifiedType())
//
exists(Expr lhs, Expr rhs, RelationDirection dir, RelationStrictness st |
linearAccess(lhs, v, p, q) and
relOpWithSwapAndNegate(guard, lhs, rhs, dir, st, branch)
|
isLowerBound = directionIsGreater(dir) and
strictness = st and
getBounds(rhs, boundValue, isLowerBound)
exists(Expr lhs | linearAccess(lhs, v, p, q) |
// For the comparison x < RHS, we create the following bounds:
// 1. x < upperbound(RHS)
// 2. x >= typeLowerBound(RHS.getUnspecifiedType())
exists(Expr rhs, RelationDirection dir, RelationStrictness st |
relOpWithSwapAndNegate(guard, lhs, rhs, dir, st, branch)
|
isLowerBound = directionIsGreater(dir) and
strictness = st and
r = getBounds(rhs, isLowerBound)
or
isLowerBound = directionIsLesser(dir) and
strictness = Nonstrict() and
r = getExprTypeBounds(rhs, isLowerBound)
)
or
isLowerBound = directionIsLesser(dir) and
strictness = Nonstrict() and
exprTypeBounds(rhs, boundValue, isLowerBound)
)
or
// For x == RHS, we create the following bounds:
//
// 1. x <= upperbound(RHS)
// 2. x >= lowerbound(RHS)
//
exists(Expr lhs, Expr rhs |
linearAccess(lhs, v, p, q) and
eqOpWithSwapAndNegate(guard, lhs, rhs, true, branch) and
getBounds(rhs, boundValue, isLowerBound) and
// For x == RHS, we create the following bounds:
// 1. x <= upperbound(RHS)
// 2. x >= lowerbound(RHS)
exists(Expr rhs |
eqOpWithSwapAndNegate(guard, lhs, rhs, true, branch) and
r = getBounds(rhs, isLowerBound) and
strictness = Nonstrict()
)
or
// When `x` is equal to 0 we create the following bounds:
// 1. x <= 0
// 2. x >= 0
eqZeroWithNegate(guard, lhs, true, branch) and
r = 0.0 and
isLowerBound = [false, true] and
strictness = Nonstrict()
)
// x != RHS and !x are handled elsewhere
}
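A hypothetical C illustration of the three guard shapes handled above (bounds shown are for the true branch):

```c
void guards(unsigned x, unsigned limit) {
    if (x < limit) {
        /* relational guard:
         *   x < upperbound(limit)
         *   x >= typeLowerBound(limit's type), i.e. x >= 0 here */
    }
    if (x == limit) {
        /* equality guard: lowerbound(limit) <= x <= upperbound(limit) */
    }
    if (!x) {
        /* zero test: 0 <= x <= 0 */
    }
}
```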
/** Get the fully converted lower or upper bounds of `expr` based on `isLowerBound`. */
private float getBounds(Expr expr, boolean isLowerBound) {
isLowerBound = true and result = getFullyConvertedLowerBounds(expr)
or
isLowerBound = false and result = getFullyConvertedUpperBounds(expr)
}
/** Utility for `linearBoundFromGuard`. */
private predicate getBounds(Expr expr, float boundValue, boolean isLowerBound) {
isLowerBound = true and boundValue = getFullyConvertedLowerBounds(expr)
private float getExprTypeBounds(Expr expr, boolean isLowerBound) {
isLowerBound = true and result = exprMinVal(expr.getFullyConverted())
or
isLowerBound = false and boundValue = getFullyConvertedUpperBounds(expr)
}
/** Utility for `linearBoundFromGuard`. */
private predicate exprTypeBounds(Expr expr, float boundValue, boolean isLowerBound) {
isLowerBound = true and boundValue = exprMinVal(expr.getFullyConverted())
or
isLowerBound = false and boundValue = exprMaxVal(expr.getFullyConverted())
isLowerBound = false and result = exprMaxVal(expr.getFullyConverted())
}
/**
@@ -1810,18 +2128,12 @@ module SimpleRangeAnalysisInternal {
|
// Widening: check whether the new lower bound is from a source which
// depends recursively on the current definition.
if isRecursiveDef(def, v)
if isRecursiveDef(def, v) or varHasTooManyBounds(v)
then
// The new lower bound is from a recursive source, so we round
// down to one of a limited set of values to prevent the
// recursion from exploding.
result =
max(float widenLB |
widenLB = wideningLowerBounds(getVariableRangeType(v)) and
not widenLB > truncatedLB
|
widenLB
)
result = widenLowerBound(getVariableRangeType(v), truncatedLB)
else result = truncatedLB
)
or
@@ -1840,18 +2152,12 @@ module SimpleRangeAnalysisInternal {
|
// Widening: check whether the new upper bound is from a source which
// depends recursively on the current definition.
if isRecursiveDef(def, v)
if isRecursiveDef(def, v) or varHasTooManyBounds(v)
then
// The new upper bound is from a recursive source, so we round
// up to one of a fixed set of values to prevent the recursion
// from exploding.
result =
min(float widenUB |
widenUB = wideningUpperBounds(getVariableRangeType(v)) and
not widenUB < truncatedUB
|
widenUB
)
result = widenUpperBound(getVariableRangeType(v), truncatedUB)
else result = truncatedUB
)
or
@@ -1859,4 +2165,60 @@ module SimpleRangeAnalysisInternal {
// bound is `typeUpperBound`.
defMightOverflowNegatively(def, v) and result = varMaxVal(v)
}
/** Gets the estimate of the number of bounds for `e`. */
float estimateNrOfBounds(Expr e) { result = BoundsEstimate::nrOfBoundsExpr(e) }
}
/** Provides predicates for debugging the simple range analysis library. */
private module Debug {
Locatable getRelevantLocatable() {
exists(string filepath, int startline |
result.getLocation().hasLocationInfo(filepath, startline, _, _, _) and
filepath.matches("%/test.c") and
startline = [621 .. 639]
)
}
float debugGetLowerBoundsImpl(Expr e) {
e = getRelevantLocatable() and
result = getLowerBoundsImpl(e)
}
float debugGetUpperBoundsImpl(Expr e) {
e = getRelevantLocatable() and
result = getUpperBoundsImpl(e)
}
/**
* Counts the number of lower bounds for a given expression. This predicate is
* useful for identifying performance issues in the range analysis.
*/
predicate countGetLowerBoundsImpl(Expr e, int n) {
e = getRelevantLocatable() and
n = strictcount(float lb | lb = getLowerBoundsImpl(e) | lb)
}
float debugNrOfBounds(Expr e) {
e = getRelevantLocatable() and
result = BoundsEstimate::nrOfBoundsExpr(e)
}
/**
* Finds any expressions for which `nrOfBounds` is not functional. The result
* should be empty, so this predicate is useful to debug non-functional cases.
*/
int nonFunctionalNrOfBounds(Expr e) {
strictcount(BoundsEstimate::nrOfBoundsExpr(e)) > 1 and
result = BoundsEstimate::nrOfBoundsExpr(e)
}
/**
* Holds if `e` is an expression that has a lower bound, but where
* `nrOfBounds` does not compute an estimate.
*/
predicate missingNrOfBounds(Expr e, float n) {
n = lowerBound(e) and
not exists(BoundsEstimate::nrOfBoundsExpr(e))
}
}

View File

@@ -1,3 +1,4 @@
/*- Compilations -*/
/**
@@ -47,6 +48,19 @@ compilation_args(
string arg : string ref
);
/**
* The expanded arguments that were passed to the extractor for a
* compiler invocation. This is similar to `compilation_args`, but
* for a `@someFile` argument, it includes the arguments from that
* file, rather than just taking the argument literally.
*/
#keyset[id, num]
compilation_expanded_args(
int id : @compilation ref,
int num : int ref,
string arg : string ref
);
/**
* Optionally, record the build mode for each compilation.
*/
@@ -1327,7 +1341,8 @@ specialnamequalifyingelements(
@namequalifiableelement = @expr | @namequalifier;
@namequalifyingelement = @namespace
| @specialnamequalifyingelement
| @usertype;
| @usertype
| @decltype;
namequalifiers(
unique int id: @namequalifier,
@@ -2364,6 +2379,24 @@ link_parent(
int link_target : @link_target ref
);
/**
* The CLI will automatically emit applicable tuples for this table,
* such as `databaseMetadata("isOverlay", "true")` when building an
* overlay database.
*/
databaseMetadata(
string metadataKey: string ref,
string value: string ref
);
/**
* The CLI will automatically emit tuples for each new/modified/deleted file
* when building an overlay database.
*/
overlayChangedFiles(
string path: string ref
);
/*- XML Files -*/
xmlEncoding(

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,2 @@
description: Add databaseMetadata and overlayChangedFiles relations
compatibility: full

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,2 @@
description: Support expanded compilation argument lists
compatibility: backwards

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,2 @@
description: Fix decltype qualifier issue
compatibility: full

View File

@@ -1,3 +1,11 @@
## 1.5.4
No user-facing changes.
## 1.5.3
No user-facing changes.
## 1.5.2
No user-facing changes.

View File

@@ -85,10 +85,8 @@ module OverflowDestinationConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) { none() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(FunctionCall fc | result = fc.getLocation() |
exists(FunctionCall fc | result = [fc.getLocation(), sink.getLocation()] |
sourceSized(fc, sink.asIndirectConvertedExpr())
)
}

View File

@@ -171,12 +171,10 @@ module NonConstFlowConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) { none() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
result = sink.getLocation()
or
exists(FormattingFunctionCall call, Expr formatString | result = call.getLocation() |
exists(FormattingFunctionCall call, Expr formatString |
result = [call.getLocation(), sink.getLocation()]
|
isSinkImpl(sink, formatString) and
call.getArgument(call.getFormatParameterIndex()) = formatString
)

View File

@@ -155,7 +155,7 @@ module ExecTaintConfig implements DataFlow::StateConfigSig {
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(DataFlow::Node concatResult, Expr command, ExecState state |
result = [concatResult.getLocation(), command.getLocation()] and
result = [concatResult.getLocation(), command.getLocation(), sink.getLocation()] and
isSink(sink, state) and
isSinkImpl(sink, command, _) and
concatResult = state.getOutgoingNode()

View File

@@ -58,7 +58,9 @@ module SqlTaintedConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(Expr taintedArg | result = taintedArg.getLocation() | taintedArg = asSinkExpr(sink))
exists(Expr taintedArg | result = [taintedArg.getLocation(), sink.getLocation()] |
taintedArg = asSinkExpr(sink)
)
}
}

View File

@@ -128,7 +128,7 @@ module Config implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(BufferWrite bw | result = bw.getLocation() | isSink(sink, bw, _))
exists(BufferWrite bw | result = [bw.getLocation(), sink.getLocation()] | isSink(sink, bw, _))
}
}

View File

@@ -124,7 +124,8 @@ module UncontrolledArithConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) {
result = getExpr(source).getLocation()
isSource(source) and
result = [getExpr(source).getLocation(), source.getLocation()]
}
}

View File

@@ -95,7 +95,7 @@ module TaintedAllocationSizeConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(Expr alloc | result = alloc.getLocation() | allocSink(alloc, sink))
exists(Expr alloc | result = [alloc.getLocation(), sink.getLocation()] | allocSink(alloc, sink))
}
}

View File

@@ -76,7 +76,9 @@ module Config implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(Expr condition | result = condition.getLocation() | isSink(sink, condition))
exists(Expr condition | result = [condition.getLocation(), sink.getLocation()] |
isSink(sink, condition)
)
}
}

View File

@@ -51,7 +51,9 @@ module ToBufferConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(SensitiveBufferWrite w | result = w.getLocation() | isSinkImpl(sink, w))
exists(SensitiveBufferWrite w | result = [w.getLocation(), sink.getLocation()] |
isSinkImpl(sink, w)
)
}
}

View File

@@ -35,11 +35,13 @@ module FromSensitiveConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node sourceNode) {
exists(SensitiveExpr source | result = source.getLocation() | isSourceImpl(sourceNode, source))
exists(SensitiveExpr source | result = [source.getLocation(), sourceNode.getLocation()] |
isSourceImpl(sourceNode, source)
)
}
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(FileWrite w | result = w.getLocation() | isSinkImpl(sink, w, _))
exists(FileWrite w | result = [w.getLocation(), sink.getLocation()] | isSinkImpl(sink, w, _))
}
}

View File

@@ -249,7 +249,9 @@ module FromSensitiveConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(NetworkSendRecv networkSendRecv | result = networkSendRecv.getLocation() |
exists(NetworkSendRecv networkSendRecv |
result = [networkSendRecv.getLocation(), sink.getLocation()]
|
isSinkSendRecv(sink, networkSendRecv)
)
}

View File

@@ -127,13 +127,13 @@ module FromSensitiveConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) {
exists(SensitiveExpr sensitive | result = sensitive.getLocation() |
exists(SensitiveExpr sensitive | result = [sensitive.getLocation(), source.getLocation()] |
isSourceImpl(source, sensitive)
)
}
Location getASelectedSinkLocation(DataFlow::Node sink) {
exists(SqliteFunctionCall sqliteCall | result = sqliteCall.getLocation() |
exists(SqliteFunctionCall sqliteCall | result = [sqliteCall.getLocation(), sink.getLocation()] |
isSinkImpl(sink, sqliteCall, _)
)
}

View File

@@ -91,10 +91,9 @@ module HttpStringToUrlOpenConfig implements DataFlow::ConfigSig {
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) {
result = source.asIndirectExpr().getLocation()
isSource(source) and
result = [source.asIndirectExpr().getLocation(), source.getLocation()]
}
Location getASelectedSinkLocation(DataFlow::Node sink) { none() }
}
module HttpStringToUrlOpen = TaintTracking::Global<HttpStringToUrlOpenConfig>;

View File

@@ -3,11 +3,15 @@
"qhelp.dtd">
<qhelp>
<overview>
<p>Using broken or weak cryptographic algorithms can leave data vulnerable to being decrypted.</p>
<p>Using broken or weak cryptographic algorithms may compromise security guarantees such as confidentiality, integrity, and authenticity.</p>
<p>Many cryptographic algorithms provided by cryptography libraries are known to be weak, or
flawed. Using such an algorithm means that an attacker may be able to easily decrypt the encrypted
data.</p>
<p>Many cryptographic algorithms are known to be weak or flawed. The security guarantees of a system often rely on the underlying cryptography, so using a weak algorithm can have severe consequences. For example:
</p>
<ul>
<li>If a weak encryption algorithm is used, an attacker may be able to decrypt sensitive data.</li>
<li>If a weak hashing algorithm is used to protect data integrity, an attacker may be able to craft a malicious input that has the same hash as a benign one.</li>
<li>If a weak algorithm is used for digital signatures, an attacker may be able to forge signatures and impersonate legitimate users.</li>
</ul>
</overview>
<recommendation>

View File

@@ -0,0 +1,3 @@
## 1.5.3
No user-facing changes.

View File

@@ -0,0 +1,3 @@
## 1.5.4
No user-facing changes.

View File

@@ -1,2 +1,2 @@
---
lastReleaseVersion: 1.5.2
lastReleaseVersion: 1.5.4

View File

@@ -50,8 +50,6 @@ module WordexpTaintConfig implements DataFlow::ConfigSig {
}
predicate observeDiffInformedIncrementalMode() { any() }
Location getASelectedSourceLocation(DataFlow::Node source) { none() }
}
module WordexpTaint = TaintTracking::Global<WordexpTaintConfig>;

View File

@@ -1,5 +1,5 @@
/**
* @name Dangerous use convert function.
* @name Dangerous use convert function
* @description Using convert function with an invalid length argument can result in an out-of-bounds access error or unexpected result.
* @kind problem
* @id cpp/dangerous-use-convert-function

View File

@@ -1,5 +1,5 @@
/**
* @name Dangerous use of transformation after operation.
* @name Dangerous use of transformation after operation
* @description By using the transformation after the operation, you are doing a pointless and dangerous action.
* @kind problem
* @id cpp/dangerous-use-of-transformation-after-operation

Some files were not shown because too many files have changed in this diff Show More