Merge pull request #1016 from dave-bartolomeo/dave/PreciseDefs

C++: SSA flow through fields and imprecise defs
This commit is contained in:
Jonas Jensen
2019-05-03 08:12:13 +02:00
committed by GitHub
23 changed files with 4089 additions and 2467 deletions

View File

@@ -0,0 +1,144 @@
# IR SSA Construction
This document describes how Static Single Assignment (SSA) form is constructed for the Intermediate
Representation (IR). The SSA form that we use is based on the traditional [SSA](https://en.wikipedia.org/wiki/Static_single_assignment_form)
commonly used in compilers, with additional extensions to support accesses to aliased memory
inspired by [ChowCLLS96](https://link.springer.com/chapter/10.1007%2F3-540-61053-7_66).
SSA construction takes as input an instance of the IR, and creates a new instance of the IR that is
in SSA form. If the input IR is already in SSA form, SSA construction will still recompute SSA form
from scratch. However, the input SSA information will be taken into account to improve the alias
analysis that guides the new SSA computation. The current implementation creates three successive
instances of the IR:
- *Raw IR* is constructed directly from the original AST. Raw IR does not have any of its memory
accesses in SSA form.
- *Unaliased SSA IR* is constructed from Raw IR. It places memory accesses in SSA form only for
accesses to unescaped local variables that are loaded or stored in their entirety, and as their
declared type. Accesses to aliased memory are not modeled, nor are accesses to variables that have
any partial reads or writes.
- *Aliased SSA IR* is constructed from Unaliased SSA IR. All memory accesses are placed in SSA form,
including accesses to aliased memory.
Constructing SSA form involves three steps in succession: Alias analysis, the memory model, and
the actual SSA construction itself. Each step is a module that is parameterized on an implementation
of the previous step, so the memory model and alias analysis modules can be replaced in order to
provide different analysis heuristics or performance/precision tradeoffs.
## Alias Analysis
The alias analysis component is responsible for determining two closely related sets of facts about
the input IR: What memory is being accessed by each memory operand or memory result, and which
variables "escape" such that the analysis can no longer precisely track all accesses to those
variables. This information is consumed by the memory model component, but is not consumed directly
by the actual SSA construction.
The current alias analysis exposes two predicates:
```
predicate resultPointsTo(Instruction instr, IRVariable var, IntValue bitOffset);
predicate variableAddressEscapes(IRVariable var);
```
The `resultPointsTo` predicate computes, for each `Instruction`, the `IRVariable` that is pointed
into by the result of that `Instruction`, and the bit offset that the result of the `Instruction`
points to within that variable. If it cannot prove that the result points into exactly one
`IRVariable`, then the predicate does not hold. If the result is known to point into a specific
`IRVariable`, but the offset is unknown, then the predicate will hold, but the `bitOffset` parameter
will be `Ints::unknown()`. This is useful for cases such as array accesses, where the array index
may be computed at runtime, but the result is known to point to some element of the array, rather
than to some arbitrary unknown location.
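As a rough illustration of these three outcomes, here is a minimal Python sketch (all names here are hypothetical; the real analysis is written in QL and `None` stands in for `Ints::unknown()`):

```python
UNKNOWN = None  # stands in for Ints::unknown()

def result_points_to(base_var, base_offset_bits, index_offset_bits):
    """Model the result of an address computation like `&arr[i]`: the
    variable is known, but if the index is computed at runtime the bit
    offset collapses to unknown rather than the predicate failing."""
    if index_offset_bits is UNKNOWN:
        return (base_var, UNKNOWN)  # known variable, unknown offset
    return (base_var, base_offset_bits + index_offset_bits)

# &arr[2] with 32-bit elements: known variable, known offset of 64 bits
print(result_points_to("arr", 0, 2 * 32))   # ('arr', 64)
# &arr[i] with a runtime index: known variable, unknown offset
print(result_points_to("arr", 0, UNKNOWN))  # ('arr', None)
```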
The `variableAddressEscapes` predicate computes the set of `IRVariable`s whose address "escapes". A
variable's address escapes if there is a possibility that there exists a memory access somewhere in
the program that accesses the variable, without that access being modeled by the `resultPointsTo`
predicate. Common reasons for a variable's address escaping include:
- The address is assigned into a global variable, heap memory, or some other location where code may
be able to later dereference the address outside the scope of the `resultPointsTo` analysis.
- The address is passed as an argument to a function, unless the called function is known not to
retain that address after it returns.
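The two escape causes above can be sketched as a toy classifier over a list of uses of a variable's address; everything here (the tuple encoding, the callee whitelist) is a hypothetical illustration, not the real implementation:

```python
# Callees assumed not to retain pointer arguments after returning.
NON_RETAINING_CALLEES = {"memset", "strlen"}

def address_escapes(uses):
    """Return True if any use of the address puts it beyond the reach of
    the `resultPointsTo` analysis."""
    for kind, detail in uses:
        if kind == "store_to_global":
            return True  # address saved where arbitrary code may reload it
        if kind == "call_arg" and detail not in NON_RETAINING_CALLEES:
            return True  # callee might retain the address
    return False

print(address_escapes([("call_arg", "memset")]))        # False
print(address_escapes([("call_arg", "save_pointer")]))  # True
```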
### Current Implementation
The current alias analysis implementation can track the pointed-to variable and offset through
copies, pointer arithmetic, and field offset computations. If the input IR is already in SSA form,
even an address assigned to a local variable can be tracked.
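The tracking through copies, pointer arithmetic, and field offsets can be sketched as offset propagation over a hypothetical instruction encoding (again illustrative only; the real analysis works on IR instructions):

```python
def track(base, instructions):
    """Propagate a (variable, bit offset) pair through a chain of address
    computations. `None` models an unknown offset."""
    var, off = base
    for kind, delta_bits in instructions:
        if kind == "copy":
            continue  # copies preserve the pointed-to location
        elif kind in ("ptr_add", "field_offset"):
            # Adding an unknown amount yields a known variable, unknown offset.
            off = None if (off is None or delta_bits is None) else off + delta_bits
        else:
            return None  # untracked operation: give up
    return (var, off)

# &s.y where y sits 32 bits into `s`, reached via a copied pointer
print(track(("s", 0), [("copy", None), ("field_offset", 32)]))  # ('s', 32)
```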
## Memory Model
The memory model uses the results of alias analysis to describe the memory location accessed by each
memory operand or memory result in the function. It exposes two classes and three non-member
predicates:
```
class MemoryLocation {
VirtualVariable getVirtualVariable();
}
class VirtualVariable extends MemoryLocation {
}
MemoryLocation getResultMemoryLocation(Instruction instr);
MemoryLocation getOperandMemoryLocation(MemoryOperand operand);
Overlap getOverlap(MemoryLocation def, MemoryLocation use);
```
A `MemoryLocation` represents the set of bits of memory read by a memory operand or written by a
memory result. The `getResultMemoryLocation` predicate returns the `MemoryLocation` written by the
result of the specified `Instruction`, and the `getOperandMemoryLocation` predicate returns the
`MemoryLocation` read by the specified `MemoryOperand`. From the point of view of the SSA
construction module, which consumes the memory model, `MemoryLocation` is essentially opaque. The
memory model can assign `MemoryLocation`s to memory accesses however it wants, as long as the few
basic constraints outlined later in this section are respected.
The `getOverlap` predicate returns the overlap relationship between a definition of location `def`
and a use of the location `use`. The possible overlap relationships are as follows:
- `MustExactlyOverlap` - The set of bits written by the definition is identical to the set of bits
read by the use, *and* the data type of both the definition and the use are the same.
- `MustTotallyOverlap` - Either the set of bits written by the definition is a proper superset of
the bits read by the use, or the set of bits written by the definition is identical to that of the
use, but the data type of the definition differs from that of the use.
- `MayPartiallyOverlap` - Neither of the two relationships above apply, but there may be at least
one bit written by the definition that is read by the use. `MayPartiallyOverlap` is always a sound
result, because it is technically correct even if the actual overlap at runtime is exact, total, or
even no overlap at all.
- (No result) - The definition does not overlap the use at all.
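Ignoring types and unknown offsets for the moment, the classification of two known bit ranges can be sketched as follows (a simplified model, not the library's `Interval::getOverlap`):

```python
def interval_overlap(def_start, def_end, use_start, use_end):
    """Classify how a definition's bit range relates to a use's bit range,
    mirroring the overlap kinds above. Types are ignored here, so a real
    MustExactlyOverlap additionally requires matching types."""
    if (def_start, def_end) == (use_start, use_end):
        return "MustExactlyOverlap"
    if def_start <= use_start and use_end <= def_end:
        return "MustTotallyOverlap"   # def writes a superset of the use
    if def_start < use_end and use_start < def_end:
        return "MayPartiallyOverlap"  # ranges intersect without containment
    return None                       # disjoint ranges: no overlap

print(interval_overlap(0, 64, 0, 64))   # MustExactlyOverlap
print(interval_overlap(0, 64, 32, 64))  # MustTotallyOverlap
print(interval_overlap(0, 48, 32, 96))  # MayPartiallyOverlap
```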
Each `MemoryLocation` is associated with exactly one `VirtualVariable`. A `VirtualVariable`
represents a set of `MemoryLocation`s such that any two `MemoryLocation`s that overlap have the same
`VirtualVariable`. Note that each `VirtualVariable` is itself a `MemoryLocation` that totally
overlaps each of its member `MemoryLocation`s. `VirtualVariable`s are used in SSA construction to
simplify the problem of matching uses with definitions, by partitioning memory locations into
groups that do not overlap with one another.
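The invariant — any two overlapping locations share a `VirtualVariable` — amounts to grouping locations into connected components of the overlap relation. A small union-find sketch over a hypothetical `(variable, start_bit, end_bit)` encoding:

```python
def partition(locations):
    """Group locations so that any two overlapping ones share a
    representative; each representative plays the role of a VirtualVariable."""
    parent = {loc: loc for loc in locations}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    def overlap(a, b):
        # Same variable and intersecting bit ranges.
        return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

    for a in locations:
        for b in locations:
            if overlap(a, b):
                parent[find(a)] = find(b)
    return {loc: find(loc) for loc in locations}

groups = partition([("x", 0, 32), ("x", 16, 64), ("y", 0, 32)])
# x's two overlapping slices share a representative; y is separate
print(groups[("x", 0, 32)] == groups[("x", 16, 64)])  # True
print(groups[("x", 0, 32)] == groups[("y", 0, 32)])   # False
```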
### Current Implementation
#### Unaliased SSA
The current memory model used to construct Unaliased SSA models only variables that are unescaped,
and always accessed in their entirety via their declared type. There is one `MemoryLocation` for
each unescaped `IRVariable`, and each `MemoryLocation` is its own `VirtualVariable`. The overlap
relationship is simple: Each `MemoryLocation` exactly overlaps itself, and does not overlap any
other `MemoryLocation`.
#### Aliased SSA
The current memory model used to construct Aliased SSA models every memory access. There are two
kinds of `MemoryLocation`:
- `VariableMemoryLocation` represents an access to a known `IRVariable` with a specific type, at a bit
offset that may or may not be a known constant. `VariableMemoryLocation` represents any access to a
known `IRVariable` even if that variable's address escapes.
- `UnknownMemoryLocation` represents an access where the memory being accessed is not known to be part
of a single specific `IRVariable`.
In addition, there are two different kinds of `VirtualVariable`:
- `VariableVirtualVariable` represents an `IRVariable` whose address does not escape. The
`VariableVirtualVariable` is just the `VariableMemoryLocation` that represents an access to the entire
`IRVariable` with its declared type.
- `UnknownVirtualVariable` represents all memory that is not covered by a `VariableVirtualVariable`.
This includes the `UnknownMemoryLocation`, as well as any `VariableMemoryLocation` whose
`IRVariable`'s address escapes.
The overlap relationship for this model is slightly more complex than that of Unaliased SSA. A
definition of a `VariableMemoryLocation` overlaps a use of another `VariableMemoryLocation` if both
locations have the same `IRVariable` and the offset ranges overlap. The overlap kind is determined
based on the overlap of the offset ranges, and may be any of the three overlap kinds, or no overlap
at all if the offset ranges are disjoint. A definition of a `VariableMemoryLocation` overlaps a use
of the `UnknownMemoryLocation` (or vice versa) if and only if the address of the `IRVariable`
escapes; this is a `MayPartiallyOverlap` relationship.
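Putting the pieces together, a hedged sketch of these Aliased SSA overlap rules over a hypothetical `(variable, start_bit, end_bit, type)` encoding (it omits the virtual-variable grouping and the rule for definitions that cover the entire variable):

```python
UNKNOWN_LOC = ("<unknown>",)  # models UnknownMemoryLocation
ESCAPED = {"g"}               # assume variable `g`'s address escapes

def get_overlap(d, u):
    # A variable location overlaps the unknown location (or vice versa)
    # only if the variable's address escapes, and then only partially.
    if UNKNOWN_LOC in (d, u):
        other = u if d == UNKNOWN_LOC else d
        if other == UNKNOWN_LOC or other[0] in ESCAPED:
            return "MayPartiallyOverlap"
        return None
    var_d, start_d, end_d, type_d = d
    var_u, start_u, end_u, type_u = u
    if var_d != var_u:
        return None  # different variables never overlap
    if (start_d, end_d) == (start_u, end_u):
        # Identical ranges: exact only when the types also match.
        return "MustExactlyOverlap" if type_d == type_u else "MustTotallyOverlap"
    if start_d <= start_u and end_u <= end_d:
        return "MustTotallyOverlap"
    if start_d < end_u and start_u < end_d:
        return "MayPartiallyOverlap"
    return None

print(get_overlap(("g", 0, 64, "S"), UNKNOWN_LOC))              # MayPartiallyOverlap
print(get_overlap(("x", 0, 32, "int"), ("x", 0, 32, "float")))  # MustTotallyOverlap
```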

View File

@@ -50,11 +50,17 @@ module InstructionSanity {
/**
* Holds if instruction `instr` is missing an expected operand with tag `tag`.
*/
query predicate missingOperand(Instruction instr, OperandTag tag) {
expectsOperand(instr, tag) and
not exists(NonPhiOperand operand |
operand = instr.getAnOperand() and
operand.getOperandTag() = tag
query predicate missingOperand(Instruction instr, string message, IRFunction func, string funcText) {
exists(OperandTag tag |
expectsOperand(instr, tag) and
not exists(NonPhiOperand operand |
operand = instr.getAnOperand() and
operand.getOperandTag() = tag
) and
message = "Instruction '" + instr.getOpcode().toString() + "' is missing an expected operand with tag '" +
tag.toString() + "' in function '$@'." and
func = instr.getEnclosingIRFunction() and
funcText = getIdentityString(func.getFunction())
)
}
@@ -302,7 +308,7 @@ class Instruction extends Construction::TInstruction {
result = type
}
private string getResultTypeString() {
string getResultTypeString() {
exists(string valcat |
valcat = getValueCategoryString(getResultType().toString()) and
if (getResultType() instanceof UnknownType and

View File

@@ -3,14 +3,18 @@ import Instruction
import IRBlock
import cpp
import semmle.code.cpp.ir.implementation.MemoryAccessKind
import semmle.code.cpp.ir.internal.Overlap
private import semmle.code.cpp.ir.internal.OperandTag
private newtype TOperand =
TNonPhiOperand(Instruction useInstr, OperandTag tag, Instruction defInstr) {
defInstr = Construction::getInstructionOperandDefinition(useInstr, tag)
TRegisterOperand(Instruction useInstr, RegisterOperandTag tag, Instruction defInstr) {
defInstr = Construction::getRegisterOperandDefinition(useInstr, tag)
} or
TPhiOperand(PhiInstruction useInstr, Instruction defInstr, IRBlock predecessorBlock) {
defInstr = Construction::getPhiInstructionOperandDefinition(useInstr, predecessorBlock)
TNonPhiMemoryOperand(Instruction useInstr, MemoryOperandTag tag, Instruction defInstr, Overlap overlap) {
defInstr = Construction::getMemoryOperandDefinition(useInstr, tag, overlap)
} or
TPhiOperand(PhiInstruction useInstr, Instruction defInstr, IRBlock predecessorBlock, Overlap overlap) {
defInstr = Construction::getPhiOperandDefinition(useInstr, predecessorBlock, overlap)
}
/**
@@ -43,6 +47,20 @@ class Operand extends TOperand {
none()
}
/**
* Gets the overlap relationship between the operand's definition and its use.
*/
Overlap getDefinitionOverlap() {
none()
}
/**
* Holds if the result of the definition instruction does not exactly overlap this use.
*/
final predicate isDefinitionInexact() {
not getDefinitionOverlap() instanceof MustExactlyOverlap
}
/**
* Gets a prefix to use when dumping the operand in an operand list.
*/
@@ -58,7 +76,19 @@ class Operand extends TOperand {
* For example: `this:r3_5`
*/
final string getDumpString() {
result = getDumpLabel() + getDefinitionInstruction().getResultId()
result = getDumpLabel() + getInexactSpecifier() + getDefinitionInstruction().getResultId()
}
/**
* Gets a string prefix to prepend to the operand's definition ID in an IR dump, specifying whether the operand is
* an exact or inexact use of its definition. For an inexact use, the prefix is "~". For an exact use, the prefix is
* the empty string.
*/
private string getInexactSpecifier() {
if isDefinitionInexact() then
result = "~"
else
result = ""
}
/**
@@ -104,10 +134,8 @@ class Operand extends TOperand {
*/
class MemoryOperand extends Operand {
MemoryOperand() {
exists(MemoryOperandTag tag |
this = TNonPhiOperand(_, tag, _)
) or
this = TPhiOperand(_, _, _)
this = TNonPhiMemoryOperand(_, _, _, _) or
this = TPhiOperand(_, _, _, _)
}
override predicate isGLValue() {
@@ -133,27 +161,17 @@ class MemoryOperand extends Operand {
}
}
/**
* An operand that consumes a register (non-memory) result.
*/
class RegisterOperand extends Operand {
RegisterOperand() {
exists(RegisterOperandTag tag |
this = TNonPhiOperand(_, tag, _)
)
}
}
/**
* An operand that is not an operand of a `PhiInstruction`.
*/
class NonPhiOperand extends Operand, TNonPhiOperand {
class NonPhiOperand extends Operand {
Instruction useInstr;
Instruction defInstr;
OperandTag tag;
NonPhiOperand() {
this = TNonPhiOperand(useInstr, tag, defInstr)
this = TRegisterOperand(useInstr, tag, defInstr) or
this = TNonPhiMemoryOperand(useInstr, tag, defInstr, _)
}
override final Instruction getUseInstruction() {
@@ -177,7 +195,32 @@ class NonPhiOperand extends Operand, TNonPhiOperand {
}
}
class TypedOperand extends NonPhiOperand, MemoryOperand {
/**
* An operand that consumes a register (non-memory) result.
*/
class RegisterOperand extends NonPhiOperand, TRegisterOperand {
override RegisterOperandTag tag;
override final Overlap getDefinitionOverlap() {
// All register results overlap exactly with their uses.
result instanceof MustExactlyOverlap
}
}
class NonPhiMemoryOperand extends NonPhiOperand, MemoryOperand, TNonPhiMemoryOperand {
override MemoryOperandTag tag;
Overlap overlap;
NonPhiMemoryOperand() {
this = TNonPhiMemoryOperand(useInstr, tag, defInstr, overlap)
}
override final Overlap getDefinitionOverlap() {
result = overlap
}
}
class TypedOperand extends NonPhiMemoryOperand {
override TypedOperandTag tag;
override final Type getType() {
@@ -189,7 +232,7 @@ class TypedOperand extends NonPhiOperand, MemoryOperand {
* The address operand of an instruction that loads or stores a value from
* memory (e.g. `Load`, `Store`).
*/
class AddressOperand extends NonPhiOperand, RegisterOperand {
class AddressOperand extends RegisterOperand {
override AddressOperandTag tag;
override string toString() {
@@ -216,7 +259,7 @@ class LoadOperand extends TypedOperand {
/**
* The source value operand of a `Store` instruction.
*/
class StoreValueOperand extends NonPhiOperand, RegisterOperand {
class StoreValueOperand extends RegisterOperand {
override StoreValueOperandTag tag;
override string toString() {
@@ -227,7 +270,7 @@ class StoreValueOperand extends NonPhiOperand, RegisterOperand {
/**
* The sole operand of a unary instruction (e.g. `Convert`, `Negate`, `Copy`).
*/
class UnaryOperand extends NonPhiOperand, RegisterOperand {
class UnaryOperand extends RegisterOperand {
override UnaryOperandTag tag;
override string toString() {
@@ -238,7 +281,7 @@ class UnaryOperand extends NonPhiOperand, RegisterOperand {
/**
* The left operand of a binary instruction (e.g. `Add`, `CompareEQ`).
*/
class LeftOperand extends NonPhiOperand, RegisterOperand {
class LeftOperand extends RegisterOperand {
override LeftOperandTag tag;
override string toString() {
@@ -249,7 +292,7 @@ class LeftOperand extends NonPhiOperand, RegisterOperand {
/**
* The right operand of a binary instruction (e.g. `Add`, `CompareEQ`).
*/
class RightOperand extends NonPhiOperand, RegisterOperand {
class RightOperand extends RegisterOperand {
override RightOperandTag tag;
override string toString() {
@@ -260,7 +303,7 @@ class RightOperand extends NonPhiOperand, RegisterOperand {
/**
* The condition operand of a `ConditionalBranch` or `Switch` instruction.
*/
class ConditionOperand extends NonPhiOperand, RegisterOperand {
class ConditionOperand extends RegisterOperand {
override ConditionOperandTag tag;
override string toString() {
@@ -272,7 +315,7 @@ class ConditionOperand extends NonPhiOperand, RegisterOperand {
* An operand of the special `UnmodeledUse` instruction, representing a value
* whose set of uses is unknown.
*/
class UnmodeledUseOperand extends NonPhiOperand, MemoryOperand {
class UnmodeledUseOperand extends NonPhiMemoryOperand {
override UnmodeledUseOperandTag tag;
override string toString() {
@@ -287,7 +330,7 @@ class UnmodeledUseOperand extends NonPhiOperand, MemoryOperand {
/**
* The operand representing the target function of a `Call` instruction.
*/
class CallTargetOperand extends NonPhiOperand, RegisterOperand {
class CallTargetOperand extends RegisterOperand {
override CallTargetOperandTag tag;
override string toString() {
@@ -300,7 +343,7 @@ class CallTargetOperand extends NonPhiOperand, RegisterOperand {
* positional arguments (represented by `PositionalArgumentOperand`) and the
* implicit `this` argument, if any (represented by `ThisArgumentOperand`).
*/
class ArgumentOperand extends NonPhiOperand, RegisterOperand {
class ArgumentOperand extends RegisterOperand {
override ArgumentOperandTag tag;
}
@@ -383,9 +426,10 @@ class PhiInputOperand extends MemoryOperand, TPhiOperand {
PhiInstruction useInstr;
Instruction defInstr;
IRBlock predecessorBlock;
Overlap overlap;
PhiInputOperand() {
this = TPhiOperand(useInstr, defInstr, predecessorBlock)
this = TPhiOperand(useInstr, defInstr, predecessorBlock, overlap)
}
override string toString() {
@@ -400,6 +444,10 @@ class PhiInputOperand extends MemoryOperand, TPhiOperand {
result = defInstr
}
override final Overlap getDefinitionOverlap() {
result = overlap
}
override final int getDumpSortOrder() {
result = 11 + getPredecessorBlock().getDisplayIndex()
}
@@ -423,10 +471,8 @@ class PhiInputOperand extends MemoryOperand, TPhiOperand {
/**
* The total operand of a Chi node, representing the previous value of the memory.
*/
class ChiTotalOperand extends MemoryOperand {
ChiTotalOperand() {
this = TNonPhiOperand(_, chiTotalOperand(), _)
}
class ChiTotalOperand extends NonPhiMemoryOperand {
override ChiTotalOperandTag tag;
override string toString() {
result = "ChiTotal"
@@ -441,10 +487,8 @@ class ChiTotalOperand extends MemoryOperand {
/**
* The partial operand of a Chi node, representing the value being written to part of the memory.
*/
class ChiPartialOperand extends MemoryOperand {
ChiPartialOperand() {
this = TNonPhiOperand(_, chiPartialOperand(), _)
}
class ChiPartialOperand extends NonPhiMemoryOperand {
override ChiPartialOperandTag tag;
override string toString() {
result = "ChiPartial"

View File

@@ -1,6 +1,7 @@
import cpp
import AliasAnalysis
import semmle.code.cpp.ir.internal.Overlap
private import semmle.code.cpp.Print
private import semmle.code.cpp.ir.implementation.unaliased_ssa.IR
private import semmle.code.cpp.ir.internal.IntegerConstant as Ints
private import semmle.code.cpp.ir.internal.IntegerInterval as Interval
@@ -8,167 +9,82 @@ private import semmle.code.cpp.ir.internal.OperandTag
private class IntValue = Ints::IntValue;
private newtype TVirtualVariable =
TVirtualIRVariable(IRVariable var) {
not variableAddressEscapes(var)
} or
TUnknownVirtualVariable(IRFunction f)
private VirtualIRVariable getVirtualVariable(IRVariable var) {
result.getIRVariable() = var
}
private UnknownVirtualVariable getUnknownVirtualVariable(IRFunction f) {
result.getEnclosingIRFunction() = f
}
class VirtualVariable extends TVirtualVariable {
string toString() {
none()
}
string getUniqueId() {
none()
}
Type getType() {
none()
}
}
/**
* A virtual variable representing a single non-escaped `IRVariable`.
*/
class VirtualIRVariable extends VirtualVariable, TVirtualIRVariable {
IRVariable var;
VirtualIRVariable() {
this = TVirtualIRVariable(var)
}
override final string toString() {
result = var.toString()
}
final IRVariable getIRVariable() {
result = var
}
override final Type getType() {
result = var.getType()
}
override final string getUniqueId() {
result = var.getUniqueId()
}
}
/**
* A virtual variable representing all escaped memory accessible by the function,
* including escaped local variables.
*/
class UnknownVirtualVariable extends VirtualVariable, TUnknownVirtualVariable {
IRFunction f;
UnknownVirtualVariable() {
this = TUnknownVirtualVariable(f)
}
override final string toString() {
result = "UnknownVvar(" + f + ")"
}
override final string getUniqueId() {
result = "UnknownVvar(" + f + ")"
}
override final Type getType() {
result instanceof UnknownType
}
final IRFunction getEnclosingIRFunction() {
result = f
}
}
private predicate hasResultMemoryAccess(Instruction instr, IRVariable var, IntValue startBitOffset,
private predicate hasResultMemoryAccess(Instruction instr, IRVariable var, Type type, IntValue startBitOffset,
IntValue endBitOffset) {
resultPointsTo(instr.getResultAddressOperand().getDefinitionInstruction(), var, startBitOffset) and
type = instr.getResultType() and
if exists(instr.getResultSize()) then
endBitOffset = Ints::add(startBitOffset, Ints::mul(instr.getResultSize(), 8))
else
endBitOffset = Ints::unknown()
}
private predicate hasOperandMemoryAccess(MemoryOperand operand, IRVariable var, IntValue startBitOffset,
private predicate hasOperandMemoryAccess(MemoryOperand operand, IRVariable var, Type type, IntValue startBitOffset,
IntValue endBitOffset) {
resultPointsTo(operand.getAddressOperand().getDefinitionInstruction(), var, startBitOffset) and
type = operand.getType() and
if exists(operand.getSize()) then
endBitOffset = Ints::add(startBitOffset, Ints::mul(operand.getSize(), 8))
else
endBitOffset = Ints::unknown()
}
private newtype TMemoryAccess =
TVariableMemoryAccess(IRVariable var, IntValue startBitOffset, IntValue endBitOffset) {
hasResultMemoryAccess(_, var, startBitOffset, endBitOffset) or
hasOperandMemoryAccess(_, var, startBitOffset, endBitOffset)
private newtype TMemoryLocation =
TVariableMemoryLocation(IRVariable var, Type type, IntValue startBitOffset, IntValue endBitOffset) {
hasResultMemoryAccess(_, var, type, startBitOffset, endBitOffset) or
hasOperandMemoryAccess(_, var, type, startBitOffset, endBitOffset)
}
or
TUnknownMemoryAccess(UnknownVirtualVariable uvv) or
TTotalUnknownMemoryAccess(UnknownVirtualVariable uvv)
TUnknownMemoryLocation(IRFunction irFunc) or
TUnknownVirtualVariable(IRFunction irFunc)
private VariableMemoryAccess getVariableMemoryAccess(IRVariable var, IntValue startBitOffset, IntValue endBitOffset) {
result = TVariableMemoryAccess(var, startBitOffset, endBitOffset)
/**
* Represents the memory location accessed by a memory operand or memory result. In this implementation, the location is
* one of the following:
* - `VariableMemoryLocation` - A location within a known `IRVariable`, at an offset that is either a constant or is
* unknown.
* - `UnknownMemoryLocation` - A location not known to be within a specific `IRVariable`.
*/
abstract class MemoryLocation extends TMemoryLocation {
abstract string toString();
abstract VirtualVariable getVirtualVariable();
abstract Type getType();
abstract string getUniqueId();
}
class MemoryAccess extends TMemoryAccess {
string toString() {
none()
}
VirtualVariable getVirtualVariable() {
none()
}
predicate isPartialMemoryAccess() {
none()
}
abstract class VirtualVariable extends MemoryLocation {
}
/**
* An access to memory within a single known `IRVariable`. The variable may be either an unescaped variable
* (with its own `VirtualIRVariable`) or an escaped variable (assiged to `UnknownVirtualVariable`).
* (with its own `VirtualIRVariable`) or an escaped variable (assigned to `UnknownVirtualVariable`).
*/
class VariableMemoryAccess extends TVariableMemoryAccess, MemoryAccess {
class VariableMemoryLocation extends TVariableMemoryLocation, MemoryLocation {
IRVariable var;
Type type;
IntValue startBitOffset;
IntValue endBitOffset;
VariableMemoryAccess() {
this = TVariableMemoryAccess(var, startBitOffset, endBitOffset)
VariableMemoryLocation() {
this = TVariableMemoryLocation(var, type, startBitOffset, endBitOffset)
}
override final string toString() {
exists(string partialString |
result = var.toString() + Interval::getIntervalString(startBitOffset, endBitOffset) + partialString and
if isPartialMemoryAccess() then
partialString = " (partial)"
else
partialString = ""
)
result = var.toString() + Interval::getIntervalString(startBitOffset, endBitOffset) + "<" + type.toString() + ">"
}
final override VirtualVariable getVirtualVariable() {
result = getVirtualVariable(var) or
not exists(getVirtualVariable(var)) and result = getUnknownVirtualVariable(var.getEnclosingIRFunction())
override final Type getType() {
result = type
}
IntValue getStartBitOffset() {
final IntValue getStartBitOffset() {
result = startBitOffset
}
IntValue getEndBitOffset() {
final IntValue getEndBitOffset() {
result = endBitOffset
}
@@ -176,138 +92,199 @@ class VariableMemoryAccess extends TVariableMemoryAccess, MemoryAccess {
result = var
}
final override predicate isPartialMemoryAccess() {
not exists(getVirtualVariable(var)) or
getStartBitOffset() != 0
or
not Ints::isEQ(getEndBitOffset(), Ints::add(getStartBitOffset(), Ints::mul(var.getType().getSize(), 8)))
override final string getUniqueId() {
result = var.getUniqueId() + Interval::getIntervalString(startBitOffset, endBitOffset) + "<" +
getTypeIdentityString(type) + ">"
}
override final VirtualVariable getVirtualVariable() {
if variableAddressEscapes(var) then
result = TUnknownVirtualVariable(var.getEnclosingIRFunction())
else
result = TVariableMemoryLocation(var, var.getType(), 0, var.getType().getSize() * 8)
}
/**
* Holds if this memory location covers the entire variable.
*/
final predicate coversEntireVariable() {
startBitOffset = 0 and
endBitOffset = var.getType().getSize() * 8
}
}
/**
* Represents the `MemoryLocation` for an `IRVariable` that acts as its own `VirtualVariable`. Includes any
* `VariableMemoryLocation` that exactly overlaps its entire `IRVariable`, and only if that `IRVariable` does not
* escape.
*/
class VariableVirtualVariable extends VariableMemoryLocation, VirtualVariable {
VariableVirtualVariable() {
not variableAddressEscapes(var) and
type = var.getType() and
coversEntireVariable()
}
}
/**
* An access to memory that is not known to be confined to a specific `IRVariable`.
*/
class UnknownMemoryAccess extends TUnknownMemoryAccess, MemoryAccess {
UnknownVirtualVariable vvar;
UnknownMemoryAccess() {
this = TUnknownMemoryAccess(vvar)
class UnknownMemoryLocation extends TUnknownMemoryLocation, MemoryLocation {
IRFunction irFunc;
UnknownMemoryLocation() {
this = TUnknownMemoryLocation(irFunc)
}
final override string toString() {
result = vvar.toString()
override final string toString() {
result = "{Unknown}"
}
final override VirtualVariable getVirtualVariable() {
result = vvar
override final VirtualVariable getVirtualVariable() {
result = TUnknownVirtualVariable(irFunc)
}
final override predicate isPartialMemoryAccess() {
any()
override final Type getType() {
result instanceof UnknownType
}
override final string getUniqueId() {
result = "{Unknown}"
}
}
/**
* An access to all aliased memory.
*/
class TotalUnknownMemoryAccess extends TTotalUnknownMemoryAccess, MemoryAccess {
UnknownVirtualVariable vvar;
TotalUnknownMemoryAccess() {
this = TTotalUnknownMemoryAccess(vvar)
class UnknownVirtualVariable extends TUnknownVirtualVariable, VirtualVariable {
IRFunction irFunc;
UnknownVirtualVariable() {
this = TUnknownVirtualVariable(irFunc)
}
final override string toString() {
result = vvar.toString()
override final string toString() {
result = "{AllAliased}"
}
final override VirtualVariable getVirtualVariable() {
result = vvar
override final Type getType() {
result instanceof UnknownType
}
override final string getUniqueId() {
result = " " + toString()
}
override final VirtualVariable getVirtualVariable() {
result = this
}
}
Overlap getOverlap(MemoryAccess def, MemoryAccess use) {
Overlap getOverlap(MemoryLocation def, MemoryLocation use) {
// The def and the use must have the same virtual variable, or no overlap is possible.
def.getVirtualVariable() = use.getVirtualVariable() and
(
// A TotalUnknownMemoryAccess must totally overlap any access to the same virtual variable.
def instanceof TotalUnknownMemoryAccess and result instanceof MustTotallyOverlap or
// An UnknownMemoryAccess may partially overlap any access to the same virtual variable.
def instanceof UnknownMemoryAccess and result instanceof MayPartiallyOverlap or
exists(VariableMemoryAccess defVariableAccess |
defVariableAccess = def and
// An UnknownVirtualVariable must totally overlap any location within the same virtual variable.
def instanceof UnknownVirtualVariable and result instanceof MustTotallyOverlap or
// An UnknownMemoryLocation may partially overlap any Location within the same virtual variable.
def instanceof UnknownMemoryLocation and result instanceof MayPartiallyOverlap or
exists(VariableMemoryLocation defVariableLocation |
defVariableLocation = def and
(
(
// A VariableMemoryAccess may partially overlap an unknown access to the same virtual variable.
((use instanceof UnknownMemoryAccess) or (use instanceof TotalUnknownMemoryAccess)) and
// A VariableMemoryLocation may partially overlap an unknown location within the same virtual variable.
((use instanceof UnknownMemoryLocation) or (use instanceof UnknownVirtualVariable)) and
result instanceof MayPartiallyOverlap
) or
// A VariableMemoryAccess overlaps another access to the same variable based on the relationship
// A VariableMemoryLocation overlaps another location within the same variable based on the relationship
// of the two offset intervals.
exists(VariableMemoryAccess useVariableAccess, IntValue defStartOffset, IntValue defEndOffset,
IntValue useStartOffset, IntValue useEndOffset |
useVariableAccess = use and
defStartOffset = defVariableAccess.getStartBitOffset() and
defEndOffset = defVariableAccess.getEndBitOffset() and
useStartOffset = useVariableAccess.getStartBitOffset() and
useEndOffset = useVariableAccess.getEndBitOffset() and
result = Interval::getOverlap(defStartOffset, defEndOffset, useStartOffset, useEndOffset)
exists(VariableMemoryLocation useVariableLocation, IntValue defStartOffset, IntValue defEndOffset,
IntValue useStartOffset, IntValue useEndOffset, Overlap intervalOverlap |
useVariableLocation = use and
// The def and use must access the same `IRVariable`.
defVariableLocation.getVariable() = useVariableLocation.getVariable() and
// The def and use intervals must overlap.
defStartOffset = defVariableLocation.getStartBitOffset() and
defEndOffset = defVariableLocation.getEndBitOffset() and
useStartOffset = useVariableLocation.getStartBitOffset() and
useEndOffset = useVariableLocation.getEndBitOffset() and
intervalOverlap = Interval::getOverlap(defStartOffset, defEndOffset, useStartOffset, useEndOffset) and
if intervalOverlap instanceof MustExactlyOverlap then (
if defVariableLocation.getType() = useVariableLocation.getType() then (
// The def and use types match, so it's an exact overlap.
result instanceof MustExactlyOverlap
)
else (
// The def and use types are not the same, so it's just a total overlap.
result instanceof MustTotallyOverlap
)
)
else if defVariableLocation.coversEntireVariable() then (
// The definition covers the entire variable, so assume that it totally overlaps the use, even if the
// interval for the use is unknown or outside the bounds of the variable.
result instanceof MustTotallyOverlap
)
else (
// Just use the overlap relation of the interval.
result = intervalOverlap
)
)
)
)
)
}
MemoryAccess getResultMemoryAccess(Instruction instr) {
MemoryLocation getResultMemoryLocation(Instruction instr) {
exists(MemoryAccessKind kind |
kind = instr.getResultMemoryAccess() and
(
(
kind.usesAddressOperand() and
if hasResultMemoryAccess(instr, _, _, _, _) then (
exists(IRVariable var, Type type, IntValue startBitOffset, IntValue endBitOffset |
hasResultMemoryAccess(instr, var, type, startBitOffset, endBitOffset) and
result = TVariableMemoryLocation(var, type, startBitOffset, endBitOffset)
)
)
else (
result = TUnknownMemoryLocation(instr.getEnclosingIRFunction())
)
) or
(
kind instanceof EscapedMemoryAccess and
result = TUnknownVirtualVariable(instr.getEnclosingIRFunction())
) or
(
kind instanceof EscapedMayMemoryAccess and
result = TUnknownMemoryLocation(instr.getEnclosingIRFunction())
)
)
)
}
MemoryLocation getOperandMemoryLocation(MemoryOperand operand) {
exists(MemoryAccessKind kind |
kind = operand.getMemoryAccess() and
(
(
kind.usesAddressOperand() and
if hasOperandMemoryAccess(operand, _, _, _, _) then (
exists(IRVariable var, Type type, IntValue startBitOffset, IntValue endBitOffset |
hasOperandMemoryAccess(operand, var, type, startBitOffset, endBitOffset) and
result = TVariableMemoryLocation(var, type, startBitOffset, endBitOffset)
)
)
else (
result = TUnknownMemoryLocation(operand.getEnclosingIRFunction())
)
) or
(
kind instanceof EscapedMemoryAccess and
result = TUnknownVirtualVariable(operand.getEnclosingIRFunction())
) or
(
kind instanceof EscapedMayMemoryAccess and
result = TUnknownMemoryLocation(operand.getEnclosingIRFunction())
)
)
)
}


@@ -1,6 +1,8 @@
private import SSAConstructionInternal
private import OldIR
private import Alias
private import SSAConstruction
private import DebugSSA
/**
 * Property provider that dumps the memory location of each result. Useful for debugging SSA
@@ -8,13 +10,100 @@ private import Alias
*/
class PropertyProvider extends IRPropertyProvider {
override string getInstructionProperty(Instruction instruction, string key) {
exists(MemoryLocation location |
location = getResultMemoryLocation(instruction) and
(
key = "ResultMemoryLocation" and result = location.toString() or
key = "ResultVirtualVariable" and result = location.getVirtualVariable().toString()
)
)
or
exists(MemoryLocation location |
location = getOperandMemoryLocation(instruction.getAnOperand()) and
(
key = "OperandMemoryLocation" and result = location.toString() or
key = "OperandVirtualVariable" and result = location.getVirtualVariable().toString()
)
) or
exists(MemoryLocation useLocation, IRBlock defBlock, int defRank, int defIndex |
hasDefinitionAtRank(useLocation, _, defBlock, defRank, defIndex) and
defBlock.getInstruction(defIndex) = instruction and
key = "DefinitionRank[" + useLocation.toString() + "]" and
result = defRank.toString()
) or
exists(MemoryLocation useLocation, IRBlock useBlock, int useRank |
hasUseAtRank(useLocation, useBlock, useRank, instruction) and
key = "UseRank[" + useLocation.toString() + "]" and
result = useRank.toString()
) or
exists(MemoryLocation useLocation, IRBlock defBlock, int defRank, int defIndex |
hasDefinitionAtRank(useLocation, _, defBlock, defRank, defIndex) and
defBlock.getInstruction(defIndex) = instruction and
key = "DefinitionReachesUse[" + useLocation.toString() + "]" and
result = strictconcat(IRBlock useBlock, int useRank, int useIndex |
exists(Instruction useInstruction |
hasUseAtRank(useLocation, useBlock, useRank, useInstruction) and
useBlock.getInstruction(useIndex) = useInstruction and
definitionReachesUse(useLocation, defBlock, defRank, useBlock, useRank)
) |
useBlock.getDisplayIndex().toString() + "_" + useIndex, ", " order by useBlock.getDisplayIndex(), useIndex
)
)
}
override string getBlockProperty(IRBlock block, string key) {
exists(MemoryLocation useLocation, int defRank, int defIndex |
hasDefinitionAtRank(useLocation, _, block, defRank, defIndex) and
defIndex = -1 and
key = "DefinitionRank(Phi)[" + useLocation.toString() + "]" and
result = defRank.toString()
) or
exists(MemoryLocation useLocation, MemoryLocation defLocation, int defRank, int defIndex |
hasDefinitionAtRank(useLocation, defLocation, block, defRank, defIndex) and
defIndex = -1 and
key = "DefinitionReachesUse(Phi)[" + useLocation.toString() + "]" and
result = strictconcat(IRBlock useBlock, int useRank, int useIndex |
exists(Instruction useInstruction |
hasUseAtRank(useLocation, useBlock, useRank, useInstruction) and
useBlock.getInstruction(useIndex) = useInstruction and
definitionReachesUse(useLocation, block, defRank, useBlock, useRank) and
exists(getOverlap(defLocation, useLocation))
) |
useBlock.getDisplayIndex().toString() + "_" + useIndex, ", " order by useBlock.getDisplayIndex(), useIndex
)
) or
exists(MemoryLocation useLocation, IRBlock predBlock, IRBlock defBlock, int defIndex, Overlap overlap |
hasPhiOperandDefinition(_, useLocation, block, predBlock, defBlock, defIndex, overlap) and
key = "PhiUse[" + useLocation.toString() + " from " + predBlock.getDisplayIndex().toString() + "]" and
result = defBlock.getDisplayIndex().toString() + "_" + defIndex + " (" + overlap.toString() + ")"
) or
(
key = "LiveOnEntry" and
result = strictconcat(MemoryLocation useLocation |
locationLiveOnEntryToBlock(useLocation, block) |
useLocation.toString(), ", " order by useLocation.toString()
)
) or
(
key = "LiveOnExit" and
result = strictconcat(MemoryLocation useLocation |
locationLiveOnExitFromBlock(useLocation, block) |
useLocation.toString(), ", " order by useLocation.toString()
)
) or
(
key = "DefsLiveOnEntry" and
result = strictconcat(MemoryLocation defLocation |
definitionLiveOnEntryToBlock(defLocation, block) |
defLocation.toString(), ", " order by defLocation.toString()
)
) or
(
key = "DefsLiveOnExit" and
result = strictconcat(MemoryLocation defLocation |
definitionLiveOnExitFromBlock(defLocation, block) |
defLocation.toString(), ", " order by defLocation.toString()
)
)
}
}


@@ -2,6 +2,7 @@ import SSAConstructionInternal
import cpp
private import semmle.code.cpp.ir.implementation.Opcode
private import semmle.code.cpp.ir.internal.OperandTag
private import semmle.code.cpp.ir.internal.Overlap
private import NewIR
private class OldBlock = Reachability::ReachableBlock;
@@ -24,21 +25,6 @@ cached private module Cached {
instr = WrappedInstruction(result)
}
private IRVariable getNewIRVariable(OldIR::IRVariable var) {
// This is just a type cast. Both classes derive from the same newtype.
result = var
@@ -48,8 +34,8 @@ cached private module Cached {
WrappedInstruction(OldInstruction oldInstruction) {
not oldInstruction instanceof OldIR::PhiInstruction
} or
Phi(OldBlock block, Alias::MemoryLocation defLocation) {
definitionHasPhiNode(defLocation, block)
} or
Chi(OldInstruction oldInstruction) {
not oldInstruction instanceof OldIR::PhiInstruction and
@@ -73,33 +59,41 @@ cached private module Cached {
}
cached predicate hasModeledMemoryResult(Instruction instruction) {
exists(Alias::getResultMemoryLocation(getOldInstruction(instruction))) or
instruction instanceof PhiInstruction or // Phis always have modeled results
instruction instanceof ChiInstruction // Chis always have modeled results
}
cached Instruction getRegisterOperandDefinition(Instruction instruction, RegisterOperandTag tag) {
exists(OldInstruction oldInstruction, OldIR::RegisterOperand oldOperand |
oldInstruction = getOldInstruction(instruction) and
oldOperand = oldInstruction.getAnOperand() and
tag = oldOperand.getOperandTag() and
result = getNewInstruction(oldOperand.getDefinitionInstruction())
)
}
cached Instruction getMemoryOperandDefinition(Instruction instruction, MemoryOperandTag tag, Overlap overlap) {
exists(OldInstruction oldInstruction, OldIR::NonPhiMemoryOperand oldOperand |
oldInstruction = getOldInstruction(instruction) and
oldOperand = oldInstruction.getAnOperand() and
tag = oldOperand.getOperandTag() and
(
(
if exists(Alias::getOperandMemoryLocation(oldOperand)) then (
exists(OldBlock useBlock, int useRank, Alias::MemoryLocation useLocation, Alias::MemoryLocation defLocation,
OldBlock defBlock, int defRank, int defOffset |
useLocation = Alias::getOperandMemoryLocation(oldOperand) and
hasDefinitionAtRank(useLocation, defLocation, defBlock, defRank, defOffset) and
hasUseAtRank(useLocation, useBlock, useRank, oldInstruction) and
definitionReachesUse(useLocation, defBlock, defRank, useBlock, useRank) and
overlap = Alias::getOverlap(defLocation, useLocation) and
result = getDefinitionOrChiInstruction(defBlock, defOffset, defLocation)
)
)
else (
result = instruction.getEnclosingIRFunction().getUnmodeledDefinitionInstruction() and
overlap instanceof MustTotallyOverlap
)
) or
// Connect any definitions that are not being modeled in SSA to the
@@ -108,24 +102,26 @@ cached private module Cached {
instruction instanceof UnmodeledUseInstruction and
tag instanceof UnmodeledUseOperandTag and
oldDefinition = oldOperand.getDefinitionInstruction() and
not exists(Alias::getResultMemoryLocation(oldDefinition)) and
result = getNewInstruction(oldDefinition) and
overlap instanceof MustTotallyOverlap
)
)
) or
instruction = Chi(getOldInstruction(result)) and
tag instanceof ChiPartialOperandTag and
overlap instanceof MustExactlyOverlap
or
exists(IRFunction f |
tag instanceof UnmodeledUseOperandTag and
result = f.getUnmodeledDefinitionInstruction() and
instruction = f.getUnmodeledUseInstruction() and
overlap instanceof MustTotallyOverlap
)
or
tag instanceof ChiTotalOperandTag and
result = getChiInstructionTotalOperand(instruction) and
overlap instanceof MustExactlyOverlap
}
cached Type getInstructionOperandType(Instruction instr, TypedOperandTag tag) {
@@ -148,35 +144,26 @@ cached private module Cached {
)
}
cached Instruction getPhiOperandDefinition(PhiInstruction instr,
IRBlock newPredecessorBlock, Overlap overlap) {
exists(Alias::MemoryLocation defLocation, Alias::MemoryLocation useLocation, OldBlock phiBlock, OldBlock predBlock,
OldBlock defBlock, int defOffset |
hasPhiOperandDefinition(defLocation, useLocation, phiBlock, predBlock, defBlock, defOffset, overlap) and
instr = Phi(phiBlock, useLocation) and
newPredecessorBlock = getNewBlock(predBlock) and
result = getDefinitionOrChiInstruction(defBlock, defOffset, defLocation)
)
}
cached Instruction getChiInstructionTotalOperand(ChiInstruction chiInstr) {
exists(Alias::VirtualVariable vvar, OldInstruction oldInstr, Alias::MemoryLocation defLocation, OldBlock defBlock,
int defRank, int defOffset, OldBlock useBlock, int useRank |
chiInstr = Chi(oldInstr) and
vvar = Alias::getResultMemoryLocation(oldInstr).getVirtualVariable() and
hasDefinitionAtRank(vvar, defLocation, defBlock, defRank, defOffset) and
hasUseAtRank(vvar, useBlock, useRank, oldInstr) and
definitionReachesUse(vvar, defBlock, defRank, useBlock, useRank) and
result = getDefinitionOrChiInstruction(defBlock, defOffset, vvar)
)
}
@@ -274,9 +261,9 @@ cached private module Cached {
isGLValue = false
)
or
exists(Alias::MemoryLocation location |
instruction = Phi(_, location) and
type = location.getType() and
isGLValue = false
)
or
@@ -372,97 +359,248 @@ cached private module Cached {
result = getNewInstruction(oldInstruction)
)
}
}
private Instruction getNewInstruction(OldInstruction instr) {
getOldInstruction(result) = instr
}
/**
* Holds if instruction `def` needs to have a `Chi` instruction inserted after it, to account for a partial definition
* of a virtual variable. The `Chi` instruction provides a definition of the entire virtual variable of which the
* original definition location is a member.
*/
private predicate hasChiNode(Alias::VirtualVariable vvar, OldInstruction def) {
exists(Alias::MemoryLocation defLocation |
defLocation = Alias::getResultMemoryLocation(def) and
defLocation.getVirtualVariable() = vvar and
// If the definition totally (or exactly) overlaps the virtual variable, then there's no need for a `Chi`
// instruction.
Alias::getOverlap(defLocation, vvar) instanceof MayPartiallyOverlap
)
}
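The Chi-insertion rule above can be modeled in plain Python. This is an illustrative sketch only (names and the half-open bit-interval convention are assumptions, not the library's API): a `Chi` is needed exactly when a definition overlaps its enclosing virtual variable without covering all of it.

```python
def is_partial_write(def_start, def_end, var_start, var_end):
    """True if [def_start, def_end) writes part, but not all, of the
    virtual variable occupying [var_start, var_end) - i.e. a Chi is needed."""
    covers_whole = def_start <= var_start and def_end >= var_end
    overlaps = def_start < var_end and var_start < def_end
    return overlaps and not covers_whole


VAR = (0, 128)  # e.g. a 16-byte struct occupying bits [0, 128)

assert is_partial_write(0, 32, *VAR)        # field store: Chi inserted
assert not is_partial_write(0, 128, *VAR)   # whole-struct store: no Chi
```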
private import PhiInsertion
/**
* Module to handle insertion of `Phi` instructions at the correct blocks. We insert a `Phi` instruction at the
* beginning of a block for a given location when that block is on the dominance frontier of a definition of the
* location and there is a use of that location reachable from that block without an intervening definition of the
* location.
* Within the approach outlined above, we treat a location slightly differently depending on whether or not it is a
* virtual variable. For a virtual variable, we will insert a `Phi` instruction on the dominance frontier if there is
* a use of any member location of that virtual variable that is reachable from the `Phi` instruction. For a location
* that is not a virtual variable, we insert a `Phi` instruction only if there is an exactly-overlapping use of the
* location reachable from the `Phi` instruction. This ensures that we insert a `Phi` instruction for a non-virtual
* variable only if doing so would allow dataflow analysis to get a more precise result than if we just used a `Phi`
* instruction for the virtual variable as a whole.
*/
private module PhiInsertion {
/**
* Holds if a `Phi` instruction needs to be inserted for location `defLocation` at the beginning of block `phiBlock`.
*/
predicate definitionHasPhiNode(Alias::MemoryLocation defLocation, OldBlock phiBlock) {
exists(OldBlock defBlock |
phiBlock = Dominance::getDominanceFrontier(defBlock) and
definitionHasDefinitionInBlock(defLocation, defBlock) and
/* We can also eliminate those nodes where the definition is not live on any incoming edge */
definitionLiveOnEntryToBlock(defLocation, phiBlock)
)
}
/**
* Holds if the memory location `defLocation` has a definition in block `block`, either because of an existing
* instruction, a `Phi` node, or a `Chi` node.
*/
private predicate definitionHasDefinitionInBlock(Alias::MemoryLocation defLocation, OldBlock block) {
definitionHasPhiNode(defLocation, block) or
exists(OldInstruction def, Alias::MemoryLocation resultLocation |
def.getBlock() = block and
resultLocation = Alias::getResultMemoryLocation(def) and
(
defLocation = resultLocation or
// For a virtual variable, any definition of a member location will either generate a `Chi` node that defines
// the virtual variable, or will totally overlap the virtual variable. Either way, treat this as a definition of
// the virtual variable.
defLocation = resultLocation.getVirtualVariable()
)
)
}
/**
* Holds if there is a use at (`block`, `index`) that could consume the result of a `Phi` instruction for
* `defLocation`.
*/
private predicate definitionHasUse(Alias::MemoryLocation defLocation, OldBlock block, int index) {
exists(OldInstruction use |
block.getInstruction(index) = use and
if defLocation instanceof Alias::VirtualVariable then (
exists(Alias::MemoryLocation useLocation |
// For a virtual variable, any use of a location that is a member of the virtual variable counts as a use.
useLocation = Alias::getOperandMemoryLocation(use.getAnOperand()) and
defLocation = useLocation.getVirtualVariable()
) or
// A `Chi` instruction consumes the enclosing virtual variable of its use location.
hasChiNode(defLocation, use)
)
else (
// For other locations, only an exactly-overlapping use of the same location counts as a use.
defLocation = Alias::getOperandMemoryLocation(use.getAnOperand()) and
Alias::getOverlap(defLocation, defLocation) instanceof MustExactlyOverlap
)
)
}
/**
* Holds if the location `defLocation` is redefined at (`block`, `index`). A location is considered "redefined" if
* there is a definition that would prevent a previous definition of `defLocation` from being consumed as the operand
* of a `Phi` node that occurs after the redefinition.
*/
private predicate definitionHasRedefinition(Alias::MemoryLocation defLocation, OldBlock block, int index) {
exists(OldInstruction redef, Alias::MemoryLocation redefLocation |
block.getInstruction(index) = redef and
redefLocation = Alias::getResultMemoryLocation(redef) and
if defLocation instanceof Alias::VirtualVariable then (
// For a virtual variable, the definition may be consumed by any use of a location that is a member of the
// virtual variable. Thus, the definition is live until a subsequent redefinition of the entire virtual
// variable.
exists(Overlap overlap |
overlap = Alias::getOverlap(redefLocation, defLocation) and
not overlap instanceof MayPartiallyOverlap
)
)
else (
// For other locations, the definition may only be consumed by an exactly-overlapping use of the same location.
// Thus, the definition is live until a subsequent definition of any location that may overlap the original
// definition location.
exists(Alias::getOverlap(redefLocation, defLocation))
)
)
}
/**
* Holds if the definition `defLocation` is live on entry to block `block`. The definition is live if there is at
* least one use of that definition before any intervening instruction that redefines the definition location.
*/
predicate definitionLiveOnEntryToBlock(Alias::MemoryLocation defLocation, OldBlock block) {
exists(int firstAccess |
definitionHasUse(defLocation, block, firstAccess) and
firstAccess = min(int index |
definitionHasUse(defLocation, block, index)
or
definitionHasRedefinition(defLocation, block, index)
)
)
or
(definitionLiveOnExitFromBlock(defLocation, block) and not definitionHasRedefinition(defLocation, block, _))
}
/**
* Holds if the definition `defLocation` is live on exit from block `block`. The definition is live on exit if it is
* live on entry to any of the successors of `block`.
*/
pragma[noinline]
predicate definitionLiveOnExitFromBlock(Alias::MemoryLocation defLocation, OldBlock block) {
definitionLiveOnEntryToBlock(defLocation, block.getAFeasibleSuccessor())
}
}
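The placement rule implemented by `PhiInsertion` can be sketched as a standard worklist over dominance frontiers, pruned by liveness. This Python model is illustrative only (the function names and the dict/set representation are assumptions); note that a `Phi` is itself a definition, so placement iterates to a fixed point, mirroring the recursion between `definitionHasPhiNode` and `definitionHasDefinitionInBlock`.

```python
def phi_blocks(def_blocks, dominance_frontier, live_on_entry):
    """def_blocks: blocks containing a definition of the location.
    dominance_frontier: block -> set of dominance-frontier blocks.
    live_on_entry: blocks where the location is live on entry.
    Returns the blocks that get a Phi for the location."""
    result, worklist = set(), list(def_blocks)
    while worklist:
        block = worklist.pop()
        for frontier in dominance_frontier.get(block, ()):
            if frontier in live_on_entry and frontier not in result:
                result.add(frontier)
                # The Phi is a new definition; process its frontier too.
                worklist.append(frontier)
    return result


# Diamond CFG: entry -> {a, b} -> join. A definition in `a` needs a Phi
# at `join`, but only if the location is live there.
df = {"a": {"join"}, "b": {"join"}}
assert phi_blocks({"a"}, df, {"join"}) == {"join"}
assert phi_blocks({"a"}, df, set()) == set()  # dead at join: no Phi
```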
private import DefUse
/**
* Module containing the predicates that connect uses to their reaching definition. The reaching definitions are
* computed separately for each unique use `MemoryLocation`. An instruction is treated as a definition of a use location
* if the defined location overlaps the use location in any way. Thus, a single instruction may serve as a definition
* for multiple use locations, since a single definition location may overlap many use locations.
*
* Definitions and uses are identified by a block and an integer "offset". An offset of -1 indicates the definition
* from a `Phi` instruction at the beginning of the block. An offset of 2*i indicates a definition or use on the
* instruction at index `i` in the block. An offset of 2*i+1 indicates a definition or use on the `Chi` instruction that
* will be inserted immediately after the instruction at index `i` in the block.
*
* For a given use location, each definition and use is also assigned a "rank" within its block. The rank is simply the
* one-based index of that definition or use within the list of definitions and uses of that location within the block,
* ordered by offset. The rank allows the various reachability predicates to be computed more efficiently than they
* would if based solely on offset, since the set of possible ranks is dense while the set of possible offsets is
* potentially very sparse.
*/
module DefUse {
/**
* Gets the `Instruction` for the definition at offset `defOffset` in block `defBlock`.
*/
pragma[inline]
bindingset[defOffset, defLocation]
Instruction getDefinitionOrChiInstruction(OldBlock defBlock, int defOffset,
Alias::MemoryLocation defLocation) {
(
defOffset >= 0 and
exists(OldInstruction oldInstr |
oldInstr = defBlock.getInstruction(defOffset / 2) and
if (defOffset % 2) > 0 then (
// An odd offset corresponds to the `Chi` instruction.
result = Chi(oldInstr)
)
else (
// An even offset corresponds to the original instruction.
result = getNewInstruction(oldInstr)
)
)
) or
(
defOffset < 0 and
result = Phi(defBlock, defLocation)
)
}
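The offset encoding that `getDefinitionOrChiInstruction` decodes (-1 for a `Phi` at block entry, `2*i` for the instruction at index `i`, `2*i+1` for the `Chi` inserted immediately after it) can be sketched in Python. The function name here is an assumption for illustration:

```python
def decode_offset(def_offset):
    """Map a def/use offset to (kind, instruction index within the block)."""
    if def_offset < 0:
        return ("Phi", None)  # Phi instruction at the start of the block
    index, is_chi = divmod(def_offset, 2)
    # Odd offsets name the Chi inserted after the instruction at `index`;
    # even offsets name the instruction itself.
    return ("Chi" if is_chi else "Instruction", index)


assert decode_offset(-1) == ("Phi", None)
assert decode_offset(6) == ("Instruction", 3)
assert decode_offset(7) == ("Chi", 3)
```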
/**
 * Gets the rank index of a hypothetical use one instruction past the end of
* the block. This index can be used to determine if a definition reaches the
* end of the block, even if the definition is the last instruction in the
* block.
*/
private int exitRank(Alias::MemoryLocation useLocation, OldBlock block) {
result = max(int rankIndex | defUseRank(useLocation, block, rankIndex, _)) + 1
}
/**
* Holds if a definition that overlaps `useLocation` at (`defBlock`, `defRank`) reaches the use of `useLocation` at
* (`useBlock`, `useRank`) without any intervening definitions that overlap `useLocation`, where `defBlock` and
* `useBlock` are the same block.
*/
private predicate definitionReachesUseWithinBlock(Alias::MemoryLocation useLocation, OldBlock defBlock,
int defRank, OldBlock useBlock, int useRank) {
defBlock = useBlock and
hasDefinitionAtRank(useLocation, _, defBlock, defRank, _) and
hasUseAtRank(useLocation, useBlock, useRank, _) and
definitionReachesRank(useLocation, defBlock, defRank, useRank)
}
/**
* Holds if a definition that overlaps `useLocation` at (`defBlock`, `defRank`) reaches the use of `useLocation` at
* (`useBlock`, `useRank`) without any intervening definitions that overlap `useLocation`.
*/
predicate definitionReachesUse(Alias::MemoryLocation useLocation, OldBlock defBlock,
int defRank, OldBlock useBlock, int useRank) {
hasUseAtRank(useLocation, useBlock, useRank, _) and
(
definitionReachesUseWithinBlock(useLocation, defBlock, defRank, useBlock,
useRank) or
(
definitionReachesEndOfBlock(useLocation, defBlock, defRank,
useBlock.getAFeasiblePredecessor()) and
not definitionReachesUseWithinBlock(useLocation, useBlock, _, useBlock, useRank)
)
)
}
/**
* Holds if the definition that overlaps `useLocation` at `(block, defRank)` reaches the rank
* index `reachesRank` in block `block`.
*/
private predicate definitionReachesRank(Alias::MemoryLocation useLocation, OldBlock block, int defRank,
int reachesRank) {
hasDefinitionAtRank(useLocation, _, block, defRank, _) and
reachesRank <= exitRank(useLocation, block) and // Without this, the predicate would be infinite.
(
// The def always reaches the next use, even if there is also a def on the
// use instruction.
@@ -470,87 +608,178 @@ cached private module Cached {
(
// If the def reached the previous rank, it also reaches the current rank,
// unless there was another def at the previous rank.
definitionReachesRank(useLocation, block, defRank, reachesRank - 1) and
not hasDefinitionAtRank(useLocation, _, block, reachesRank - 1, _)
)
)
}
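The rank-based intra-block reachability above can be modeled directly: defs and uses of a location are densely ranked within a block, and a definition reaches a later rank unless another definition intervenes. This Python sketch is illustrative only (names are assumptions); like the QL, a definition always reaches the very next rank even if that rank also carries a definition.

```python
def definition_reaches_rank(def_ranks, def_rank, reaches_rank):
    """def_ranks: set of ranks within the block that carry a definition.
    Holds if the definition at def_rank reaches reaches_rank with no
    intervening definition strictly between them."""
    return def_rank in def_ranks and all(
        r not in def_ranks for r in range(def_rank + 1, reaches_rank)
    )


defs = {1, 4}
assert definition_reaches_rank(defs, 1, 3)      # no redefinition in between
assert not definition_reaches_rank(defs, 1, 5)  # killed by the def at rank 4
assert definition_reaches_rank(defs, 1, 2)      # always reaches the next rank
```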
/**
* Holds if the definition that overlaps `useLocation` at `(defBlock, defRank)` reaches the end of
* block `block` without any intervening definitions that overlap `useLocation`.
*/
predicate definitionReachesEndOfBlock(Alias::MemoryLocation useLocation, OldBlock defBlock,
int defRank, OldBlock block) {
hasDefinitionAtRank(useLocation, _, defBlock, defRank, _) and
(
(
// If we're looking at the def's own block, just see if it reaches the exit
// rank of the block.
block = defBlock and
locationLiveOnExitFromBlock(useLocation, defBlock) and
definitionReachesRank(useLocation, defBlock, defRank, exitRank(useLocation, defBlock))
) or
exists(OldBlock idom |
definitionReachesEndOfBlock(useLocation, defBlock, defRank, idom) and
noDefinitionsSinceIDominator(useLocation, idom, block)
)
)
}
pragma[noinline]
private predicate noDefinitionsSinceIDominator(Alias::MemoryLocation useLocation, OldBlock idom,
OldBlock block) {
Dominance::blockImmediatelyDominates(idom, block) and // It is sufficient to traverse the dominator graph, cf. discussion above.
locationLiveOnExitFromBlock(useLocation, block) and
not hasDefinition(useLocation, _, block, _)
}
/**
* Holds if the specified `useLocation` is live on entry to `block`. This holds if there is a use of `useLocation`
* that is reachable from the start of `block` without passing through a definition that overlaps `useLocation`.
* Note that even a partially-overlapping definition blocks liveness, because such a definition will insert a `Chi`
* instruction whose result totally overlaps the location.
*/
predicate locationLiveOnEntryToBlock(Alias::MemoryLocation useLocation, OldBlock block) {
definitionHasPhiNode(useLocation, block) or
exists(int firstAccess |
hasUse(useLocation, block, firstAccess, _) and
firstAccess = min(int offset |
hasUse(useLocation, block, offset, _)
or
hasNonPhiDefinition(useLocation, _, block, offset)
)
) or
(locationLiveOnExitFromBlock(useLocation, block) and not hasNonPhiDefinition(useLocation, _, block, _))
}
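The liveness predicates above are a standard backward may-liveness computation over the block graph: a location is live on entry if its first in-block access is a use, or if it is live on exit and no overlapping definition intervenes. A minimal imperative sketch of the same logic (the `Block`-level inputs `has_use_before_def` and `has_def` are hypothetical stand-ins for the offset comparisons in `locationLiveOnEntryToBlock`; Phi nodes are omitted):

```python
def compute_liveness(blocks, succs, has_use_before_def, has_def):
    """Backward may-liveness for a single memory location.

    blocks: list of block ids; succs: dict block -> list of successor blocks.
    has_use_before_def[b]: the first access to the location in b is a use.
    has_def[b]: b contains a definition overlapping the location.
    Returns (live_on_entry, live_on_exit) as sets of block ids.
    """
    live_in, live_out = set(), set()
    changed = True
    while changed:
        changed = False
        for b in blocks:
            # Live on exit if live on entry to any successor block.
            out = any(s in live_in for s in succs.get(b, []))
            # Live on entry if used before any def, or live through the block.
            inn = has_use_before_def.get(b, False) or \
                (out and not has_def.get(b, False))
            if out and b not in live_out:
                live_out.add(b)
                changed = True
            if inn and b not in live_in:
                live_in.add(b)
                changed = True
    return live_in, live_out
```

On a straight-line graph A -> B -> C with a use in C and an overlapping definition in B, the location is live on entry to C and live on exit from B, but the definition in B stops liveness from propagating back into A.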
/**
* Holds if the specified `useLocation` is live on exit from `block`.
*/
pragma[noinline]
predicate locationLiveOnExitFromBlock(Alias::MemoryLocation useLocation, OldBlock block) {
locationLiveOnEntryToBlock(useLocation, block.getAFeasibleSuccessor())
}
/**
* Holds if there is a definition at offset `offset` in block `block` that overlaps memory location `useLocation`.
* This predicate does not include definitions for Phi nodes.
*/
private predicate hasNonPhiDefinition(Alias::MemoryLocation useLocation, Alias::MemoryLocation defLocation,
OldBlock block, int offset) {
exists(OldInstruction def, Overlap overlap, int index |
defLocation = Alias::getResultMemoryLocation(def) and
block.getInstruction(index) = def and
overlap = Alias::getOverlap(defLocation, useLocation) and
if overlap instanceof MayPartiallyOverlap then
offset = (index * 2) + 1 // The use will be connected to the definition on the `Chi` instruction.
else
offset = index * 2 // The use will be connected to the definition on the original instruction.
)
}
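The `offset` scheme in `hasNonPhiDefinition` doubles the instruction index so that the `Chi` instruction inserted after a partial write can be ordered immediately after the instruction that produced it. A sketch of the encoding (function name is illustrative, not from the library):

```python
def definition_offset(index, may_partially_overlap):
    """Encode a definition's position within a block.

    A definition at instruction `index` occupies offset 2*index; if it only
    may-partially overlaps the use location, the use is instead connected to
    the `Chi` instruction, which occupies offset 2*index + 1. Even offsets
    are original instructions, odd offsets are Chi instructions, and Phi
    definitions (offset -1) sort before everything in the block.
    """
    return index * 2 + 1 if may_partially_overlap else index * 2
```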
/**
* Holds if there is a definition at offset `offset` in block `block` that overlaps memory location `useLocation`.
* This predicate includes definitions for Phi nodes (at offset -1).
*/
private predicate hasDefinition(Alias::MemoryLocation useLocation, Alias::MemoryLocation defLocation, OldBlock block,
int offset) {
(
// If there is a Phi node for the use location itself, treat that as a definition at offset -1.
offset = -1 and
if definitionHasPhiNode(useLocation, block) then (
defLocation = useLocation
)
else (
definitionHasPhiNode(defLocation, block) and
defLocation = useLocation.getVirtualVariable()
)
) or
hasNonPhiDefinition(useLocation, defLocation, block, offset)
}
/**
* Holds if there is a definition at offset `offset` in block `block` that overlaps memory location `useLocation`.
* `rankIndex` is the rank of the definition as computed by `defUseRank()`.
*/
predicate hasDefinitionAtRank(Alias::MemoryLocation useLocation, Alias::MemoryLocation defLocation,
OldBlock block, int rankIndex, int offset) {
hasDefinition(useLocation, defLocation, block, offset) and
defUseRank(useLocation, block, rankIndex, offset)
}
/**
* Holds if there is a use of `useLocation` on instruction `use` at offset `offset` in block `block`.
*/
private predicate hasUse(Alias::MemoryLocation useLocation, OldBlock block, int offset, OldInstruction use) {
exists(int index |
block.getInstruction(index) = use and
(
// A direct use of the location.
useLocation = Alias::getOperandMemoryLocation(use.getAnOperand()) and offset = index * 2 or
// A `Chi` instruction will include a use of the virtual variable.
hasChiNode(useLocation, use) and offset = (index * 2) + 1
)
)
}
/**
* Holds if there is a use of memory location `useLocation` on instruction `use` in block `block`. `rankIndex` is the
 * rank of the use as computed by `defUseRank`.
*/
predicate hasUseAtRank(Alias::MemoryLocation useLocation, OldBlock block, int rankIndex, OldInstruction use) {
exists(int offset |
hasUse(useLocation, block, offset, use) and
defUseRank(useLocation, block, rankIndex, offset)
)
}
/**
* Holds if there is a definition at offset `offset` in block `block` that overlaps memory location `useLocation`, or
* a use of `useLocation` at offset `offset` in block `block`. `rankIndex` is the sequence number of the definition
* or use within `block`, counting only uses of `useLocation` and definitions that overlap `useLocation`.
*/
private predicate defUseRank(Alias::MemoryLocation useLocation, OldBlock block, int rankIndex, int offset) {
offset = rank[rankIndex](int j | hasDefinition(useLocation, _, block, j) or hasUse(useLocation, block, j, _))
}
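`defUseRank` numbers every access to a location within a block by offset; QL's `rank[rankIndex](...)` is the 1-based position in sorted order. A sketch of the same computation (helper name is illustrative; offsets are assumed to already encode Phi as -1 and Chi as odd, as above):

```python
def def_use_ranks(def_offsets, use_offsets):
    """Map each offset to its 1-based rank within the block, mirroring
    defUseRank: definitions and uses of the same location are ranked
    together, ordered by offset."""
    offsets = sorted(set(def_offsets) | set(use_offsets))
    return {off: i + 1 for i, off in enumerate(offsets)}
```

With a Phi definition at offset -1, a store at instruction 2 (offset 4), and a load at instruction 5 (offset 10), the ranks come out as 1, 2, 3, which is the order `definitionReachesRank` walks when matching each use to the nearest preceding definition.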
/**
* Holds if the `Phi` instruction for location `useLocation` at the beginning of block `phiBlock` has an operand along
* the incoming edge from `predBlock`, where that operand's definition is at offset `defOffset` in block `defBlock`,
* and overlaps the use operand with overlap relationship `overlap`.
*/
pragma[inline]
predicate hasPhiOperandDefinition(Alias::MemoryLocation defLocation, Alias::MemoryLocation useLocation,
OldBlock phiBlock, OldBlock predBlock, OldBlock defBlock, int defOffset, Overlap overlap) {
exists(int defRank |
definitionHasPhiNode(useLocation, phiBlock) and
predBlock = phiBlock.getAFeasiblePredecessor() and
hasDefinitionAtRank(useLocation, defLocation, defBlock, defRank, defOffset) and
definitionReachesEndOfBlock(useLocation, defBlock, defRank, predBlock) and
overlap = Alias::getOverlap(defLocation, useLocation)
)
}
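`hasPhiOperandDefinition` gives a Phi one operand per feasible predecessor edge, namely the definition of the location that reaches the end of that predecessor. A compact sketch over per-block "last definition reaching exit" summaries (a hypothetical precomputed structure; the real predicate derives this from `definitionReachesEndOfBlock`):

```python
def phi_operands(phi_block, preds, last_def_at_exit):
    """For each predecessor edge of a Phi's block, pick the definition
    reaching the end of that predecessor.

    preds: dict block -> list of predecessor blocks.
    last_def_at_exit: dict block -> (def_block, def_offset) reaching its exit.
    Returns a list of (pred_block, def_block, def_offset) operand triples.
    """
    return [(p,) + last_def_at_exit[p] for p in preds[phi_block]]
```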
}
/**
 * Expose some of the internal predicates to PrintSSA.qll. We do this by publicly importing those modules in the
* `DebugSSA` module, which is then imported by PrintSSA.
*/
module DebugSSA {
import PhiInsertion
import DefUse
}
import CachedForDebugging
cached private module CachedForDebugging {
cached string getTempVariableUniqueId(IRTempVariable var) {
@@ -562,9 +791,16 @@ cached private module CachedForDebugging {
oldInstr = getOldInstruction(instr) and
result = "NonSSA: " + oldInstr.getUniqueId()
) or
exists(Alias::MemoryLocation location, OldBlock phiBlock, string specificity |
instr = Phi(phiBlock, location) and
result = "Phi Block(" + phiBlock.getUniqueId() + ")[" + specificity + "]: " + location.getUniqueId() and
if location instanceof Alias::VirtualVariable then (
// Sort Phi nodes for virtual variables before Phi nodes for member locations.
specificity = "g"
)
else (
specificity = "s"
)
) or
(
instr = Unreached(_) and

View File

@@ -50,11 +50,17 @@ module InstructionSanity {
/**
* Holds if instruction `instr` is missing an expected operand with tag `tag`.
*/
query predicate missingOperand(Instruction instr, string message, IRFunction func, string funcText) {
exists(OperandTag tag |
expectsOperand(instr, tag) and
not exists(NonPhiOperand operand |
operand = instr.getAnOperand() and
operand.getOperandTag() = tag
) and
message = "Instruction '" + instr.getOpcode().toString() + "' is missing an expected operand with tag '" +
tag.toString() + "' in function '$@'." and
func = instr.getEnclosingIRFunction() and
funcText = getIdentityString(func.getFunction())
)
}
@@ -302,7 +308,7 @@ class Instruction extends Construction::TInstruction {
result = type
}
string getResultTypeString() {
exists(string valcat |
valcat = getValueCategoryString(getResultType().toString()) and
if (getResultType() instanceof UnknownType and

View File

@@ -3,14 +3,18 @@ import Instruction
import IRBlock
import cpp
import semmle.code.cpp.ir.implementation.MemoryAccessKind
import semmle.code.cpp.ir.internal.Overlap
private import semmle.code.cpp.ir.internal.OperandTag
private newtype TOperand =
TRegisterOperand(Instruction useInstr, RegisterOperandTag tag, Instruction defInstr) {
defInstr = Construction::getRegisterOperandDefinition(useInstr, tag)
} or
TNonPhiMemoryOperand(Instruction useInstr, MemoryOperandTag tag, Instruction defInstr, Overlap overlap) {
defInstr = Construction::getMemoryOperandDefinition(useInstr, tag, overlap)
} or
TPhiOperand(PhiInstruction useInstr, Instruction defInstr, IRBlock predecessorBlock, Overlap overlap) {
defInstr = Construction::getPhiOperandDefinition(useInstr, predecessorBlock, overlap)
}
/**
@@ -43,6 +47,20 @@ class Operand extends TOperand {
none()
}
/**
* Gets the overlap relationship between the operand's definition and its use.
*/
Overlap getDefinitionOverlap() {
none()
}
/**
* Holds if the result of the definition instruction does not exactly overlap this use.
*/
final predicate isDefinitionInexact() {
not getDefinitionOverlap() instanceof MustExactlyOverlap
}
/**
* Gets a prefix to use when dumping the operand in an operand list.
*/
@@ -58,7 +76,19 @@ class Operand extends TOperand {
* For example: `this:r3_5`
*/
final string getDumpString() {
result = getDumpLabel() + getInexactSpecifier() + getDefinitionInstruction().getResultId()
}
/**
* Gets a string prefix to prepend to the operand's definition ID in an IR dump, specifying whether the operand is
* an exact or inexact use of its definition. For an inexact use, the prefix is "~". For an exact use, the prefix is
* the empty string.
*/
private string getInexactSpecifier() {
if isDefinitionInexact() then
result = "~"
else
result = ""
}
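The overlap relationship between a definition and its use drives both the SSA wiring and the dump format: any use that is not a `MustExactlyOverlap` use is printed with a `~` prefix. A sketch of the three-way relation and the prefix rule (the class names follow the QL `Overlap` hierarchy; the rest is illustrative):

```python
class Overlap: pass
class MustExactlyOverlap(Overlap): pass    # def and use cover exactly the same memory
class MustTotallyOverlap(Overlap): pass    # def covers all of the use, and possibly more
class MayPartiallyOverlap(Overlap): pass   # def may cover only part of the use

def inexact_specifier(overlap):
    """Mirror of getInexactSpecifier: '~' for any inexact use, else ''."""
    return "" if isinstance(overlap, MustExactlyOverlap) else "~"
```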
/**
@@ -104,10 +134,8 @@ class Operand extends TOperand {
*/
class MemoryOperand extends Operand {
MemoryOperand() {
this = TNonPhiMemoryOperand(_, _, _, _) or
this = TPhiOperand(_, _, _, _)
}
override predicate isGLValue() {
@@ -133,27 +161,17 @@ class MemoryOperand extends Operand {
}
}
/**
* An operand that is not an operand of a `PhiInstruction`.
*/
class NonPhiOperand extends Operand {
Instruction useInstr;
Instruction defInstr;
OperandTag tag;
NonPhiOperand() {
this = TRegisterOperand(useInstr, tag, defInstr) or
this = TNonPhiMemoryOperand(useInstr, tag, defInstr, _)
}
override final Instruction getUseInstruction() {
@@ -177,7 +195,32 @@ class NonPhiOperand extends Operand, TNonPhiOperand {
}
}
/**
* An operand that consumes a register (non-memory) result.
*/
class RegisterOperand extends NonPhiOperand, TRegisterOperand {
override RegisterOperandTag tag;
override final Overlap getDefinitionOverlap() {
// All register results overlap exactly with their uses.
result instanceof MustExactlyOverlap
}
}
class NonPhiMemoryOperand extends NonPhiOperand, MemoryOperand, TNonPhiMemoryOperand {
override MemoryOperandTag tag;
Overlap overlap;
NonPhiMemoryOperand() {
this = TNonPhiMemoryOperand(useInstr, tag, defInstr, overlap)
}
override final Overlap getDefinitionOverlap() {
result = overlap
}
}
class TypedOperand extends NonPhiMemoryOperand {
override TypedOperandTag tag;
override final Type getType() {
@@ -189,7 +232,7 @@ class TypedOperand extends NonPhiOperand, MemoryOperand {
* The address operand of an instruction that loads or stores a value from
* memory (e.g. `Load`, `Store`).
*/
class AddressOperand extends RegisterOperand {
override AddressOperandTag tag;
override string toString() {
@@ -216,7 +259,7 @@ class LoadOperand extends TypedOperand {
/**
* The source value operand of a `Store` instruction.
*/
class StoreValueOperand extends RegisterOperand {
override StoreValueOperandTag tag;
override string toString() {
@@ -227,7 +270,7 @@ class StoreValueOperand extends NonPhiOperand, RegisterOperand {
/**
* The sole operand of a unary instruction (e.g. `Convert`, `Negate`, `Copy`).
*/
class UnaryOperand extends RegisterOperand {
override UnaryOperandTag tag;
override string toString() {
@@ -238,7 +281,7 @@ class UnaryOperand extends NonPhiOperand, RegisterOperand {
/**
* The left operand of a binary instruction (e.g. `Add`, `CompareEQ`).
*/
class LeftOperand extends RegisterOperand {
override LeftOperandTag tag;
override string toString() {
@@ -249,7 +292,7 @@ class LeftOperand extends NonPhiOperand, RegisterOperand {
/**
* The right operand of a binary instruction (e.g. `Add`, `CompareEQ`).
*/
class RightOperand extends RegisterOperand {
override RightOperandTag tag;
override string toString() {
@@ -260,7 +303,7 @@ class RightOperand extends NonPhiOperand, RegisterOperand {
/**
* The condition operand of a `ConditionalBranch` or `Switch` instruction.
*/
class ConditionOperand extends RegisterOperand {
override ConditionOperandTag tag;
override string toString() {
@@ -272,7 +315,7 @@ class ConditionOperand extends NonPhiOperand, RegisterOperand {
* An operand of the special `UnmodeledUse` instruction, representing a value
* whose set of uses is unknown.
*/
class UnmodeledUseOperand extends NonPhiMemoryOperand {
override UnmodeledUseOperandTag tag;
override string toString() {
@@ -287,7 +330,7 @@ class UnmodeledUseOperand extends NonPhiOperand, MemoryOperand {
/**
 * The operand representing the target function of a `Call` instruction.
*/
class CallTargetOperand extends RegisterOperand {
override CallTargetOperandTag tag;
override string toString() {
@@ -300,7 +343,7 @@ class CallTargetOperand extends NonPhiOperand, RegisterOperand {
* positional arguments (represented by `PositionalArgumentOperand`) and the
* implicit `this` argument, if any (represented by `ThisArgumentOperand`).
*/
class ArgumentOperand extends RegisterOperand {
override ArgumentOperandTag tag;
}
@@ -383,9 +426,10 @@ class PhiInputOperand extends MemoryOperand, TPhiOperand {
PhiInstruction useInstr;
Instruction defInstr;
IRBlock predecessorBlock;
Overlap overlap;
PhiInputOperand() {
this = TPhiOperand(useInstr, defInstr, predecessorBlock, overlap)
}
override string toString() {
@@ -400,6 +444,10 @@ class PhiInputOperand extends MemoryOperand, TPhiOperand {
result = defInstr
}
override final Overlap getDefinitionOverlap() {
result = overlap
}
override final int getDumpSortOrder() {
result = 11 + getPredecessorBlock().getDisplayIndex()
}
@@ -423,10 +471,8 @@ class PhiInputOperand extends MemoryOperand, TPhiOperand {
/**
* The total operand of a Chi node, representing the previous value of the memory.
*/
class ChiTotalOperand extends NonPhiMemoryOperand {
override ChiTotalOperandTag tag;
override string toString() {
result = "ChiTotal"
@@ -441,10 +487,8 @@ class ChiTotalOperand extends MemoryOperand {
/**
* The partial operand of a Chi node, representing the value being written to part of the memory.
*/
class ChiPartialOperand extends NonPhiMemoryOperand {
override ChiPartialOperandTag tag;
override string toString() {
result = "ChiPartial"

View File

@@ -59,11 +59,17 @@ cached private module Cached {
)
}
cached Instruction getRegisterOperandDefinition(Instruction instruction, RegisterOperandTag tag) {
result = getInstructionTranslatedElement(instruction).getInstructionOperand(
getInstructionTag(instruction), tag)
}
cached Instruction getMemoryOperandDefinition(Instruction instruction, MemoryOperandTag tag, Overlap overlap) {
result = getInstructionTranslatedElement(instruction).getInstructionOperand(
getInstructionTag(instruction), tag) and
overlap instanceof MustTotallyOverlap
}
cached Type getInstructionOperandType(Instruction instruction, TypedOperandTag tag) {
// For all `LoadInstruction`s, the operand type of the `LoadOperand` is the same as
// the result type of the load.
@@ -80,8 +86,7 @@ cached private module Cached {
getInstructionTag(instruction), tag)
}
cached Instruction getPhiOperandDefinition(PhiInstruction instruction, IRBlock predecessorBlock, Overlap overlap) {
none()
}

View File

@@ -1,6 +1,8 @@
private import SSAConstructionInternal
private import OldIR
private import Alias
private import SSAConstruction
private import DebugSSA
/**
 * Property provider that dumps the memory access of each result. Useful for debugging SSA
@@ -8,13 +10,100 @@ private import Alias
*/
class PropertyProvider extends IRPropertyProvider {
override string getInstructionProperty(Instruction instruction, string key) {
exists(MemoryLocation location |
location = getResultMemoryLocation(instruction) and
(
key = "ResultMemoryLocation" and result = location.toString() or
key = "ResultVirtualVariable" and result = location.getVirtualVariable().toString()
)
)
or
exists(MemoryLocation location |
location = getOperandMemoryLocation(instruction.getAnOperand()) and
(
key = "OperandMemoryAccess" and result = location.toString() or
key = "OperandVirtualVariable" and result = location.getVirtualVariable().toString()
)
) or
exists(MemoryLocation useLocation, IRBlock defBlock, int defRank, int defIndex |
hasDefinitionAtRank(useLocation, _, defBlock, defRank, defIndex) and
defBlock.getInstruction(defIndex) = instruction and
key = "DefinitionRank[" + useLocation.toString() + "]" and
result = defRank.toString()
) or
exists(MemoryLocation useLocation, IRBlock useBlock, int useRank |
hasUseAtRank(useLocation, useBlock, useRank, instruction) and
key = "UseRank[" + useLocation.toString() + "]" and
result = useRank.toString()
) or
exists(MemoryLocation useLocation, IRBlock defBlock, int defRank, int defIndex |
hasDefinitionAtRank(useLocation, _, defBlock, defRank, defIndex) and
defBlock.getInstruction(defIndex) = instruction and
key = "DefinitionReachesUse[" + useLocation.toString() + "]" and
result = strictconcat(IRBlock useBlock, int useRank, int useIndex |
exists(Instruction useInstruction |
hasUseAtRank(useLocation, useBlock, useRank, useInstruction) and
useBlock.getInstruction(useIndex) = useInstruction and
definitionReachesUse(useLocation, defBlock, defRank, useBlock, useRank)
) |
useBlock.getDisplayIndex().toString() + "_" + useIndex, ", " order by useBlock.getDisplayIndex(), useIndex
)
)
}
override string getBlockProperty(IRBlock block, string key) {
exists(MemoryLocation useLocation, int defRank, int defIndex |
hasDefinitionAtRank(useLocation, _, block, defRank, defIndex) and
defIndex = -1 and
key = "DefinitionRank(Phi)[" + useLocation.toString() + "]" and
result = defRank.toString()
) or
exists(MemoryLocation useLocation, MemoryLocation defLocation, int defRank, int defIndex |
hasDefinitionAtRank(useLocation, defLocation, block, defRank, defIndex) and
defIndex = -1 and
key = "DefinitionReachesUse(Phi)[" + useLocation.toString() + "]" and
result = strictconcat(IRBlock useBlock, int useRank, int useIndex |
exists(Instruction useInstruction |
hasUseAtRank(useLocation, useBlock, useRank, useInstruction) and
useBlock.getInstruction(useIndex) = useInstruction and
definitionReachesUse(useLocation, block, defRank, useBlock, useRank) and
exists(getOverlap(defLocation, useLocation))
) |
useBlock.getDisplayIndex().toString() + "_" + useIndex, ", " order by useBlock.getDisplayIndex(), useIndex
)
) or
exists(MemoryLocation useLocation, IRBlock predBlock, IRBlock defBlock, int defIndex, Overlap overlap |
hasPhiOperandDefinition(_, useLocation, block, predBlock, defBlock, defIndex, overlap) and
key = "PhiUse[" + useLocation.toString() + " from " + predBlock.getDisplayIndex().toString() + "]" and
result = defBlock.getDisplayIndex().toString() + "_" + defIndex + " (" + overlap.toString() + ")"
) or
(
key = "LiveOnEntry" and
result = strictconcat(MemoryLocation useLocation |
locationLiveOnEntryToBlock(useLocation, block) |
useLocation.toString(), ", " order by useLocation.toString()
)
) or
(
key = "LiveOnExit" and
result = strictconcat(MemoryLocation useLocation |
locationLiveOnExitFromBlock(useLocation, block) |
useLocation.toString(), ", " order by useLocation.toString()
)
) or
(
key = "DefsLiveOnEntry" and
result = strictconcat(MemoryLocation defLocation |
definitionLiveOnEntryToBlock(defLocation, block) |
defLocation.toString(), ", " order by defLocation.toString()
)
) or
(
key = "DefsLiveOnExit" and
result = strictconcat(MemoryLocation defLocation |
definitionLiveOnExitFromBlock(defLocation, block) |
defLocation.toString(), ", " order by defLocation.toString()
)
)
}
}


@@ -2,6 +2,7 @@ import SSAConstructionInternal
import cpp
private import semmle.code.cpp.ir.implementation.Opcode
private import semmle.code.cpp.ir.internal.OperandTag
private import semmle.code.cpp.ir.internal.Overlap
private import NewIR
private class OldBlock = Reachability::ReachableBlock;
@@ -24,21 +25,6 @@ cached private module Cached {
instr = WrappedInstruction(result)
}
private Instruction getNewInstruction(OldInstruction instr) {
getOldInstruction(result) = instr
}
/**
* Gets the chi node corresponding to `instr` if one is present, or the new `Instruction`
* corresponding to `instr` if there is no `Chi` node.
*/
private Instruction getNewFinalInstruction(OldInstruction instr) {
result = Chi(instr)
or
not exists(Chi(instr)) and
result = getNewInstruction(instr)
}
private IRVariable getNewIRVariable(OldIR::IRVariable var) {
// This is just a type cast. Both classes derive from the same newtype.
result = var
@@ -48,8 +34,8 @@ cached private module Cached {
WrappedInstruction(OldInstruction oldInstruction) {
not oldInstruction instanceof OldIR::PhiInstruction
} or
Phi(OldBlock block, Alias::VirtualVariable vvar) {
hasPhiNode(vvar, block)
Phi(OldBlock block, Alias::MemoryLocation defLocation) {
definitionHasPhiNode(defLocation, block)
} or
Chi(OldInstruction oldInstruction) {
not oldInstruction instanceof OldIR::PhiInstruction and
@@ -73,33 +59,41 @@ cached private module Cached {
}
cached predicate hasModeledMemoryResult(Instruction instruction) {
exists(Alias::getResultMemoryAccess(getOldInstruction(instruction))) or
exists(Alias::getResultMemoryLocation(getOldInstruction(instruction))) or
instruction instanceof PhiInstruction or // Phis always have modeled results
instruction instanceof ChiInstruction // Chis always have modeled results
}
cached Instruction getInstructionOperandDefinition(Instruction instruction, OperandTag tag) {
exists(OldInstruction oldInstruction, OldIR::NonPhiOperand oldOperand |
cached Instruction getRegisterOperandDefinition(Instruction instruction, RegisterOperandTag tag) {
exists(OldInstruction oldInstruction, OldIR::RegisterOperand oldOperand |
oldInstruction = getOldInstruction(instruction) and
oldOperand = oldInstruction.getAnOperand() and
tag = oldOperand.getOperandTag() and
if oldOperand instanceof OldIR::MemoryOperand then (
result = getNewInstruction(oldOperand.getDefinitionInstruction())
)
}
cached Instruction getMemoryOperandDefinition(Instruction instruction, MemoryOperandTag tag, Overlap overlap) {
exists(OldInstruction oldInstruction, OldIR::NonPhiMemoryOperand oldOperand |
oldInstruction = getOldInstruction(instruction) and
oldOperand = oldInstruction.getAnOperand() and
tag = oldOperand.getOperandTag() and
(
(
if exists(Alias::getOperandMemoryAccess(oldOperand)) then (
exists(OldBlock useBlock, int useRank, Alias::VirtualVariable vvar,
OldBlock defBlock, int defRank, int defIndex |
vvar = Alias::getOperandMemoryAccess(oldOperand).getVirtualVariable() and
hasDefinitionAtRank(vvar, defBlock, defRank, defIndex) and
hasUseAtRank(vvar, useBlock, useRank, oldInstruction) and
definitionReachesUse(vvar, defBlock, defRank, useBlock, useRank) and
if defIndex >= 0 then
result = getNewFinalInstruction(defBlock.getInstruction(defIndex))
else
result = Phi(defBlock, vvar)
if exists(Alias::getOperandMemoryLocation(oldOperand)) then (
exists(OldBlock useBlock, int useRank, Alias::MemoryLocation useLocation, Alias::MemoryLocation defLocation,
OldBlock defBlock, int defRank, int defOffset |
useLocation = Alias::getOperandMemoryLocation(oldOperand) and
hasDefinitionAtRank(useLocation, defLocation, defBlock, defRank, defOffset) and
hasUseAtRank(useLocation, useBlock, useRank, oldInstruction) and
definitionReachesUse(useLocation, defBlock, defRank, useBlock, useRank) and
overlap = Alias::getOverlap(defLocation, useLocation) and
result = getDefinitionOrChiInstruction(defBlock, defOffset, defLocation)
)
)
else (
result = instruction.getEnclosingIRFunction().getUnmodeledDefinitionInstruction()
result = instruction.getEnclosingIRFunction().getUnmodeledDefinitionInstruction() and
overlap instanceof MustTotallyOverlap
)
) or
// Connect any definitions that are not being modeled in SSA to the
@@ -108,24 +102,26 @@ cached private module Cached {
instruction instanceof UnmodeledUseInstruction and
tag instanceof UnmodeledUseOperandTag and
oldDefinition = oldOperand.getDefinitionInstruction() and
not exists(Alias::getResultMemoryAccess(oldDefinition)) and
result = getNewInstruction(oldDefinition)
not exists(Alias::getResultMemoryLocation(oldDefinition)) and
result = getNewInstruction(oldDefinition) and
overlap instanceof MustTotallyOverlap
)
)
else
result = getNewInstruction(oldOperand.getDefinitionInstruction())
) or
instruction = Chi(getOldInstruction(result)) and
tag instanceof ChiPartialOperandTag
tag instanceof ChiPartialOperandTag and
overlap instanceof MustExactlyOverlap
or
exists(IRFunction f |
tag instanceof UnmodeledUseOperandTag and
result = f.getUnmodeledDefinitionInstruction() and
instruction = f.getUnmodeledUseInstruction()
instruction = f.getUnmodeledUseInstruction() and
overlap instanceof MustTotallyOverlap
)
or
tag instanceof ChiTotalOperandTag and
result = getChiInstructionTotalOperand(instruction)
result = getChiInstructionTotalOperand(instruction) and
overlap instanceof MustExactlyOverlap
}
cached Type getInstructionOperandType(Instruction instr, TypedOperandTag tag) {
@@ -148,35 +144,26 @@ cached private module Cached {
)
}
cached Instruction getPhiInstructionOperandDefinition(PhiInstruction instr,
IRBlock newPredecessorBlock) {
exists(Alias::VirtualVariable vvar, OldBlock phiBlock,
OldBlock defBlock, int defRank, int defIndex, OldBlock predBlock |
hasPhiNode(vvar, phiBlock) and
predBlock = phiBlock.getAFeasiblePredecessor() and
instr = Phi(phiBlock, vvar) and
cached Instruction getPhiOperandDefinition(PhiInstruction instr,
IRBlock newPredecessorBlock, Overlap overlap) {
exists(Alias::MemoryLocation defLocation, Alias::MemoryLocation useLocation, OldBlock phiBlock, OldBlock predBlock,
OldBlock defBlock, int defOffset |
hasPhiOperandDefinition(defLocation, useLocation, phiBlock, predBlock, defBlock, defOffset, overlap) and
instr = Phi(phiBlock, useLocation) and
newPredecessorBlock = getNewBlock(predBlock) and
hasDefinitionAtRank(vvar, defBlock, defRank, defIndex) and
definitionReachesEndOfBlock(vvar, defBlock, defRank, predBlock) and
if defIndex >= 0 then
result = getNewFinalInstruction(defBlock.getInstruction(defIndex))
else
result = Phi(defBlock, vvar)
result = getDefinitionOrChiInstruction(defBlock, defOffset, defLocation)
)
}
cached Instruction getChiInstructionTotalOperand(ChiInstruction chiInstr) {
exists(Alias::VirtualVariable vvar, OldInstruction oldInstr, OldBlock defBlock,
int defRank, int defIndex, OldBlock useBlock, int useRank |
exists(Alias::VirtualVariable vvar, OldInstruction oldInstr, Alias::MemoryLocation defLocation, OldBlock defBlock,
int defRank, int defOffset, OldBlock useBlock, int useRank |
chiInstr = Chi(oldInstr) and
vvar = Alias::getResultMemoryAccess(oldInstr).getVirtualVariable() and
hasDefinitionAtRank(vvar, defBlock, defRank, defIndex) and
vvar = Alias::getResultMemoryLocation(oldInstr).getVirtualVariable() and
hasDefinitionAtRank(vvar, defLocation, defBlock, defRank, defOffset) and
hasUseAtRank(vvar, useBlock, useRank, oldInstr) and
definitionReachesUse(vvar, defBlock, defRank, useBlock, useRank) and
if defIndex >= 0 then
result = getNewFinalInstruction(defBlock.getInstruction(defIndex))
else
result = Phi(defBlock, vvar)
result = getDefinitionOrChiInstruction(defBlock, defOffset, vvar)
)
}
@@ -274,9 +261,9 @@ cached private module Cached {
isGLValue = false
)
or
exists(Alias::VirtualVariable vvar |
instruction = Phi(_, vvar) and
type = vvar.getType() and
exists(Alias::MemoryLocation location |
instruction = Phi(_, location) and
type = location.getType() and
isGLValue = false
)
or
@@ -372,97 +359,248 @@ cached private module Cached {
result = getNewInstruction(oldInstruction)
)
}
}
private predicate ssa_variableUpdate(Alias::VirtualVariable vvar,
OldBlock block, int index, OldInstruction instr) {
block.getInstruction(index) = instr and
Alias::getResultMemoryAccess(instr).getVirtualVariable() = vvar
}
private Instruction getNewInstruction(OldInstruction instr) {
getOldInstruction(result) = instr
}
private predicate hasDefinition(Alias::VirtualVariable vvar, OldBlock block, int index) {
(
hasPhiNode(vvar, block) and
index = -1
) or
exists(Alias::MemoryAccess access, OldInstruction def |
access = Alias::getResultMemoryAccess(def) and
block.getInstruction(index) = def and
vvar = access.getVirtualVariable()
/**
* Holds if instruction `def` needs to have a `Chi` instruction inserted after it, to account for a partial definition
* of a virtual variable. The `Chi` instruction provides a definition of the entire virtual variable of which the
* original definition location is a member.
*/
private predicate hasChiNode(Alias::VirtualVariable vvar, OldInstruction def) {
exists(Alias::MemoryLocation defLocation |
defLocation = Alias::getResultMemoryLocation(def) and
defLocation.getVirtualVariable() = vvar and
// If the definition totally (or exactly) overlaps the virtual variable, then there's no need for a `Chi`
// instruction.
Alias::getOverlap(defLocation, vvar) instanceof MayPartiallyOverlap
)
}
private import PhiInsertion
/**
* Module to handle insertion of `Phi` instructions at the correct blocks. We insert a `Phi` instruction at the
* beginning of a block for a given location when that block is on the dominance frontier of a definition of the
* location and there is a use of that location reachable from that block without an intervening definition of the
* location.
* Within the approach outlined above, we treat a location slightly differently depending on whether or not it is a
* virtual variable. For a virtual variable, we will insert a `Phi` instruction on the dominance frontier if there is
* a use of any member location of that virtual variable that is reachable from the `Phi` instruction. For a location
* that is not a virtual variable, we insert a `Phi` instruction only if there is an exactly-overlapping use of the
* location reachable from the `Phi` instruction. This ensures that we insert a `Phi` instruction for a non-virtual
* variable only if doing so would allow dataflow analysis to get a more precise result than if we just used a `Phi`
* instruction for the virtual variable as a whole.
*/
private module PhiInsertion {
/**
* Holds if a `Phi` instruction needs to be inserted for location `defLocation` at the beginning of block `phiBlock`.
*/
predicate definitionHasPhiNode(Alias::MemoryLocation defLocation, OldBlock phiBlock) {
exists(OldBlock defBlock |
phiBlock = Dominance::getDominanceFrontier(defBlock) and
definitionHasDefinitionInBlock(defLocation, defBlock) and
/* We can also eliminate those nodes where the definition is not live on any incoming edge */
definitionLiveOnEntryToBlock(defLocation, phiBlock)
)
}
private predicate defUseRank(Alias::VirtualVariable vvar, OldBlock block, int rankIndex, int index) {
index = rank[rankIndex](int j | hasDefinition(vvar, block, j) or hasUse(vvar, block, j, _))
}
private predicate hasUse(Alias::VirtualVariable vvar, OldBlock block, int index,
OldInstruction use) {
exists(Alias::MemoryAccess access |
/**
* Holds if the memory location `defLocation` has a definition in block `block`, either because of an existing
* instruction, a `Phi` node, or a `Chi` node.
*/
private predicate definitionHasDefinitionInBlock(Alias::MemoryLocation defLocation, OldBlock block) {
definitionHasPhiNode(defLocation, block) or
exists(OldInstruction def, Alias::MemoryLocation resultLocation |
def.getBlock() = block and
resultLocation = Alias::getResultMemoryLocation(def) and
(
access = Alias::getOperandMemoryAccess(use.getAnOperand())
or
/*
* a partial write to a virtual variable is going to generate a use of that variable when
* Chi nodes are inserted, so we need to mark it as a use in the old IR
*/
access = Alias::getResultMemoryAccess(use) and
access.isPartialMemoryAccess()
) and
block.getInstruction(index) = use and
vvar = access.getVirtualVariable()
defLocation = resultLocation or
// For a virtual variable, any definition of a member location will either generate a `Chi` node that defines
// the virtual variable, or will totally overlap the virtual variable. Either way, treat this as a definition of
// the virtual variable.
defLocation = resultLocation.getVirtualVariable()
)
)
}
private predicate variableLiveOnEntryToBlock(Alias::VirtualVariable vvar, OldBlock block) {
/**
* Holds if there is a use at (`block`, `index`) that could consume the result of a `Phi` instruction for
* `defLocation`.
*/
private predicate definitionHasUse(Alias::MemoryLocation defLocation, OldBlock block, int index) {
exists(OldInstruction use |
block.getInstruction(index) = use and
if defLocation instanceof Alias::VirtualVariable then (
exists(Alias::MemoryLocation useLocation |
// For a virtual variable, any use of a location that is a member of the virtual variable counts as a use.
useLocation = Alias::getOperandMemoryLocation(use.getAnOperand()) and
defLocation = useLocation.getVirtualVariable()
) or
// A `Chi` instruction consumes the enclosing virtual variable of its use location.
hasChiNode(defLocation, use)
)
else (
// For other locations, only an exactly-overlapping use of the same location counts as a use.
defLocation = Alias::getOperandMemoryLocation(use.getAnOperand()) and
Alias::getOverlap(defLocation, defLocation) instanceof MustExactlyOverlap
)
)
}
/**
* Holds if the location `defLocation` is redefined at (`block`, `index`). A location is considered "redefined" if
* there is a definition that would prevent a previous definition of `defLocation` from being consumed as the operand
* of a `Phi` node that occurs after the redefinition.
*/
private predicate definitionHasRedefinition(Alias::MemoryLocation defLocation, OldBlock block, int index) {
exists(OldInstruction redef, Alias::MemoryLocation redefLocation |
block.getInstruction(index) = redef and
redefLocation = Alias::getResultMemoryLocation(redef) and
if defLocation instanceof Alias::VirtualVariable then (
// For a virtual variable, the definition may be consumed by any use of a location that is a member of the
// virtual variable. Thus, the definition is live until a subsequent redefinition of the entire virtual
// variable.
exists(Overlap overlap |
overlap = Alias::getOverlap(redefLocation, defLocation) and
not overlap instanceof MayPartiallyOverlap
)
)
else (
// For other locations, the definition may only be consumed by an exactly-overlapping use of the same location.
// Thus, the definition is live until a subsequent definition of any location that may overlap the original
// definition location.
exists(Alias::getOverlap(redefLocation, defLocation))
)
)
}
/**
* Holds if the definition `defLocation` is live on entry to block `block`. The definition is live if there is at
* least one use of that definition before any intervening instruction that redefines the definition location.
*/
predicate definitionLiveOnEntryToBlock(Alias::MemoryLocation defLocation, OldBlock block) {
exists(int firstAccess |
hasUse(vvar, block, firstAccess, _) and
definitionHasUse(defLocation, block, firstAccess) and
firstAccess = min(int index |
hasUse(vvar, block, index, _)
definitionHasUse(defLocation, block, index)
or
ssa_variableUpdate(vvar, block, index, _)
definitionHasRedefinition(defLocation, block, index)
)
)
or
(variableLiveOnExitFromBlock(vvar, block) and not ssa_variableUpdate(vvar, block, _, _))
}
pragma[noinline]
private predicate variableLiveOnExitFromBlock(Alias::VirtualVariable vvar, OldBlock block) {
variableLiveOnEntryToBlock(vvar, block.getAFeasibleSuccessor())
(definitionLiveOnExitFromBlock(defLocation, block) and not definitionHasRedefinition(defLocation, block, _))
}
/**
* Gets the rank index of a hypothetical use one instruction past the end of
* the block. This index can be used to determine if a definition reaches the
* end of the block, even if the definition is the last instruction in the
* block.
* Holds if the definition `defLocation` is live on exit from block `block`. The definition is live on exit if it is
* live on entry to any of the successors of `block`.
*/
private int exitRank(Alias::VirtualVariable vvar, OldBlock block) {
result = max(int rankIndex | defUseRank(vvar, block, rankIndex, _)) + 1
pragma[noinline]
predicate definitionLiveOnExitFromBlock(Alias::MemoryLocation defLocation, OldBlock block) {
definitionLiveOnEntryToBlock(defLocation, block.getAFeasibleSuccessor())
}
}
private predicate hasDefinitionAtRank(Alias::VirtualVariable vvar, OldBlock block, int rankIndex,
int instructionIndex) {
hasDefinition(vvar, block, instructionIndex) and
defUseRank(vvar, block, rankIndex, instructionIndex)
}
private import DefUse
private predicate hasUseAtRank(Alias::VirtualVariable vvar, OldBlock block, int rankIndex,
OldInstruction use) {
exists(int index |
hasUse(vvar, block, index, use) and
defUseRank(vvar, block, rankIndex, index)
/**
* Module containing the predicates that connect uses to their reaching definition. The reaching definitions are
* computed separately for each unique use `MemoryLocation`. An instruction is treated as a definition of a use location
* if the defined location overlaps the use location in any way. Thus, a single instruction may serve as a definition
* for multiple use locations, since a single definition location may overlap many use locations.
*
* Definitions and uses are identified by a block and an integer "offset". An offset of -1 indicates the definition
* from a `Phi` instruction at the beginning of the block. An offset of 2*i indicates a definition or use on the
* instruction at index `i` in the block. An offset of 2*i+1 indicates a definition or use on the `Chi` instruction that
* will be inserted immediately after the instruction at index `i` in the block.
*
* For a given use location, each definition and use is also assigned a "rank" within its block. The rank is simply the
* one-based index of that definition or use within the list of definitions and uses of that location within the block,
* ordered by offset. The rank allows the various reachability predicates to be computed more efficiently than they
* would if based solely on offset, since the set of possible ranks is dense while the set of possible offsets is
* potentially very sparse.
*/
module DefUse {
/**
* Gets the `Instruction` for the definition at offset `defOffset` in block `defBlock`.
*/
pragma[inline]
bindingset[defOffset, defLocation]
Instruction getDefinitionOrChiInstruction(OldBlock defBlock, int defOffset,
Alias::MemoryLocation defLocation) {
(
defOffset >= 0 and
exists(OldInstruction oldInstr |
oldInstr = defBlock.getInstruction(defOffset / 2) and
if (defOffset % 2) > 0 then (
// An odd offset corresponds to the `Chi` instruction.
result = Chi(oldInstr)
)
else (
// An even offset corresponds to the original instruction.
result = getNewInstruction(oldInstr)
)
)
) or
(
defOffset < 0 and
result = Phi(defBlock, defLocation)
)
}
/**
* Holds if the definition of `vvar` at `(block, defRank)` reaches the rank
* Gets the rank index of a hypothetical use one instruction past the end of
* the block. This index can be used to determine if a definition reaches the
* end of the block, even if the definition is the last instruction in the
* block.
*/
private int exitRank(Alias::MemoryLocation useLocation, OldBlock block) {
result = max(int rankIndex | defUseRank(useLocation, block, rankIndex, _)) + 1
}
/**
* Holds if a definition that overlaps `useLocation` at (`defBlock`, `defRank`) reaches the use of `useLocation` at
* (`useBlock`, `useRank`) without any intervening definitions that overlap `useLocation`, where `defBlock` and
* `useBlock` are the same block.
*/
private predicate definitionReachesUseWithinBlock(Alias::MemoryLocation useLocation, OldBlock defBlock,
int defRank, OldBlock useBlock, int useRank) {
defBlock = useBlock and
hasDefinitionAtRank(useLocation, _, defBlock, defRank, _) and
hasUseAtRank(useLocation, useBlock, useRank, _) and
definitionReachesRank(useLocation, defBlock, defRank, useRank)
}
/**
* Holds if a definition that overlaps `useLocation` at (`defBlock`, `defRank`) reaches the use of `useLocation` at
* (`useBlock`, `useRank`) without any intervening definitions that overlap `useLocation`.
*/
predicate definitionReachesUse(Alias::MemoryLocation useLocation, OldBlock defBlock,
int defRank, OldBlock useBlock, int useRank) {
hasUseAtRank(useLocation, useBlock, useRank, _) and
(
definitionReachesUseWithinBlock(useLocation, defBlock, defRank, useBlock,
useRank) or
(
definitionReachesEndOfBlock(useLocation, defBlock, defRank,
useBlock.getAFeasiblePredecessor()) and
not definitionReachesUseWithinBlock(useLocation, useBlock, _, useBlock, useRank)
)
)
}
/**
* Holds if the definition that overlaps `useLocation` at `(block, defRank)` reaches the rank
* index `reachesRank` in block `block`.
*/
private predicate definitionReachesRank(Alias::VirtualVariable vvar, OldBlock block, int defRank,
private predicate definitionReachesRank(Alias::MemoryLocation useLocation, OldBlock block, int defRank,
int reachesRank) {
hasDefinitionAtRank(vvar, block, defRank, _) and
reachesRank <= exitRank(vvar, block) and // Without this, the predicate would be infinite.
hasDefinitionAtRank(useLocation, _, block, defRank, _) and
reachesRank <= exitRank(useLocation, block) and // Without this, the predicate would be infinite.
(
// The def always reaches the next use, even if there is also a def on the
// use instruction.
@@ -470,87 +608,178 @@ cached private module Cached {
(
// If the def reached the previous rank, it also reaches the current rank,
// unless there was another def at the previous rank.
definitionReachesRank(vvar, block, defRank, reachesRank - 1) and
not hasDefinitionAtRank(vvar, block, reachesRank - 1, _)
definitionReachesRank(useLocation, block, defRank, reachesRank - 1) and
not hasDefinitionAtRank(useLocation, _, block, reachesRank - 1, _)
)
)
}
/**
* Holds if the definition of `vvar` at `(defBlock, defRank)` reaches the end of
* block `block`.
*/
private predicate definitionReachesEndOfBlock(Alias::VirtualVariable vvar, OldBlock defBlock,
* Holds if the definition that overlaps `useLocation` at `(defBlock, defRank)` reaches the end of
* block `block` without any intervening definitions that overlap `useLocation`.
*/
predicate definitionReachesEndOfBlock(Alias::MemoryLocation useLocation, OldBlock defBlock,
int defRank, OldBlock block) {
hasDefinitionAtRank(vvar, defBlock, defRank, _) and
hasDefinitionAtRank(useLocation, _, defBlock, defRank, _) and
(
(
// If we're looking at the def's own block, just see if it reaches the exit
// rank of the block.
block = defBlock and
variableLiveOnExitFromBlock(vvar, defBlock) and
definitionReachesRank(vvar, defBlock, defRank, exitRank(vvar, defBlock))
locationLiveOnExitFromBlock(useLocation, defBlock) and
definitionReachesRank(useLocation, defBlock, defRank, exitRank(useLocation, defBlock))
) or
exists(OldBlock idom |
definitionReachesEndOfBlock(vvar, defBlock, defRank, idom) and
noDefinitionsSinceIDominator(vvar, idom, block)
definitionReachesEndOfBlock(useLocation, defBlock, defRank, idom) and
noDefinitionsSinceIDominator(useLocation, idom, block)
)
)
}
pragma[noinline]
private predicate noDefinitionsSinceIDominator(Alias::VirtualVariable vvar, OldBlock idom,
private predicate noDefinitionsSinceIDominator(Alias::MemoryLocation useLocation, OldBlock idom,
OldBlock block) {
Dominance::blockImmediatelyDominates(idom, block) and // It is sufficient to traverse the dominator graph, cf. discussion above.
variableLiveOnExitFromBlock(vvar, block) and
not hasDefinition(vvar, block, _)
locationLiveOnExitFromBlock(useLocation, block) and
not hasDefinition(useLocation, _, block, _)
}
private predicate definitionReachesUseWithinBlock(Alias::VirtualVariable vvar, OldBlock defBlock,
int defRank, OldBlock useBlock, int useRank) {
defBlock = useBlock and
hasDefinitionAtRank(vvar, defBlock, defRank, _) and
hasUseAtRank(vvar, useBlock, useRank, _) and
definitionReachesRank(vvar, defBlock, defRank, useRank)
/**
* Holds if the specified `useLocation` is live on entry to `block`. This holds if there is a use of `useLocation`
* that is reachable from the start of `block` without passing through a definition that overlaps `useLocation`.
* Note that even a partially-overlapping definition blocks liveness, because such a definition will insert a `Chi`
* instruction whose result totally overlaps the location.
*/
predicate locationLiveOnEntryToBlock(Alias::MemoryLocation useLocation, OldBlock block) {
definitionHasPhiNode(useLocation, block) or
exists(int firstAccess |
hasUse(useLocation, block, firstAccess, _) and
firstAccess = min(int offset |
hasUse(useLocation, block, offset, _)
or
hasNonPhiDefinition(useLocation, _, block, offset)
)
) or
(locationLiveOnExitFromBlock(useLocation, block) and not hasNonPhiDefinition(useLocation, _, block, _))
}
private predicate definitionReachesUse(Alias::VirtualVariable vvar, OldBlock defBlock,
int defRank, OldBlock useBlock, int useRank) {
hasUseAtRank(vvar, useBlock, useRank, _) and
/**
* Holds if the specified `useLocation` is live on exit from `block`.
*/
pragma[noinline]
predicate locationLiveOnExitFromBlock(Alias::MemoryLocation useLocation, OldBlock block) {
locationLiveOnEntryToBlock(useLocation, block.getAFeasibleSuccessor())
}
/**
* Holds if there is a definition at offset `offset` in block `block` that overlaps memory location `useLocation`.
* This predicate does not include definitions for Phi nodes.
*/
private predicate hasNonPhiDefinition(Alias::MemoryLocation useLocation, Alias::MemoryLocation defLocation,
OldBlock block, int offset) {
exists(OldInstruction def, Overlap overlap, int index |
defLocation = Alias::getResultMemoryLocation(def) and
block.getInstruction(index) = def and
overlap = Alias::getOverlap(defLocation, useLocation) and
if overlap instanceof MayPartiallyOverlap then
offset = (index * 2) + 1 // The use will be connected to the definition on the `Chi` instruction.
else
offset = index * 2 // The use will be connected to the definition on the original instruction.
)
}
/**
* Holds if there is a definition at offset `offset` in block `block` that overlaps memory location `useLocation`.
* This predicate includes definitions for Phi nodes (at offset -1).
*/
private predicate hasDefinition(Alias::MemoryLocation useLocation, Alias::MemoryLocation defLocation, OldBlock block,
int offset) {
(
definitionReachesUseWithinBlock(vvar, defBlock, defRank, useBlock,
useRank) or
// If there is a Phi node for the use location itself, treat that as a definition at offset -1.
offset = -1 and
if definitionHasPhiNode(useLocation, block) then (
defLocation = useLocation
)
else (
definitionHasPhiNode(defLocation, block) and
defLocation = useLocation.getVirtualVariable()
)
) or
hasNonPhiDefinition(useLocation, defLocation, block, offset)
}
/**
* Holds if there is a definition at offset `offset` in block `block` that overlaps memory location `useLocation`.
* `rankIndex` is the rank of the definition as computed by `defUseRank()`.
*/
predicate hasDefinitionAtRank(Alias::MemoryLocation useLocation, Alias::MemoryLocation defLocation,
OldBlock block, int rankIndex, int offset) {
hasDefinition(useLocation, defLocation, block, offset) and
defUseRank(useLocation, block, rankIndex, offset)
}
/**
* Holds if there is a use of `useLocation` on instruction `use` at offset `offset` in block `block`.
*/
private predicate hasUse(Alias::MemoryLocation useLocation, OldBlock block, int offset, OldInstruction use) {
exists(int index |
block.getInstruction(index) = use and
(
definitionReachesEndOfBlock(vvar, defBlock, defRank,
useBlock.getAFeasiblePredecessor()) and
not definitionReachesUseWithinBlock(vvar, useBlock, _, useBlock, useRank)
// A direct use of the location.
useLocation = Alias::getOperandMemoryLocation(use.getAnOperand()) and offset = index * 2 or
// A `Chi` instruction will include a use of the virtual variable.
hasChiNode(useLocation, use) and offset = (index * 2) + 1
)
)
}
private predicate hasFrontierPhiNode(Alias::VirtualVariable vvar, OldBlock phiBlock) {
exists(OldBlock defBlock |
phiBlock = Dominance::getDominanceFrontier(defBlock) and
hasDefinition(vvar, defBlock, _) and
/* We can also eliminate those nodes where the variable is not live on any incoming edge */
variableLiveOnEntryToBlock(vvar, phiBlock)
/**
* Holds if there is a use of memory location `useLocation` on instruction `use` in block `block`. `rankIndex` is the
* rank of the use as computed by `defUseRank`.
*/
predicate hasUseAtRank(Alias::MemoryLocation useLocation, OldBlock block, int rankIndex, OldInstruction use) {
exists(int offset |
hasUse(useLocation, block, offset, use) and
defUseRank(useLocation, block, rankIndex, offset)
)
}
private predicate hasPhiNode(Alias::VirtualVariable vvar, OldBlock phiBlock) {
hasFrontierPhiNode(vvar, phiBlock)
//or ssa_sanitized_custom_phi_node(vvar, block)
/**
* Holds if there is a definition at offset `offset` in block `block` that overlaps memory location `useLocation`, or
* a use of `useLocation` at offset `offset` in block `block`. `rankIndex` is the sequence number of the definition
* or use within `block`, counting only uses of `useLocation` and definitions that overlap `useLocation`.
*/
private predicate defUseRank(Alias::MemoryLocation useLocation, OldBlock block, int rankIndex, int offset) {
offset = rank[rankIndex](int j | hasDefinition(useLocation, _, block, j) or hasUse(useLocation, block, j, _))
}
private predicate hasChiNode(Alias::VirtualVariable vvar, OldInstruction def) {
exists(Alias::MemoryAccess ma |
ma = Alias::getResultMemoryAccess(def) and
ma.isPartialMemoryAccess() and
ma.getVirtualVariable() = vvar
/**
* Holds if the `Phi` instruction for location `useLocation` at the beginning of block `phiBlock` has an operand along
* the incoming edge from `predBlock`, where that operand's definition is at offset `defOffset` in block `defBlock`,
* and overlaps the use operand with overlap relationship `overlap`.
*/
pragma[inline]
predicate hasPhiOperandDefinition(Alias::MemoryLocation defLocation, Alias::MemoryLocation useLocation,
OldBlock phiBlock, OldBlock predBlock, OldBlock defBlock, int defOffset, Overlap overlap) {
exists(int defRank |
definitionHasPhiNode(useLocation, phiBlock) and
predBlock = phiBlock.getAFeasiblePredecessor() and
hasDefinitionAtRank(useLocation, defLocation, defBlock, defRank, defOffset) and
definitionReachesEndOfBlock(useLocation, defBlock, defRank, predBlock) and
overlap = Alias::getOverlap(defLocation, useLocation)
)
}
}
/**
* Expose some of the internal predicates to PrintSSA.qll. We do this by publicly importing those modules in the
* `DebugSSA` module, which is then imported by PrintSSA.
*/
module DebugSSA {
import PhiInsertion
import DefUse
}
import CachedForDebugging
cached private module CachedForDebugging {
cached string getTempVariableUniqueId(IRTempVariable var) {
@@ -562,9 +791,16 @@ cached private module CachedForDebugging {
oldInstr = getOldInstruction(instr) and
result = "NonSSA: " + oldInstr.getUniqueId()
) or
exists(Alias::VirtualVariable vvar, OldBlock phiBlock |
instr = Phi(phiBlock, vvar) and
result = "Phi Block(" + phiBlock.getUniqueId() + "): " + vvar.getUniqueId()
exists(Alias::MemoryLocation location, OldBlock phiBlock, string specificity |
instr = Phi(phiBlock, location) and
result = "Phi Block(" + phiBlock.getUniqueId() + ")[" + specificity + "]: " + location.getUniqueId() and
if location instanceof Alias::VirtualVariable then (
// Sort Phi nodes for virtual variables before Phi nodes for member locations.
specificity = "g"
)
else (
specificity = "s"
)
) or
(
instr = Unreached(_) and

@@ -38,20 +38,20 @@ private predicate isVariableModeled(IRVariable var) {
)
}
private newtype TVirtualVariable =
MkVirtualVariable(IRVariable var) {
private newtype TMemoryLocation =
MkMemoryLocation(IRVariable var) {
isVariableModeled(var)
}
private VirtualVariable getVirtualVariable(IRVariable var) {
private MemoryLocation getMemoryLocation(IRVariable var) {
result.getIRVariable() = var
}
class VirtualVariable extends TVirtualVariable {
class MemoryLocation extends TMemoryLocation {
IRVariable var;
VirtualVariable() {
this = MkVirtualVariable(var)
MemoryLocation() {
this = MkMemoryLocation(var)
}
final string toString() {
@@ -62,6 +62,10 @@ class VirtualVariable extends TVirtualVariable {
result = var
}
final VirtualVariable getVirtualVariable() {
result = this
}
final Type getType() {
result = var.getType()
}
@@ -71,50 +75,25 @@ class VirtualVariable extends TVirtualVariable {
}
}
private newtype TMemoryAccess =
MkMemoryAccess(VirtualVariable vvar)
private MemoryAccess getMemoryAccess(IRVariable var) {
result.getVirtualVariable() = getVirtualVariable(var)
class VirtualVariable extends MemoryLocation {
}
class MemoryAccess extends TMemoryAccess {
VirtualVariable vvar;
MemoryAccess() {
this = MkMemoryAccess(vvar)
}
string toString() {
result = vvar.toString()
}
VirtualVariable getVirtualVariable() {
result = vvar
}
predicate isPartialMemoryAccess() {
none()
}
}
Overlap getOverlap(MemoryAccess def, MemoryAccess use) {
def.getVirtualVariable() = use.getVirtualVariable() and
result instanceof MustExactlyOverlap
Overlap getOverlap(MemoryLocation def, MemoryLocation use) {
def = use and result instanceof MustExactlyOverlap
or
none() // Avoid compiler error in SSAConstruction
}
MemoryAccess getResultMemoryAccess(Instruction instr) {
MemoryLocation getResultMemoryLocation(Instruction instr) {
exists(IRVariable var |
hasResultMemoryAccess(instr, var, _, _) and
result = getMemoryAccess(var)
result = getMemoryLocation(var)
)
}
MemoryAccess getOperandMemoryAccess(MemoryOperand operand) {
MemoryLocation getOperandMemoryLocation(MemoryOperand operand) {
exists(IRVariable var |
hasOperandMemoryAccess(operand, var, _, _) and
result = getMemoryAccess(var)
result = getMemoryLocation(var)
)
}

@@ -78,6 +78,10 @@ class AddressOperandTag extends RegisterOperandTag, TAddressOperand {
override final int getSortOrder() {
result = 0
}
override final string getLabel() {
result = "&:"
}
}
AddressOperandTag addressOperand() {
@@ -247,6 +251,10 @@ class CallTargetOperandTag extends RegisterOperandTag, TCallTargetOperand {
override final int getSortOrder() {
result = 10
}
override final string getLabel() {
result = "func:"
}
}
CallTargetOperandTag callTargetOperand() {
@@ -306,6 +314,10 @@ class PositionalArgumentOperandTag extends ArgumentOperandTag,
result = 12 + argIndex
}
override final string getLabel() {
result = argIndex.toString() + ":"
}
final int getArgIndex() {
result = argIndex
}
@@ -323,6 +335,10 @@ class ChiTotalOperandTag extends MemoryOperandTag, TChiTotalOperand {
override final int getSortOrder() {
result = 13
}
override final string getLabel() {
result = "total:"
}
}
ChiTotalOperandTag chiTotalOperand() {
@@ -337,6 +353,10 @@ class ChiPartialOperandTag extends MemoryOperandTag, TChiPartialOperand {
override final int getSortOrder() {
result = 14
}
override final string getLabel() {
result = "partial:"
}
}
ChiPartialOperandTag chiPartialOperand() {

@@ -140,17 +140,17 @@ void following_pointers(
sink(sourceStruct1.m1); // flow (due to lack of no-alias tracking)
twoIntFields s = { source(), source() };
// TODO: fix this by distinguishing between an AggregateLiteral that
// initializes an array and one that initializes a struct.
sink(s.m2); // no flow (due to limitations of the analysis)
sink(s.m2); // flow (AST dataflow misses this due to limitations of the analysis)
twoIntFields sArray[1] = { { source(), source() } };
// TODO: fix this like above
sink(sArray[0].m2); // no flow (due to limitations of the analysis)
twoIntFields sSwapped = { .m2 = source(), .m1 = 0 };
// TODO: fix this like above
sink(sSwapped.m2); // no flow (due to limitations of the analysis)
sink(sSwapped.m2); // flow (AST dataflow misses this due to limitations of the analysis)
sink(sourceFunctionPointer()); // no flow
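The comment changes in the hunk above record that IR dataflow now tracks flow through an aggregate initializer into a struct member read. A minimal sketch of that pattern, with `source`/`sink` as stand-ins for the taint endpoints used by these tests (the struct layout and names here mirror the test code but are otherwise assumptions):

```cpp
#include <cassert>

// Assumed shape, mirroring the `twoIntFields` struct in the test above.
struct twoIntFields { int m1; int m2; };

static int source() { return 42; }           // stand-in taint source
static int g_observed = 0;
static void sink(int v) { g_observed = v; }  // stand-in taint sink

// The aggregate initializer writes s.m2; field-sensitive SSA can connect
// the later read of s.m2 back to that write, which is the flow the updated
// "// flow" comments now expect.
int demo() {
    twoIntFields s = { source(), source() };
    sink(s.m2);
    return g_observed;
}
```

At runtime the sink simply observes the value written by the initializer; the interesting part is that the analysis models the member store and load as matching SSA definition and use.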

@@ -1,11 +1,12 @@
| test.cpp:89:28:89:34 | test.cpp:92:8:92:14 | IR only |
| test.cpp:100:13:100:18 | test.cpp:103:10:103:12 | AST only |
| test.cpp:109:9:109:14 | test.cpp:110:10:110:12 | IR only |
| test.cpp:120:9:120:20 | test.cpp:126:8:126:19 | AST only |
| test.cpp:122:18:122:30 | test.cpp:132:22:132:23 | IR only |
| test.cpp:122:18:122:30 | test.cpp:140:22:140:23 | IR only |
| test.cpp:136:27:136:32 | test.cpp:137:27:137:28 | AST only |
| test.cpp:136:27:136:32 | test.cpp:140:22:140:23 | AST only |
| test.cpp:142:32:142:37 | test.cpp:145:10:145:11 | IR only |
| test.cpp:151:35:151:40 | test.cpp:153:17:153:18 | IR only |
| test.cpp:395:17:395:22 | test.cpp:397:10:397:18 | AST only |
| test.cpp:421:13:421:18 | test.cpp:423:10:423:14 | AST only |
| test.cpp:430:48:430:54 | test.cpp:433:8:433:10 | AST only |

@@ -15,8 +15,12 @@
| test.cpp:90:8:90:14 | Load: source1 | test.cpp:89:28:89:34 | InitializeParameter: source1 |
| test.cpp:92:8:92:14 | Load: source1 | test.cpp:89:28:89:34 | InitializeParameter: source1 |
| test.cpp:110:10:110:12 | Load: (reference dereference) | test.cpp:109:9:109:14 | Call: call to source |
| test.cpp:126:8:126:19 | Convert: (const int *)... | test.cpp:120:9:120:20 | InitializeParameter: sourceArray1 |
| test.cpp:126:8:126:19 | Load: sourceArray1 | test.cpp:120:9:120:20 | InitializeParameter: sourceArray1 |
| test.cpp:132:22:132:23 | Load: m1 | test.cpp:122:18:122:30 | InitializeParameter: sourceStruct1 |
| test.cpp:140:22:140:23 | Load: m1 | test.cpp:122:18:122:30 | InitializeParameter: sourceStruct1 |
| test.cpp:145:10:145:11 | Load: m2 | test.cpp:142:32:142:37 | Call: call to source |
| test.cpp:153:17:153:18 | Load: m2 | test.cpp:151:35:151:40 | Call: call to source |
| test.cpp:188:8:188:8 | Load: y | test.cpp:186:27:186:32 | Call: call to source |
| test.cpp:192:8:192:8 | Load: s | test.cpp:199:33:199:38 | Call: call to source |
| test.cpp:200:8:200:8 | Load: y | test.cpp:199:33:199:38 | Call: call to source |

File diff suppressed because it is too large
@@ -5,84 +5,84 @@ ssa.cpp:
# 13| m0_1(unknown) = AliasedDefinition :
# 13| mu0_2(unknown) = UnmodeledDefinition :
# 13| r0_3(glval<Point *>) = VariableAddress[p] :
# 13| m0_4(Point *) = InitializeParameter[p] : r0_3
# 13| m0_4(Point *) = InitializeParameter[p] : &:r0_3
# 13| r0_5(glval<bool>) = VariableAddress[which1] :
# 13| m0_6(bool) = InitializeParameter[which1] : r0_5
# 13| m0_6(bool) = InitializeParameter[which1] : &:r0_5
# 13| r0_7(glval<bool>) = VariableAddress[which2] :
# 13| m0_8(bool) = InitializeParameter[which2] : r0_7
# 13| m0_8(bool) = InitializeParameter[which2] : &:r0_7
# 14| r0_9(glval<bool>) = VariableAddress[which1] :
# 14| r0_10(bool) = Load : r0_9, m0_6
# 14| r0_10(bool) = Load : &:r0_9, m0_6
# 14| v0_11(void) = ConditionalBranch : r0_10
#-----| False -> Block 2
#-----| True -> Block 1
# 15| Block 1
# 15| r1_0(glval<Point *>) = VariableAddress[p] :
# 15| r1_1(Point *) = Load : r1_0, m0_4
# 15| r1_1(Point *) = Load : &:r1_0, m0_4
# 15| r1_2(glval<int>) = FieldAddress[x] : r1_1
# 15| r1_3(int) = Load : r1_2, m0_1
# 15| r1_3(int) = Load : &:r1_2, ~m0_1
# 15| r1_4(int) = Constant[1] :
# 15| r1_5(int) = Add : r1_3, r1_4
# 15| m1_6(int) = Store : r1_2, r1_5
# 15| m1_7(unknown) = Chi : m0_1, m1_6
# 15| m1_6(int) = Store : &:r1_2, r1_5
# 15| m1_7(unknown) = Chi : total:m0_1, partial:m1_6
#-----| Goto -> Block 3
# 18| Block 2
# 18| r2_0(glval<Point *>) = VariableAddress[p] :
# 18| r2_1(Point *) = Load : r2_0, m0_4
# 18| r2_1(Point *) = Load : &:r2_0, m0_4
# 18| r2_2(glval<int>) = FieldAddress[y] : r2_1
# 18| r2_3(int) = Load : r2_2, m0_1
# 18| r2_3(int) = Load : &:r2_2, ~m0_1
# 18| r2_4(int) = Constant[1] :
# 18| r2_5(int) = Add : r2_3, r2_4
# 18| m2_6(int) = Store : r2_2, r2_5
# 18| m2_7(unknown) = Chi : m0_1, m2_6
# 18| m2_6(int) = Store : &:r2_2, r2_5
# 18| m2_7(unknown) = Chi : total:m0_1, partial:m2_6
#-----| Goto -> Block 3
# 21| Block 3
# 21| m3_0(unknown) = Phi : from 1:m1_7, from 2:m2_7
# 21| m3_0(unknown) = Phi : from 1:~m1_7, from 2:~m2_7
# 21| r3_1(glval<bool>) = VariableAddress[which2] :
# 21| r3_2(bool) = Load : r3_1, m0_8
# 21| r3_2(bool) = Load : &:r3_1, m0_8
# 21| v3_3(void) = ConditionalBranch : r3_2
#-----| False -> Block 5
#-----| True -> Block 4
# 22| Block 4
# 22| r4_0(glval<Point *>) = VariableAddress[p] :
# 22| r4_1(Point *) = Load : r4_0, m0_4
# 22| r4_1(Point *) = Load : &:r4_0, m0_4
# 22| r4_2(glval<int>) = FieldAddress[x] : r4_1
# 22| r4_3(int) = Load : r4_2, m3_0
# 22| r4_3(int) = Load : &:r4_2, ~m3_0
# 22| r4_4(int) = Constant[1] :
# 22| r4_5(int) = Add : r4_3, r4_4
# 22| m4_6(int) = Store : r4_2, r4_5
# 22| m4_7(unknown) = Chi : m3_0, m4_6
# 22| m4_6(int) = Store : &:r4_2, r4_5
# 22| m4_7(unknown) = Chi : total:m3_0, partial:m4_6
#-----| Goto -> Block 6
# 25| Block 5
# 25| r5_0(glval<Point *>) = VariableAddress[p] :
# 25| r5_1(Point *) = Load : r5_0, m0_4
# 25| r5_1(Point *) = Load : &:r5_0, m0_4
# 25| r5_2(glval<int>) = FieldAddress[y] : r5_1
# 25| r5_3(int) = Load : r5_2, m3_0
# 25| r5_3(int) = Load : &:r5_2, ~m3_0
# 25| r5_4(int) = Constant[1] :
# 25| r5_5(int) = Add : r5_3, r5_4
# 25| m5_6(int) = Store : r5_2, r5_5
# 25| m5_7(unknown) = Chi : m3_0, m5_6
# 25| m5_6(int) = Store : &:r5_2, r5_5
# 25| m5_7(unknown) = Chi : total:m3_0, partial:m5_6
#-----| Goto -> Block 6
# 28| Block 6
# 28| m6_0(unknown) = Phi : from 4:m4_7, from 5:m5_7
# 28| m6_0(unknown) = Phi : from 4:~m4_7, from 5:~m5_7
# 28| r6_1(glval<int>) = VariableAddress[#return] :
# 28| r6_2(glval<Point *>) = VariableAddress[p] :
# 28| r6_3(Point *) = Load : r6_2, m0_4
# 28| r6_3(Point *) = Load : &:r6_2, m0_4
# 28| r6_4(glval<int>) = FieldAddress[x] : r6_3
# 28| r6_5(int) = Load : r6_4, m6_0
# 28| r6_5(int) = Load : &:r6_4, ~m6_0
# 28| r6_6(glval<Point *>) = VariableAddress[p] :
# 28| r6_7(Point *) = Load : r6_6, m0_4
# 28| r6_7(Point *) = Load : &:r6_6, m0_4
# 28| r6_8(glval<int>) = FieldAddress[y] : r6_7
# 28| r6_9(int) = Load : r6_8, m6_0
# 28| r6_9(int) = Load : &:r6_8, ~m6_0
# 28| r6_10(int) = Add : r6_5, r6_9
# 28| m6_11(int) = Store : r6_1, r6_10
# 28| m6_11(int) = Store : &:r6_1, r6_10
# 13| r6_12(glval<int>) = VariableAddress[#return] :
# 13| v6_13(void) = ReturnValue : r6_12, m6_11
# 13| v6_13(void) = ReturnValue : &:r6_12, m6_11
# 13| v6_14(void) = UnmodeledUse : mu*
# 13| v6_15(void) = ExitFunction :
@@ -95,9 +95,9 @@ ssa.cpp:
# 34| v0_4(void) = NoOp :
# 35| r0_5(glval<int>) = VariableAddress[#return] :
# 35| r0_6(int) = Constant[0] :
# 35| m0_7(int) = Store : r0_5, r0_6
# 35| m0_7(int) = Store : &:r0_5, r0_6
# 31| r0_8(glval<int>) = VariableAddress[#return] :
# 31| v0_9(void) = ReturnValue : r0_8, m0_7
# 31| v0_9(void) = ReturnValue : &:r0_8, m0_7
# 31| v0_10(void) = UnmodeledUse : mu*
# 31| v0_11(void) = ExitFunction :
@@ -107,15 +107,15 @@ ssa.cpp:
# 38| m0_1(unknown) = AliasedDefinition :
# 38| mu0_2(unknown) = UnmodeledDefinition :
# 38| r0_3(glval<bool>) = VariableAddress[b] :
# 38| m0_4(bool) = InitializeParameter[b] : r0_3
# 38| m0_4(bool) = InitializeParameter[b] : &:r0_3
# 39| r0_5(glval<int>) = VariableAddress[x] :
# 39| r0_6(int) = Constant[5] :
# 39| m0_7(int) = Store : r0_5, r0_6
# 39| m0_7(int) = Store : &:r0_5, r0_6
# 40| r0_8(glval<int>) = VariableAddress[y] :
# 40| r0_9(int) = Constant[10] :
# 40| m0_10(int) = Store : r0_8, r0_9
# 40| m0_10(int) = Store : &:r0_8, r0_9
# 41| r0_11(glval<bool>) = VariableAddress[b] :
# 41| r0_12(bool) = Load : r0_11, m0_4
# 41| r0_12(bool) = Load : &:r0_11, m0_4
# 41| v0_13(void) = ConditionalBranch : r0_12
#-----| False -> Block 4
#-----| True -> Block 2
@@ -123,15 +123,15 @@ ssa.cpp:
# 38| Block 1
# 38| m1_0(int) = Phi : from 3:m3_2, from 5:m5_2
# 38| r1_1(glval<int>) = VariableAddress[#return] :
# 38| v1_2(void) = ReturnValue : r1_1, m1_0
# 38| v1_2(void) = ReturnValue : &:r1_1, m1_0
# 38| v1_3(void) = UnmodeledUse : mu*
# 38| v1_4(void) = ExitFunction :
# 42| Block 2
# 42| r2_0(glval<int>) = VariableAddress[x] :
# 42| r2_1(int) = Load : r2_0, m0_7
# 42| r2_1(int) = Load : &:r2_0, m0_7
# 42| r2_2(glval<int>) = VariableAddress[y] :
# 42| r2_3(int) = Load : r2_2, m0_10
# 42| r2_3(int) = Load : &:r2_2, m0_10
# 42| r2_4(bool) = CompareEQ : r2_1, r2_3
# 42| v2_5(void) = ConditionalBranch : r2_4
#-----| False -> Block 3
@@ -140,14 +140,14 @@ ssa.cpp:
# 46| Block 3
# 46| r3_0(glval<int>) = VariableAddress[#return] :
# 46| r3_1(int) = Constant[0] :
# 46| m3_2(int) = Store : r3_0, r3_1
# 46| m3_2(int) = Store : &:r3_0, r3_1
#-----| Goto -> Block 1
# 50| Block 4
# 50| r4_0(glval<int>) = VariableAddress[x] :
# 50| r4_1(int) = Load : r4_0, m0_7
# 50| r4_1(int) = Load : &:r4_0, m0_7
# 50| r4_2(glval<int>) = VariableAddress[y] :
# 50| r4_3(int) = Load : r4_2, m0_10
# 50| r4_3(int) = Load : &:r4_2, m0_10
# 50| r4_4(bool) = CompareLT : r4_1, r4_3
# 50| v4_5(void) = ConditionalBranch : r4_4
#-----| False -> Block 6
@@ -156,7 +156,7 @@ ssa.cpp:
# 51| Block 5
# 51| r5_0(glval<int>) = VariableAddress[#return] :
# 51| r5_1(int) = Constant[0] :
# 51| m5_2(int) = Store : r5_0, r5_1
# 51| m5_2(int) = Store : &:r5_0, r5_1
#-----| Goto -> Block 1
# 38| Block 6
@@ -169,12 +169,12 @@ ssa.cpp:
# 59| mu0_2(unknown) = UnmodeledDefinition :
# 60| r0_3(glval<int>) = VariableAddress[i] :
# 60| r0_4(int) = Constant[0] :
# 60| m0_5(int) = Store : r0_3, r0_4
# 60| m0_5(int) = Store : &:r0_3, r0_4
# 62| r0_6(glval<int>) = VariableAddress[i] :
# 62| r0_7(int) = Load : r0_6, m0_5
# 62| r0_7(int) = Load : &:r0_6, m0_5
# 62| r0_8(int) = Constant[1] :
# 62| r0_9(int) = Add : r0_7, r0_8
# 62| m0_10(int) = Store : r0_6, r0_9
# 62| m0_10(int) = Store : &:r0_6, r0_9
# 63| r0_11(bool) = Constant[0] :
# 63| v0_12(void) = ConditionalBranch : r0_11
#-----| False -> Block 1
@@ -183,10 +183,10 @@ ssa.cpp:
# 65| Block 1
# 65| r1_0(glval<int>) = VariableAddress[#return] :
# 65| r1_1(glval<int>) = VariableAddress[i] :
# 65| r1_2(int) = Load : r1_1, m0_10
# 65| m1_3(int) = Store : r1_0, r1_2
# 65| r1_2(int) = Load : &:r1_1, m0_10
# 65| m1_3(int) = Store : &:r1_0, r1_2
# 59| r1_4(glval<int>) = VariableAddress[#return] :
# 59| v1_5(void) = ReturnValue : r1_4, m1_3
# 59| v1_5(void) = ReturnValue : &:r1_4, m1_3
# 59| v1_6(void) = UnmodeledUse : mu*
# 59| v1_7(void) = ExitFunction :
@@ -199,20 +199,20 @@ ssa.cpp:
# 68| m0_1(unknown) = AliasedDefinition :
# 68| mu0_2(unknown) = UnmodeledDefinition :
# 68| r0_3(glval<int>) = VariableAddress[n] :
# 68| m0_4(int) = InitializeParameter[n] : r0_3
# 68| m0_4(int) = InitializeParameter[n] : &:r0_3
# 68| r0_5(glval<char *>) = VariableAddress[p] :
# 68| m0_6(char *) = InitializeParameter[p] : r0_5
# 68| m0_6(char *) = InitializeParameter[p] : &:r0_5
#-----| Goto -> Block 3
# 70| Block 1
# 70| r1_0(char) = Constant[0] :
# 70| r1_1(glval<char *>) = VariableAddress[p] :
# 70| r1_2(char *) = Load : r1_1, m3_2
# 70| r1_2(char *) = Load : &:r1_1, m3_2
# 70| r1_3(int) = Constant[1] :
# 70| r1_4(char *) = PointerAdd[1] : r1_2, r1_3
# 70| m1_5(char *) = Store : r1_1, r1_4
# 70| m1_6(char) = Store : r1_2, r1_0
# 70| m1_7(unknown) = Chi : m3_0, m1_6
# 70| m1_5(char *) = Store : &:r1_1, r1_4
# 70| m1_6(char) = Store : &:r1_2, r1_0
# 70| m1_7(unknown) = Chi : total:m3_0, partial:m1_6
#-----| Goto (back edge) -> Block 3
# 71| Block 2
@@ -222,173 +222,492 @@ ssa.cpp:
# 68| v2_3(void) = ExitFunction :
# 69| Block 3
# 69| m3_0(unknown) = Phi : from 0:m0_1, from 1:m1_7
# 69| m3_0(unknown) = Phi : from 0:~m0_1, from 1:~m1_7
# 69| m3_1(int) = Phi : from 0:m0_4, from 1:m3_7
# 69| m3_2(char *) = Phi : from 0:m0_6, from 1:m1_5
# 69| r3_3(glval<int>) = VariableAddress[n] :
# 69| r3_4(int) = Load : r3_3, m3_1
# 69| r3_4(int) = Load : &:r3_3, m3_1
# 69| r3_5(int) = Constant[1] :
# 69| r3_6(int) = Sub : r3_4, r3_5
# 69| m3_7(int) = Store : r3_3, r3_6
# 69| m3_7(int) = Store : &:r3_3, r3_6
# 69| r3_8(int) = Constant[0] :
# 69| r3_9(bool) = CompareGT : r3_4, r3_8
# 69| v3_10(void) = ConditionalBranch : r3_9
#-----| False -> Block 2
#-----| True -> Block 1
# 75| void MustExactlyOverlap(Point)
# 75| void ScalarPhi(bool)
# 75| Block 0
# 75| v0_0(void) = EnterFunction :
# 75| m0_1(unknown) = AliasedDefinition :
# 75| mu0_2(unknown) = UnmodeledDefinition :
# 75| r0_3(glval<Point>) = VariableAddress[a] :
# 75| m0_4(Point) = InitializeParameter[a] : r0_3
# 76| r0_5(glval<Point>) = VariableAddress[b] :
# 76| r0_6(glval<Point>) = VariableAddress[a] :
# 76| r0_7(Point) = Load : r0_6, m0_4
# 76| m0_8(Point) = Store : r0_5, r0_7
# 77| v0_9(void) = NoOp :
# 75| v0_10(void) = ReturnVoid :
# 75| v0_11(void) = UnmodeledUse : mu*
# 75| v0_12(void) = ExitFunction :
# 75| r0_3(glval<bool>) = VariableAddress[b] :
# 75| m0_4(bool) = InitializeParameter[b] : &:r0_3
# 76| r0_5(glval<int>) = VariableAddress[x] :
# 76| r0_6(int) = Constant[0] :
# 76| m0_7(int) = Store : &:r0_5, r0_6
# 77| r0_8(glval<int>) = VariableAddress[y] :
# 77| r0_9(int) = Constant[1] :
# 77| m0_10(int) = Store : &:r0_8, r0_9
# 78| r0_11(glval<int>) = VariableAddress[z] :
# 78| r0_12(int) = Constant[2] :
# 78| m0_13(int) = Store : &:r0_11, r0_12
# 79| r0_14(glval<bool>) = VariableAddress[b] :
# 79| r0_15(bool) = Load : &:r0_14, m0_4
# 79| v0_16(void) = ConditionalBranch : r0_15
#-----| False -> Block 2
#-----| True -> Block 1
# 79| void MustExactlyOverlapEscaped(Point)
# 79| Block 0
# 79| v0_0(void) = EnterFunction :
# 79| m0_1(unknown) = AliasedDefinition :
# 79| mu0_2(unknown) = UnmodeledDefinition :
# 79| r0_3(glval<Point>) = VariableAddress[a] :
# 79| m0_4(Point) = InitializeParameter[a] : r0_3
# 79| m0_5(unknown) = Chi : m0_1, m0_4
# 80| r0_6(glval<Point>) = VariableAddress[b] :
# 80| r0_7(glval<Point>) = VariableAddress[a] :
# 80| r0_8(Point) = Load : r0_7, m0_5
# 80| m0_9(Point) = Store : r0_6, r0_8
# 81| r0_10(glval<unknown>) = FunctionAddress[Escape] :
# 81| r0_11(glval<Point>) = VariableAddress[a] :
# 81| r0_12(void *) = Convert : r0_11
# 81| v0_13(void) = Call : r0_10, r0_12
# 81| m0_14(unknown) = ^CallSideEffect : m0_5
# 81| m0_15(unknown) = Chi : m0_5, m0_14
# 82| v0_16(void) = NoOp :
# 79| v0_17(void) = ReturnVoid :
# 79| v0_18(void) = UnmodeledUse : mu*
# 79| v0_19(void) = ExitFunction :
# 80| Block 1
# 80| r1_0(int) = Constant[3] :
# 80| r1_1(glval<int>) = VariableAddress[x] :
# 80| m1_2(int) = Store : &:r1_1, r1_0
# 81| r1_3(int) = Constant[4] :
# 81| r1_4(glval<int>) = VariableAddress[y] :
# 81| m1_5(int) = Store : &:r1_4, r1_3
#-----| Goto -> Block 3
# 84| void MustTotallyOverlap(Point)
# 84| Block 0
# 84| v0_0(void) = EnterFunction :
# 84| m0_1(unknown) = AliasedDefinition :
# 84| mu0_2(unknown) = UnmodeledDefinition :
# 84| r0_3(glval<Point>) = VariableAddress[a] :
# 84| m0_4(Point) = InitializeParameter[a] : r0_3
# 85| r0_5(glval<int>) = VariableAddress[x] :
# 85| r0_6(glval<Point>) = VariableAddress[a] :
# 85| r0_7(glval<int>) = FieldAddress[x] : r0_6
# 85| r0_8(int) = Load : r0_7, m0_4
# 85| m0_9(int) = Store : r0_5, r0_8
# 86| r0_10(glval<int>) = VariableAddress[y] :
# 86| r0_11(glval<Point>) = VariableAddress[a] :
# 86| r0_12(glval<int>) = FieldAddress[y] : r0_11
# 86| r0_13(int) = Load : r0_12, m0_4
# 86| m0_14(int) = Store : r0_10, r0_13
# 87| v0_15(void) = NoOp :
# 84| v0_16(void) = ReturnVoid :
# 84| v0_17(void) = UnmodeledUse : mu*
# 84| v0_18(void) = ExitFunction :
# 84| Block 2
# 84| r2_0(int) = Constant[5] :
# 84| r2_1(glval<int>) = VariableAddress[x] :
# 84| m2_2(int) = Store : &:r2_1, r2_0
#-----| Goto -> Block 3
# 89| void MustTotallyOverlapEscaped(Point)
# 89| Block 0
# 89| v0_0(void) = EnterFunction :
# 89| m0_1(unknown) = AliasedDefinition :
# 89| mu0_2(unknown) = UnmodeledDefinition :
# 89| r0_3(glval<Point>) = VariableAddress[a] :
# 89| m0_4(Point) = InitializeParameter[a] : r0_3
# 89| m0_5(unknown) = Chi : m0_1, m0_4
# 90| r0_6(glval<int>) = VariableAddress[x] :
# 90| r0_7(glval<Point>) = VariableAddress[a] :
# 90| r0_8(glval<int>) = FieldAddress[x] : r0_7
# 90| r0_9(int) = Load : r0_8, m0_5
# 90| m0_10(int) = Store : r0_6, r0_9
# 91| r0_11(glval<int>) = VariableAddress[y] :
# 91| r0_12(glval<Point>) = VariableAddress[a] :
# 91| r0_13(glval<int>) = FieldAddress[y] : r0_12
# 91| r0_14(int) = Load : r0_13, m0_5
# 91| m0_15(int) = Store : r0_11, r0_14
# 92| r0_16(glval<unknown>) = FunctionAddress[Escape] :
# 92| r0_17(glval<Point>) = VariableAddress[a] :
# 92| r0_18(void *) = Convert : r0_17
# 92| v0_19(void) = Call : r0_16, r0_18
# 92| m0_20(unknown) = ^CallSideEffect : m0_5
# 92| m0_21(unknown) = Chi : m0_5, m0_20
# 93| v0_22(void) = NoOp :
# 89| v0_23(void) = ReturnVoid :
# 89| v0_24(void) = UnmodeledUse : mu*
# 89| v0_25(void) = ExitFunction :
# 86| Block 3
# 86| m3_0(int) = Phi : from 1:m1_2, from 2:m2_2
# 86| m3_1(int) = Phi : from 1:m1_5, from 2:m0_10
# 86| r3_2(glval<int>) = VariableAddress[x_merge] :
# 86| r3_3(glval<int>) = VariableAddress[x] :
# 86| r3_4(int) = Load : &:r3_3, m3_0
# 86| m3_5(int) = Store : &:r3_2, r3_4
# 87| r3_6(glval<int>) = VariableAddress[y_merge] :
# 87| r3_7(glval<int>) = VariableAddress[y] :
# 87| r3_8(int) = Load : &:r3_7, m3_1
# 87| m3_9(int) = Store : &:r3_6, r3_8
# 88| r3_10(glval<int>) = VariableAddress[z_merge] :
# 88| r3_11(glval<int>) = VariableAddress[z] :
# 88| r3_12(int) = Load : &:r3_11, m0_13
# 88| m3_13(int) = Store : &:r3_10, r3_12
# 89| v3_14(void) = NoOp :
# 75| v3_15(void) = ReturnVoid :
# 75| v3_16(void) = UnmodeledUse : mu*
# 75| v3_17(void) = ExitFunction :
# 95| void MayPartiallyOverlap(int, int)
# 91| void MustExactlyOverlap(Point)
# 91| Block 0
# 91| v0_0(void) = EnterFunction :
# 91| m0_1(unknown) = AliasedDefinition :
# 91| mu0_2(unknown) = UnmodeledDefinition :
# 91| r0_3(glval<Point>) = VariableAddress[a] :
# 91| m0_4(Point) = InitializeParameter[a] : &:r0_3
# 92| r0_5(glval<Point>) = VariableAddress[b] :
# 92| r0_6(glval<Point>) = VariableAddress[a] :
# 92| r0_7(Point) = Load : &:r0_6, m0_4
# 92| m0_8(Point) = Store : &:r0_5, r0_7
# 93| v0_9(void) = NoOp :
# 91| v0_10(void) = ReturnVoid :
# 91| v0_11(void) = UnmodeledUse : mu*
# 91| v0_12(void) = ExitFunction :
# 95| void MustExactlyOverlapEscaped(Point)
# 95| Block 0
# 95| v0_0(void) = EnterFunction :
# 95| m0_1(unknown) = AliasedDefinition :
# 95| mu0_2(unknown) = UnmodeledDefinition :
# 95| r0_3(glval<int>) = VariableAddress[x] :
# 95| m0_4(int) = InitializeParameter[x] : r0_3
# 95| r0_5(glval<int>) = VariableAddress[y] :
# 95| m0_6(int) = InitializeParameter[y] : r0_5
# 96| r0_7(glval<Point>) = VariableAddress[a] :
# 96| m0_8(Point) = Uninitialized[a] : r0_7
# 96| r0_9(glval<int>) = FieldAddress[x] : r0_7
# 96| r0_10(glval<int>) = VariableAddress[x] :
# 96| r0_11(int) = Load : r0_10, m0_4
# 96| m0_12(int) = Store : r0_9, r0_11
# 96| m0_13(Point) = Chi : m0_8, m0_12
# 96| r0_14(glval<int>) = FieldAddress[y] : r0_7
# 96| r0_15(glval<int>) = VariableAddress[y] :
# 96| r0_16(int) = Load : r0_15, m0_6
# 96| m0_17(int) = Store : r0_14, r0_16
# 96| m0_18(Point) = Chi : m0_13, m0_17
# 97| r0_19(glval<Point>) = VariableAddress[b] :
# 97| r0_20(glval<Point>) = VariableAddress[a] :
# 97| r0_21(Point) = Load : r0_20, m0_18
# 97| m0_22(Point) = Store : r0_19, r0_21
# 98| v0_23(void) = NoOp :
# 95| v0_24(void) = ReturnVoid :
# 95| v0_25(void) = UnmodeledUse : mu*
# 95| v0_26(void) = ExitFunction :
# 95| v0_0(void) = EnterFunction :
# 95| m0_1(unknown) = AliasedDefinition :
# 95| mu0_2(unknown) = UnmodeledDefinition :
# 95| r0_3(glval<Point>) = VariableAddress[a] :
# 95| m0_4(Point) = InitializeParameter[a] : &:r0_3
# 95| m0_5(unknown) = Chi : total:m0_1, partial:m0_4
# 96| r0_6(glval<Point>) = VariableAddress[b] :
# 96| r0_7(glval<Point>) = VariableAddress[a] :
# 96| r0_8(Point) = Load : &:r0_7, m0_4
# 96| m0_9(Point) = Store : &:r0_6, r0_8
# 97| r0_10(glval<unknown>) = FunctionAddress[Escape] :
# 97| r0_11(glval<Point>) = VariableAddress[a] :
# 97| r0_12(void *) = Convert : r0_11
# 97| v0_13(void) = Call : func:r0_10, 0:r0_12
# 97| m0_14(unknown) = ^CallSideEffect : ~m0_5
# 97| m0_15(unknown) = Chi : total:m0_5, partial:m0_14
# 98| v0_16(void) = NoOp :
# 95| v0_17(void) = ReturnVoid :
# 95| v0_18(void) = UnmodeledUse : mu*
# 95| v0_19(void) = ExitFunction :
# 100| void MayPartiallyOverlapEscaped(int, int)
# 100| void MustTotallyOverlap(Point)
# 100| Block 0
# 100| v0_0(void) = EnterFunction :
# 100| m0_1(unknown) = AliasedDefinition :
# 100| mu0_2(unknown) = UnmodeledDefinition :
# 100| r0_3(glval<int>) = VariableAddress[x] :
# 100| m0_4(int) = InitializeParameter[x] : r0_3
# 100| r0_5(glval<int>) = VariableAddress[y] :
# 100| m0_6(int) = InitializeParameter[y] : r0_5
# 101| r0_7(glval<Point>) = VariableAddress[a] :
# 101| m0_8(Point) = Uninitialized[a] : r0_7
# 101| m0_9(unknown) = Chi : m0_1, m0_8
# 101| r0_10(glval<int>) = FieldAddress[x] : r0_7
# 101| r0_11(glval<int>) = VariableAddress[x] :
# 101| r0_12(int) = Load : r0_11, m0_4
# 101| m0_13(int) = Store : r0_10, r0_12
# 101| m0_14(unknown) = Chi : m0_9, m0_13
# 101| r0_15(glval<int>) = FieldAddress[y] : r0_7
# 101| r0_16(glval<int>) = VariableAddress[y] :
# 101| r0_17(int) = Load : r0_16, m0_6
# 101| m0_18(int) = Store : r0_15, r0_17
# 101| m0_19(unknown) = Chi : m0_14, m0_18
# 102| r0_20(glval<Point>) = VariableAddress[b] :
# 102| r0_21(glval<Point>) = VariableAddress[a] :
# 102| r0_22(Point) = Load : r0_21, m0_19
# 102| m0_23(Point) = Store : r0_20, r0_22
# 103| r0_24(glval<unknown>) = FunctionAddress[Escape] :
# 103| r0_25(glval<Point>) = VariableAddress[a] :
# 103| r0_26(void *) = Convert : r0_25
# 103| v0_27(void) = Call : r0_24, r0_26
# 103| m0_28(unknown) = ^CallSideEffect : m0_19
# 103| m0_29(unknown) = Chi : m0_19, m0_28
# 104| v0_30(void) = NoOp :
# 100| v0_31(void) = ReturnVoid :
# 100| v0_32(void) = UnmodeledUse : mu*
# 100| v0_33(void) = ExitFunction :
# 100| v0_0(void) = EnterFunction :
# 100| m0_1(unknown) = AliasedDefinition :
# 100| mu0_2(unknown) = UnmodeledDefinition :
# 100| r0_3(glval<Point>) = VariableAddress[a] :
# 100| m0_4(Point) = InitializeParameter[a] : &:r0_3
# 101| r0_5(glval<int>) = VariableAddress[x] :
# 101| r0_6(glval<Point>) = VariableAddress[a] :
# 101| r0_7(glval<int>) = FieldAddress[x] : r0_6
# 101| r0_8(int) = Load : &:r0_7, ~m0_4
# 101| m0_9(int) = Store : &:r0_5, r0_8
# 102| r0_10(glval<int>) = VariableAddress[y] :
# 102| r0_11(glval<Point>) = VariableAddress[a] :
# 102| r0_12(glval<int>) = FieldAddress[y] : r0_11
# 102| r0_13(int) = Load : &:r0_12, ~m0_4
# 102| m0_14(int) = Store : &:r0_10, r0_13
# 103| v0_15(void) = NoOp :
# 100| v0_16(void) = ReturnVoid :
# 100| v0_17(void) = UnmodeledUse : mu*
# 100| v0_18(void) = ExitFunction :
# 105| void MustTotallyOverlapEscaped(Point)
# 105| Block 0
# 105| v0_0(void) = EnterFunction :
# 105| m0_1(unknown) = AliasedDefinition :
# 105| mu0_2(unknown) = UnmodeledDefinition :
# 105| r0_3(glval<Point>) = VariableAddress[a] :
# 105| m0_4(Point) = InitializeParameter[a] : &:r0_3
# 105| m0_5(unknown) = Chi : total:m0_1, partial:m0_4
# 106| r0_6(glval<int>) = VariableAddress[x] :
# 106| r0_7(glval<Point>) = VariableAddress[a] :
# 106| r0_8(glval<int>) = FieldAddress[x] : r0_7
# 106| r0_9(int) = Load : &:r0_8, ~m0_4
# 106| m0_10(int) = Store : &:r0_6, r0_9
# 107| r0_11(glval<int>) = VariableAddress[y] :
# 107| r0_12(glval<Point>) = VariableAddress[a] :
# 107| r0_13(glval<int>) = FieldAddress[y] : r0_12
# 107| r0_14(int) = Load : &:r0_13, ~m0_4
# 107| m0_15(int) = Store : &:r0_11, r0_14
# 108| r0_16(glval<unknown>) = FunctionAddress[Escape] :
# 108| r0_17(glval<Point>) = VariableAddress[a] :
# 108| r0_18(void *) = Convert : r0_17
# 108| v0_19(void) = Call : func:r0_16, 0:r0_18
# 108| m0_20(unknown) = ^CallSideEffect : ~m0_5
# 108| m0_21(unknown) = Chi : total:m0_5, partial:m0_20
# 109| v0_22(void) = NoOp :
# 105| v0_23(void) = ReturnVoid :
# 105| v0_24(void) = UnmodeledUse : mu*
# 105| v0_25(void) = ExitFunction :
# 111| void MayPartiallyOverlap(int, int)
# 111| Block 0
# 111| v0_0(void) = EnterFunction :
# 111| m0_1(unknown) = AliasedDefinition :
# 111| mu0_2(unknown) = UnmodeledDefinition :
# 111| r0_3(glval<int>) = VariableAddress[x] :
# 111| m0_4(int) = InitializeParameter[x] : &:r0_3
# 111| r0_5(glval<int>) = VariableAddress[y] :
# 111| m0_6(int) = InitializeParameter[y] : &:r0_5
# 112| r0_7(glval<Point>) = VariableAddress[a] :
# 112| m0_8(Point) = Uninitialized[a] : &:r0_7
# 112| r0_9(glval<int>) = FieldAddress[x] : r0_7
# 112| r0_10(glval<int>) = VariableAddress[x] :
# 112| r0_11(int) = Load : &:r0_10, m0_4
# 112| m0_12(int) = Store : &:r0_9, r0_11
# 112| m0_13(Point) = Chi : total:m0_8, partial:m0_12
# 112| r0_14(glval<int>) = FieldAddress[y] : r0_7
# 112| r0_15(glval<int>) = VariableAddress[y] :
# 112| r0_16(int) = Load : &:r0_15, m0_6
# 112| m0_17(int) = Store : &:r0_14, r0_16
# 112| m0_18(Point) = Chi : total:m0_13, partial:m0_17
# 113| r0_19(glval<Point>) = VariableAddress[b] :
# 113| r0_20(glval<Point>) = VariableAddress[a] :
# 113| r0_21(Point) = Load : &:r0_20, ~m0_18
# 113| m0_22(Point) = Store : &:r0_19, r0_21
# 114| v0_23(void) = NoOp :
# 111| v0_24(void) = ReturnVoid :
# 111| v0_25(void) = UnmodeledUse : mu*
# 111| v0_26(void) = ExitFunction :
# 116| void MayPartiallyOverlapEscaped(int, int)
# 116| Block 0
# 116| v0_0(void) = EnterFunction :
# 116| m0_1(unknown) = AliasedDefinition :
# 116| mu0_2(unknown) = UnmodeledDefinition :
# 116| r0_3(glval<int>) = VariableAddress[x] :
# 116| m0_4(int) = InitializeParameter[x] : &:r0_3
# 116| r0_5(glval<int>) = VariableAddress[y] :
# 116| m0_6(int) = InitializeParameter[y] : &:r0_5
# 117| r0_7(glval<Point>) = VariableAddress[a] :
# 117| m0_8(Point) = Uninitialized[a] : &:r0_7
# 117| m0_9(unknown) = Chi : total:m0_1, partial:m0_8
# 117| r0_10(glval<int>) = FieldAddress[x] : r0_7
# 117| r0_11(glval<int>) = VariableAddress[x] :
# 117| r0_12(int) = Load : &:r0_11, m0_4
# 117| m0_13(int) = Store : &:r0_10, r0_12
# 117| m0_14(unknown) = Chi : total:m0_9, partial:m0_13
# 117| r0_15(glval<int>) = FieldAddress[y] : r0_7
# 117| r0_16(glval<int>) = VariableAddress[y] :
# 117| r0_17(int) = Load : &:r0_16, m0_6
# 117| m0_18(int) = Store : &:r0_15, r0_17
# 117| m0_19(unknown) = Chi : total:m0_14, partial:m0_18
# 118| r0_20(glval<Point>) = VariableAddress[b] :
# 118| r0_21(glval<Point>) = VariableAddress[a] :
# 118| r0_22(Point) = Load : &:r0_21, ~m0_19
# 118| m0_23(Point) = Store : &:r0_20, r0_22
# 119| r0_24(glval<unknown>) = FunctionAddress[Escape] :
# 119| r0_25(glval<Point>) = VariableAddress[a] :
# 119| r0_26(void *) = Convert : r0_25
# 119| v0_27(void) = Call : func:r0_24, 0:r0_26
# 119| m0_28(unknown) = ^CallSideEffect : ~m0_19
# 119| m0_29(unknown) = Chi : total:m0_19, partial:m0_28
# 120| v0_30(void) = NoOp :
# 116| v0_31(void) = ReturnVoid :
# 116| v0_32(void) = UnmodeledUse : mu*
# 116| v0_33(void) = ExitFunction :
# 122| void MergeMustExactlyOverlap(bool, int, int)
# 122| Block 0
# 122| v0_0(void) = EnterFunction :
# 122| m0_1(unknown) = AliasedDefinition :
# 122| mu0_2(unknown) = UnmodeledDefinition :
# 122| r0_3(glval<bool>) = VariableAddress[c] :
# 122| m0_4(bool) = InitializeParameter[c] : &:r0_3
# 122| r0_5(glval<int>) = VariableAddress[x1] :
# 122| m0_6(int) = InitializeParameter[x1] : &:r0_5
# 122| r0_7(glval<int>) = VariableAddress[x2] :
# 122| m0_8(int) = InitializeParameter[x2] : &:r0_7
# 123| r0_9(glval<Point>) = VariableAddress[a] :
# 123| m0_10(Point) = Uninitialized[a] : &:r0_9
# 123| r0_11(glval<int>) = FieldAddress[x] : r0_9
# 123| r0_12(int) = Constant[0] :
# 123| m0_13(int) = Store : &:r0_11, r0_12
# 123| m0_14(Point) = Chi : total:m0_10, partial:m0_13
# 123| r0_15(glval<int>) = FieldAddress[y] : r0_9
# 123| r0_16(int) = Constant[0] :
# 123| m0_17(int) = Store : &:r0_15, r0_16
# 123| m0_18(Point) = Chi : total:m0_14, partial:m0_17
# 124| r0_19(glval<bool>) = VariableAddress[c] :
# 124| r0_20(bool) = Load : &:r0_19, m0_4
# 124| v0_21(void) = ConditionalBranch : r0_20
#-----| False -> Block 2
#-----| True -> Block 1
# 125| Block 1
# 125| r1_0(glval<int>) = VariableAddress[x1] :
# 125| r1_1(int) = Load : &:r1_0, m0_6
# 125| r1_2(glval<Point>) = VariableAddress[a] :
# 125| r1_3(glval<int>) = FieldAddress[x] : r1_2
# 125| m1_4(int) = Store : &:r1_3, r1_1
# 125| m1_5(Point) = Chi : total:m0_18, partial:m1_4
#-----| Goto -> Block 3
# 128| Block 2
# 128| r2_0(glval<int>) = VariableAddress[x2] :
# 128| r2_1(int) = Load : &:r2_0, m0_8
# 128| r2_2(glval<Point>) = VariableAddress[a] :
# 128| r2_3(glval<int>) = FieldAddress[x] : r2_2
# 128| m2_4(int) = Store : &:r2_3, r2_1
# 128| m2_5(Point) = Chi : total:m0_18, partial:m2_4
#-----| Goto -> Block 3
# 130| Block 3
# 130| m3_0(Point) = Phi : from 1:~m1_5, from 2:~m2_5
# 130| m3_1(int) = Phi : from 1:m1_4, from 2:m2_4
# 130| r3_2(glval<int>) = VariableAddress[x] :
# 130| r3_3(glval<Point>) = VariableAddress[a] :
# 130| r3_4(glval<int>) = FieldAddress[x] : r3_3
# 130| r3_5(int) = Load : &:r3_4, m3_1
# 130| m3_6(int) = Store : &:r3_2, r3_5
# 131| r3_7(glval<Point>) = VariableAddress[b] :
# 131| r3_8(glval<Point>) = VariableAddress[a] :
# 131| r3_9(Point) = Load : &:r3_8, m3_0
# 131| m3_10(Point) = Store : &:r3_7, r3_9
# 132| v3_11(void) = NoOp :
# 122| v3_12(void) = ReturnVoid :
# 122| v3_13(void) = UnmodeledUse : mu*
# 122| v3_14(void) = ExitFunction :
# 134| void MergeMustExactlyWithMustTotallyOverlap(bool, Point, int)
# 134| Block 0
# 134| v0_0(void) = EnterFunction :
# 134| m0_1(unknown) = AliasedDefinition :
# 134| mu0_2(unknown) = UnmodeledDefinition :
# 134| r0_3(glval<bool>) = VariableAddress[c] :
# 134| m0_4(bool) = InitializeParameter[c] : &:r0_3
# 134| r0_5(glval<Point>) = VariableAddress[p] :
# 134| m0_6(Point) = InitializeParameter[p] : &:r0_5
# 134| r0_7(glval<int>) = VariableAddress[x1] :
# 134| m0_8(int) = InitializeParameter[x1] : &:r0_7
# 135| r0_9(glval<Point>) = VariableAddress[a] :
# 135| m0_10(Point) = Uninitialized[a] : &:r0_9
# 135| r0_11(glval<int>) = FieldAddress[x] : r0_9
# 135| r0_12(int) = Constant[0] :
# 135| m0_13(int) = Store : &:r0_11, r0_12
# 135| m0_14(Point) = Chi : total:m0_10, partial:m0_13
# 135| r0_15(glval<int>) = FieldAddress[y] : r0_9
# 135| r0_16(int) = Constant[0] :
# 135| m0_17(int) = Store : &:r0_15, r0_16
# 135| m0_18(Point) = Chi : total:m0_14, partial:m0_17
# 136| r0_19(glval<bool>) = VariableAddress[c] :
# 136| r0_20(bool) = Load : &:r0_19, m0_4
# 136| v0_21(void) = ConditionalBranch : r0_20
#-----| False -> Block 2
#-----| True -> Block 1
# 137| Block 1
# 137| r1_0(glval<int>) = VariableAddress[x1] :
# 137| r1_1(int) = Load : &:r1_0, m0_8
# 137| r1_2(glval<Point>) = VariableAddress[a] :
# 137| r1_3(glval<int>) = FieldAddress[x] : r1_2
# 137| m1_4(int) = Store : &:r1_3, r1_1
# 137| m1_5(Point) = Chi : total:m0_18, partial:m1_4
#-----| Goto -> Block 3
# 140| Block 2
# 140| r2_0(glval<Point>) = VariableAddress[p] :
# 140| r2_1(Point) = Load : &:r2_0, m0_6
# 140| r2_2(glval<Point>) = VariableAddress[a] :
# 140| m2_3(Point) = Store : &:r2_2, r2_1
#-----| Goto -> Block 3
# 142| Block 3
# 142| m3_0(Point) = Phi : from 1:~m1_5, from 2:m2_3
# 142| m3_1(int) = Phi : from 1:m1_4, from 2:~m2_3
# 142| r3_2(glval<int>) = VariableAddress[x] :
# 142| r3_3(glval<Point>) = VariableAddress[a] :
# 142| r3_4(glval<int>) = FieldAddress[x] : r3_3
# 142| r3_5(int) = Load : &:r3_4, m3_1
# 142| m3_6(int) = Store : &:r3_2, r3_5
# 143| v3_7(void) = NoOp :
# 134| v3_8(void) = ReturnVoid :
# 134| v3_9(void) = UnmodeledUse : mu*
# 134| v3_10(void) = ExitFunction :
# 145| void MergeMustExactlyWithMayPartiallyOverlap(bool, Point, int)
# 145| Block 0
# 145| v0_0(void) = EnterFunction :
# 145| m0_1(unknown) = AliasedDefinition :
# 145| mu0_2(unknown) = UnmodeledDefinition :
# 145| r0_3(glval<bool>) = VariableAddress[c] :
# 145| m0_4(bool) = InitializeParameter[c] : &:r0_3
# 145| r0_5(glval<Point>) = VariableAddress[p] :
# 145| m0_6(Point) = InitializeParameter[p] : &:r0_5
# 145| r0_7(glval<int>) = VariableAddress[x1] :
# 145| m0_8(int) = InitializeParameter[x1] : &:r0_7
# 146| r0_9(glval<Point>) = VariableAddress[a] :
# 146| m0_10(Point) = Uninitialized[a] : &:r0_9
# 146| r0_11(glval<int>) = FieldAddress[x] : r0_9
# 146| r0_12(int) = Constant[0] :
# 146| m0_13(int) = Store : &:r0_11, r0_12
# 146| m0_14(Point) = Chi : total:m0_10, partial:m0_13
# 146| r0_15(glval<int>) = FieldAddress[y] : r0_9
# 146| r0_16(int) = Constant[0] :
# 146| m0_17(int) = Store : &:r0_15, r0_16
# 146| m0_18(Point) = Chi : total:m0_14, partial:m0_17
# 147| r0_19(glval<bool>) = VariableAddress[c] :
# 147| r0_20(bool) = Load : &:r0_19, m0_4
# 147| v0_21(void) = ConditionalBranch : r0_20
#-----| False -> Block 2
#-----| True -> Block 1
# 148| Block 1
# 148| r1_0(glval<int>) = VariableAddress[x1] :
# 148| r1_1(int) = Load : &:r1_0, m0_8
# 148| r1_2(glval<Point>) = VariableAddress[a] :
# 148| r1_3(glval<int>) = FieldAddress[x] : r1_2
# 148| m1_4(int) = Store : &:r1_3, r1_1
# 148| m1_5(Point) = Chi : total:m0_18, partial:m1_4
#-----| Goto -> Block 3
# 151| Block 2
# 151| r2_0(glval<Point>) = VariableAddress[p] :
# 151| r2_1(Point) = Load : &:r2_0, m0_6
# 151| r2_2(glval<Point>) = VariableAddress[a] :
# 151| m2_3(Point) = Store : &:r2_2, r2_1
#-----| Goto -> Block 3
# 153| Block 3
# 153| m3_0(Point) = Phi : from 1:~m1_5, from 2:m2_3
# 153| r3_1(glval<Point>) = VariableAddress[b] :
# 153| r3_2(glval<Point>) = VariableAddress[a] :
# 153| r3_3(Point) = Load : &:r3_2, m3_0
# 153| m3_4(Point) = Store : &:r3_1, r3_3
# 154| v3_5(void) = NoOp :
# 145| v3_6(void) = ReturnVoid :
# 145| v3_7(void) = UnmodeledUse : mu*
# 145| v3_8(void) = ExitFunction :
# 156| void MergeMustTotallyOverlapWithMayPartiallyOverlap(bool, Rect, int)
# 156| Block 0
# 156| v0_0(void) = EnterFunction :
# 156| m0_1(unknown) = AliasedDefinition :
# 156| mu0_2(unknown) = UnmodeledDefinition :
# 156| r0_3(glval<bool>) = VariableAddress[c] :
# 156| m0_4(bool) = InitializeParameter[c] : &:r0_3
# 156| r0_5(glval<Rect>) = VariableAddress[r] :
# 156| m0_6(Rect) = InitializeParameter[r] : &:r0_5
# 156| r0_7(glval<int>) = VariableAddress[x1] :
# 156| m0_8(int) = InitializeParameter[x1] : &:r0_7
# 157| r0_9(glval<Rect>) = VariableAddress[a] :
# 157| m0_10(Rect) = Uninitialized[a] : &:r0_9
# 157| r0_11(glval<Point>) = FieldAddress[topLeft] : r0_9
# 157| r0_12(Point) = Constant[0] :
# 157| m0_13(Point) = Store : &:r0_11, r0_12
# 157| m0_14(Rect) = Chi : total:m0_10, partial:m0_13
# 157| r0_15(glval<Point>) = FieldAddress[bottomRight] : r0_9
# 157| r0_16(Point) = Constant[0] :
# 157| m0_17(Point) = Store : &:r0_15, r0_16
# 157| m0_18(Rect) = Chi : total:m0_14, partial:m0_17
# 158| r0_19(glval<bool>) = VariableAddress[c] :
# 158| r0_20(bool) = Load : &:r0_19, m0_4
# 158| v0_21(void) = ConditionalBranch : r0_20
#-----| False -> Block 2
#-----| True -> Block 1
# 159| Block 1
# 159| r1_0(glval<int>) = VariableAddress[x1] :
# 159| r1_1(int) = Load : &:r1_0, m0_8
# 159| r1_2(glval<Rect>) = VariableAddress[a] :
# 159| r1_3(glval<Point>) = FieldAddress[topLeft] : r1_2
# 159| r1_4(glval<int>) = FieldAddress[x] : r1_3
# 159| m1_5(int) = Store : &:r1_4, r1_1
# 159| m1_6(Rect) = Chi : total:m0_18, partial:m1_5
#-----| Goto -> Block 3
# 162| Block 2
# 162| r2_0(glval<Rect>) = VariableAddress[r] :
# 162| r2_1(Rect) = Load : &:r2_0, m0_6
# 162| r2_2(glval<Rect>) = VariableAddress[a] :
# 162| m2_3(Rect) = Store : &:r2_2, r2_1
#-----| Goto -> Block 3
# 164| Block 3
# 164| m3_0(Rect) = Phi : from 1:~m1_6, from 2:m2_3
# 164| r3_1(glval<Point>) = VariableAddress[b] :
# 164| r3_2(glval<Rect>) = VariableAddress[a] :
# 164| r3_3(glval<Point>) = FieldAddress[topLeft] : r3_2
# 164| r3_4(Point) = Load : &:r3_3, ~m3_0
# 164| m3_5(Point) = Store : &:r3_1, r3_4
# 165| v3_6(void) = NoOp :
# 156| v3_7(void) = ReturnVoid :
# 156| v3_8(void) = UnmodeledUse : mu*
# 156| v3_9(void) = ExitFunction :
# 171| void WrapperStruct(Wrapper)
# 171| Block 0
# 171| v0_0(void) = EnterFunction :
# 171| m0_1(unknown) = AliasedDefinition :
# 171| mu0_2(unknown) = UnmodeledDefinition :
# 171| r0_3(glval<Wrapper>) = VariableAddress[w] :
# 171| m0_4(Wrapper) = InitializeParameter[w] : &:r0_3
# 172| r0_5(glval<Wrapper>) = VariableAddress[x] :
# 172| r0_6(glval<Wrapper>) = VariableAddress[w] :
# 172| r0_7(Wrapper) = Load : &:r0_6, m0_4
# 172| m0_8(Wrapper) = Store : &:r0_5, r0_7
# 173| r0_9(glval<int>) = VariableAddress[a] :
# 173| r0_10(glval<Wrapper>) = VariableAddress[w] :
# 173| r0_11(glval<int>) = FieldAddress[f] : r0_10
# 173| r0_12(int) = Load : &:r0_11, ~m0_4
# 173| m0_13(int) = Store : &:r0_9, r0_12
# 174| r0_14(int) = Constant[5] :
# 174| r0_15(glval<Wrapper>) = VariableAddress[w] :
# 174| r0_16(glval<int>) = FieldAddress[f] : r0_15
# 174| m0_17(int) = Store : &:r0_16, r0_14
# 175| r0_18(glval<Wrapper>) = VariableAddress[w] :
# 175| r0_19(glval<int>) = FieldAddress[f] : r0_18
# 175| r0_20(int) = Load : &:r0_19, m0_17
# 175| r0_21(glval<int>) = VariableAddress[a] :
# 175| m0_22(int) = Store : &:r0_21, r0_20
# 176| r0_23(glval<Wrapper>) = VariableAddress[w] :
# 176| r0_24(Wrapper) = Load : &:r0_23, ~m0_17
# 176| r0_25(glval<Wrapper>) = VariableAddress[x] :
# 176| m0_26(Wrapper) = Store : &:r0_25, r0_24
# 177| v0_27(void) = NoOp :
# 171| v0_28(void) = ReturnVoid :
# 171| v0_29(void) = UnmodeledUse : mu*
# 171| v0_30(void) = ExitFunction :
@@ -72,6 +72,22 @@ void chiNodeAtEndOfLoop(int n, char* p) {
void Escape(void* p);
void ScalarPhi(bool b) {
int x = 0;
int y = 1;
int z = 2;
if (b) {
x = 3;
y = 4;
}
else {
x = 5;
}
int x_merge = x;
int y_merge = y;
int z_merge = z;
}
void MustExactlyOverlap(Point a) {
Point b = a;
}
@@ -102,3 +118,60 @@ void MayPartiallyOverlapEscaped(int x, int y) {
Point b = a;
Escape(&a);
}
void MergeMustExactlyOverlap(bool c, int x1, int x2) {
Point a = {};
if (c) {
a.x = x1;
}
else {
a.x = x2;
}
int x = a.x; // Both reaching defs must exactly overlap.
Point b = a;
}
void MergeMustExactlyWithMustTotallyOverlap(bool c, Point p, int x1) {
Point a = {};
if (c) {
a.x = x1;
}
else {
a = p;
}
int x = a.x; // Only one reaching def must exactly overlap, but we should still get a Phi for it.
}
void MergeMustExactlyWithMayPartiallyOverlap(bool c, Point p, int x1) {
Point a = {};
if (c) {
a.x = x1;
}
else {
a = p;
}
Point b = a; // Only one reaching def must exactly overlap, but we should still get a Phi for it.
}
void MergeMustTotallyOverlapWithMayPartiallyOverlap(bool c, Rect r, int x1) {
Rect a = {};
if (c) {
a.topLeft.x = x1;
}
else {
a = r;
}
Point b = a.topLeft; // Neither reaching def must exactly overlap, so we'll just get a Phi of the virtual variable.
}
struct Wrapper {
int f;
};
void WrapperStruct(Wrapper w) {
Wrapper x = w; // MustExactlyOverlap
int a = w.f; // MustTotallyOverlap, because the types don't match
w.f = 5;
a = w.f; // MustExactlyOverlap
x = w; // MustTotallyOverlap
}
@@ -5,78 +5,78 @@ ssa.cpp:
# 13| mu0_1(unknown) = AliasedDefinition :
# 13| mu0_2(unknown) = UnmodeledDefinition :
# 13| r0_3(glval<Point *>) = VariableAddress[p] :
# 13| m0_4(Point *) = InitializeParameter[p] : r0_3
# 13| m0_4(Point *) = InitializeParameter[p] : &:r0_3
# 13| r0_5(glval<bool>) = VariableAddress[which1] :
# 13| m0_6(bool) = InitializeParameter[which1] : r0_5
# 13| m0_6(bool) = InitializeParameter[which1] : &:r0_5
# 13| r0_7(glval<bool>) = VariableAddress[which2] :
# 13| m0_8(bool) = InitializeParameter[which2] : r0_7
# 13| m0_8(bool) = InitializeParameter[which2] : &:r0_7
# 14| r0_9(glval<bool>) = VariableAddress[which1] :
# 14| r0_10(bool) = Load : r0_9, m0_6
# 14| r0_10(bool) = Load : &:r0_9, m0_6
# 14| v0_11(void) = ConditionalBranch : r0_10
#-----| False -> Block 2
#-----| True -> Block 1
# 15| Block 1
# 15| r1_0(glval<Point *>) = VariableAddress[p] :
# 15| r1_1(Point *) = Load : r1_0, m0_4
# 15| r1_1(Point *) = Load : &:r1_0, m0_4
# 15| r1_2(glval<int>) = FieldAddress[x] : r1_1
# 15| r1_3(int) = Load : r1_2, mu0_2
# 15| r1_3(int) = Load : &:r1_2, ~mu0_2
# 15| r1_4(int) = Constant[1] :
# 15| r1_5(int) = Add : r1_3, r1_4
# 15| mu1_6(int) = Store : r1_2, r1_5
# 15| mu1_6(int) = Store : &:r1_2, r1_5
#-----| Goto -> Block 3
# 18| Block 2
# 18| r2_0(glval<Point *>) = VariableAddress[p] :
# 18| r2_1(Point *) = Load : r2_0, m0_4
# 18| r2_1(Point *) = Load : &:r2_0, m0_4
# 18| r2_2(glval<int>) = FieldAddress[y] : r2_1
# 18| r2_3(int) = Load : r2_2, mu0_2
# 18| r2_3(int) = Load : &:r2_2, ~mu0_2
# 18| r2_4(int) = Constant[1] :
# 18| r2_5(int) = Add : r2_3, r2_4
# 18| mu2_6(int) = Store : r2_2, r2_5
# 18| mu2_6(int) = Store : &:r2_2, r2_5
#-----| Goto -> Block 3
# 21| Block 3
# 21| r3_0(glval<bool>) = VariableAddress[which2] :
# 21| r3_1(bool) = Load : r3_0, m0_8
# 21| r3_1(bool) = Load : &:r3_0, m0_8
# 21| v3_2(void) = ConditionalBranch : r3_1
#-----| False -> Block 5
#-----| True -> Block 4
# 22| Block 4
# 22| r4_0(glval<Point *>) = VariableAddress[p] :
# 22| r4_1(Point *) = Load : r4_0, m0_4
# 22| r4_1(Point *) = Load : &:r4_0, m0_4
# 22| r4_2(glval<int>) = FieldAddress[x] : r4_1
# 22| r4_3(int) = Load : r4_2, mu0_2
# 22| r4_3(int) = Load : &:r4_2, ~mu0_2
# 22| r4_4(int) = Constant[1] :
# 22| r4_5(int) = Add : r4_3, r4_4
# 22| mu4_6(int) = Store : r4_2, r4_5
# 22| mu4_6(int) = Store : &:r4_2, r4_5
#-----| Goto -> Block 6
# 25| Block 5
# 25| r5_0(glval<Point *>) = VariableAddress[p] :
# 25| r5_1(Point *) = Load : r5_0, m0_4
# 25| r5_1(Point *) = Load : &:r5_0, m0_4
# 25| r5_2(glval<int>) = FieldAddress[y] : r5_1
# 25| r5_3(int) = Load : r5_2, mu0_2
# 25| r5_3(int) = Load : &:r5_2, ~mu0_2
# 25| r5_4(int) = Constant[1] :
# 25| r5_5(int) = Add : r5_3, r5_4
# 25| mu5_6(int) = Store : r5_2, r5_5
# 25| mu5_6(int) = Store : &:r5_2, r5_5
#-----| Goto -> Block 6
# 28| Block 6
# 28| r6_0(glval<int>) = VariableAddress[#return] :
# 28| r6_1(glval<Point *>) = VariableAddress[p] :
# 28| r6_2(Point *) = Load : r6_1, m0_4
# 28| r6_2(Point *) = Load : &:r6_1, m0_4
# 28| r6_3(glval<int>) = FieldAddress[x] : r6_2
# 28| r6_4(int) = Load : r6_3, mu0_2
# 28| r6_4(int) = Load : &:r6_3, ~mu0_2
# 28| r6_5(glval<Point *>) = VariableAddress[p] :
# 28| r6_6(Point *) = Load : r6_5, m0_4
# 28| r6_6(Point *) = Load : &:r6_5, m0_4
# 28| r6_7(glval<int>) = FieldAddress[y] : r6_6
# 28| r6_8(int) = Load : r6_7, mu0_2
# 28| r6_8(int) = Load : &:r6_7, ~mu0_2
# 28| r6_9(int) = Add : r6_4, r6_8
# 28| m6_10(int) = Store : r6_0, r6_9
# 28| m6_10(int) = Store : &:r6_0, r6_9
# 13| r6_11(glval<int>) = VariableAddress[#return] :
# 13| v6_12(void) = ReturnValue : r6_11, m6_10
# 13| v6_12(void) = ReturnValue : &:r6_11, m6_10
# 13| v6_13(void) = UnmodeledUse : mu*
# 13| v6_14(void) = ExitFunction :
@@ -89,9 +89,9 @@ ssa.cpp:
# 34| v0_4(void) = NoOp :
# 35| r0_5(glval<int>) = VariableAddress[#return] :
# 35| r0_6(int) = Constant[0] :
# 35| m0_7(int) = Store : r0_5, r0_6
# 35| m0_7(int) = Store : &:r0_5, r0_6
# 31| r0_8(glval<int>) = VariableAddress[#return] :
# 31| v0_9(void) = ReturnValue : r0_8, m0_7
# 31| v0_9(void) = ReturnValue : &:r0_8, m0_7
# 31| v0_10(void) = UnmodeledUse : mu*
# 31| v0_11(void) = ExitFunction :
@@ -101,15 +101,15 @@ ssa.cpp:
# 38| mu0_1(unknown) = AliasedDefinition :
# 38| mu0_2(unknown) = UnmodeledDefinition :
# 38| r0_3(glval<bool>) = VariableAddress[b] :
# 38| m0_4(bool) = InitializeParameter[b] : r0_3
# 38| m0_4(bool) = InitializeParameter[b] : &:r0_3
# 39| r0_5(glval<int>) = VariableAddress[x] :
# 39| r0_6(int) = Constant[5] :
# 39| m0_7(int) = Store : r0_5, r0_6
# 39| m0_7(int) = Store : &:r0_5, r0_6
# 40| r0_8(glval<int>) = VariableAddress[y] :
# 40| r0_9(int) = Constant[10] :
# 40| m0_10(int) = Store : r0_8, r0_9
# 40| m0_10(int) = Store : &:r0_8, r0_9
# 41| r0_11(glval<bool>) = VariableAddress[b] :
# 41| r0_12(bool) = Load : r0_11, m0_4
# 41| r0_12(bool) = Load : &:r0_11, m0_4
# 41| v0_13(void) = ConditionalBranch : r0_12
#-----| False -> Block 5
#-----| True -> Block 2
@@ -117,15 +117,15 @@ ssa.cpp:
# 38| Block 1
# 38| m1_0(int) = Phi : from 3:m3_2, from 4:m4_2, from 6:m6_2, from 7:m7_2
# 38| r1_1(glval<int>) = VariableAddress[#return] :
# 38| v1_2(void) = ReturnValue : r1_1, m1_0
# 38| v1_2(void) = ReturnValue : &:r1_1, m1_0
# 38| v1_3(void) = UnmodeledUse : mu*
# 38| v1_4(void) = ExitFunction :
# 42| Block 2
# 42| r2_0(glval<int>) = VariableAddress[x] :
# 42| r2_1(int) = Load : r2_0, m0_7
# 42| r2_1(int) = Load : &:r2_0, m0_7
# 42| r2_2(glval<int>) = VariableAddress[y] :
# 42| r2_3(int) = Load : r2_2, m0_10
# 42| r2_3(int) = Load : &:r2_2, m0_10
# 42| r2_4(bool) = CompareEQ : r2_1, r2_3
# 42| v2_5(void) = ConditionalBranch : r2_4
#-----| False -> Block 4
@@ -134,20 +134,20 @@ ssa.cpp:
# 43| Block 3
# 43| r3_0(glval<int>) = VariableAddress[#return] :
# 43| r3_1(int) = Constant[1] :
# 43| m3_2(int) = Store : r3_0, r3_1
# 43| m3_2(int) = Store : &:r3_0, r3_1
#-----| Goto -> Block 1
# 46| Block 4
# 46| r4_0(glval<int>) = VariableAddress[#return] :
# 46| r4_1(int) = Constant[0] :
# 46| m4_2(int) = Store : r4_0, r4_1
# 46| m4_2(int) = Store : &:r4_0, r4_1
#-----| Goto -> Block 1
# 50| Block 5
# 50| r5_0(glval<int>) = VariableAddress[x] :
# 50| r5_1(int) = Load : r5_0, m0_7
# 50| r5_1(int) = Load : &:r5_0, m0_7
# 50| r5_2(glval<int>) = VariableAddress[y] :
# 50| r5_3(int) = Load : r5_2, m0_10
# 50| r5_3(int) = Load : &:r5_2, m0_10
# 50| r5_4(bool) = CompareLT : r5_1, r5_3
# 50| v5_5(void) = ConditionalBranch : r5_4
#-----| False -> Block 7
@@ -156,13 +156,13 @@ ssa.cpp:
# 51| Block 6
# 51| r6_0(glval<int>) = VariableAddress[#return] :
# 51| r6_1(int) = Constant[0] :
# 51| m6_2(int) = Store : r6_0, r6_1
# 51| m6_2(int) = Store : &:r6_0, r6_1
#-----| Goto -> Block 1
# 54| Block 7
# 54| r7_0(glval<int>) = VariableAddress[#return] :
# 54| r7_1(int) = Constant[1] :
# 54| m7_2(int) = Store : r7_0, r7_1
# 54| m7_2(int) = Store : &:r7_0, r7_1
#-----| Goto -> Block 1
# 59| int DoWhileFalse()
@@ -172,12 +172,12 @@ ssa.cpp:
# 59| mu0_2(unknown) = UnmodeledDefinition :
# 60| r0_3(glval<int>) = VariableAddress[i] :
# 60| r0_4(int) = Constant[0] :
# 60| m0_5(int) = Store : r0_3, r0_4
# 60| m0_5(int) = Store : &:r0_3, r0_4
# 62| r0_6(glval<int>) = VariableAddress[i] :
# 62| r0_7(int) = Load : r0_6, m0_5
# 62| r0_7(int) = Load : &:r0_6, m0_5
# 62| r0_8(int) = Constant[1] :
# 62| r0_9(int) = Add : r0_7, r0_8
# 62| m0_10(int) = Store : r0_6, r0_9
# 62| m0_10(int) = Store : &:r0_6, r0_9
# 63| r0_11(bool) = Constant[0] :
# 63| v0_12(void) = ConditionalBranch : r0_11
#-----| False -> Block 1
@@ -186,10 +186,10 @@ ssa.cpp:
# 65| Block 1
# 65| r1_0(glval<int>) = VariableAddress[#return] :
# 65| r1_1(glval<int>) = VariableAddress[i] :
# 65| r1_2(int) = Load : r1_1, m0_10
# 65| m1_3(int) = Store : r1_0, r1_2
# 65| r1_2(int) = Load : &:r1_1, m0_10
# 65| m1_3(int) = Store : &:r1_0, r1_2
# 59| r1_4(glval<int>) = VariableAddress[#return] :
# 59| v1_5(void) = ReturnValue : r1_4, m1_3
# 59| v1_5(void) = ReturnValue : &:r1_4, m1_3
# 59| v1_6(void) = UnmodeledUse : mu*
# 59| v1_7(void) = ExitFunction :
@@ -202,19 +202,19 @@ ssa.cpp:
# 68| mu0_1(unknown) = AliasedDefinition :
# 68| mu0_2(unknown) = UnmodeledDefinition :
# 68| r0_3(glval<int>) = VariableAddress[n] :
# 68| m0_4(int) = InitializeParameter[n] : r0_3
# 68| m0_4(int) = InitializeParameter[n] : &:r0_3
# 68| r0_5(glval<char *>) = VariableAddress[p] :
# 68| m0_6(char *) = InitializeParameter[p] : r0_5
# 68| m0_6(char *) = InitializeParameter[p] : &:r0_5
#-----| Goto -> Block 3
# 70| Block 1
# 70| r1_0(char) = Constant[0] :
# 70| r1_1(glval<char *>) = VariableAddress[p] :
# 70| r1_2(char *) = Load : r1_1, m3_1
# 70| r1_2(char *) = Load : &:r1_1, m3_1
# 70| r1_3(int) = Constant[1] :
# 70| r1_4(char *) = PointerAdd[1] : r1_2, r1_3
# 70| m1_5(char *) = Store : r1_1, r1_4
# 70| mu1_6(char) = Store : r1_2, r1_0
# 70| m1_5(char *) = Store : &:r1_1, r1_4
# 70| mu1_6(char) = Store : &:r1_2, r1_0
#-----| Goto (back edge) -> Block 3
# 71| Block 2
@@ -227,159 +227,459 @@ ssa.cpp:
# 69| m3_0(int) = Phi : from 0:m0_4, from 1:m3_6
# 69| m3_1(char *) = Phi : from 0:m0_6, from 1:m1_5
# 69| r3_2(glval<int>) = VariableAddress[n] :
# 69| r3_3(int) = Load : r3_2, m3_0
# 69| r3_3(int) = Load : &:r3_2, m3_0
# 69| r3_4(int) = Constant[1] :
# 69| r3_5(int) = Sub : r3_3, r3_4
# 69| m3_6(int) = Store : r3_2, r3_5
# 69| m3_6(int) = Store : &:r3_2, r3_5
# 69| r3_7(int) = Constant[0] :
# 69| r3_8(bool) = CompareGT : r3_3, r3_7
# 69| v3_9(void) = ConditionalBranch : r3_8
#-----| False -> Block 2
#-----| True -> Block 1
# 75| void MustExactlyOverlap(Point)
# 75| void ScalarPhi(bool)
# 75| Block 0
# 75| v0_0(void) = EnterFunction :
# 75| mu0_1(unknown) = AliasedDefinition :
# 75| mu0_2(unknown) = UnmodeledDefinition :
# 75| r0_3(glval<Point>) = VariableAddress[a] :
# 75| m0_4(Point) = InitializeParameter[a] : r0_3
# 76| r0_5(glval<Point>) = VariableAddress[b] :
# 76| r0_6(glval<Point>) = VariableAddress[a] :
# 76| r0_7(Point) = Load : r0_6, m0_4
# 76| m0_8(Point) = Store : r0_5, r0_7
# 77| v0_9(void) = NoOp :
# 75| v0_10(void) = ReturnVoid :
# 75| v0_11(void) = UnmodeledUse : mu*
# 75| v0_12(void) = ExitFunction :
# 75| r0_3(glval<bool>) = VariableAddress[b] :
# 75| m0_4(bool) = InitializeParameter[b] : &:r0_3
# 76| r0_5(glval<int>) = VariableAddress[x] :
# 76| r0_6(int) = Constant[0] :
# 76| m0_7(int) = Store : &:r0_5, r0_6
# 77| r0_8(glval<int>) = VariableAddress[y] :
# 77| r0_9(int) = Constant[1] :
# 77| m0_10(int) = Store : &:r0_8, r0_9
# 78| r0_11(glval<int>) = VariableAddress[z] :
# 78| r0_12(int) = Constant[2] :
# 78| m0_13(int) = Store : &:r0_11, r0_12
# 79| r0_14(glval<bool>) = VariableAddress[b] :
# 79| r0_15(bool) = Load : &:r0_14, m0_4
# 79| v0_16(void) = ConditionalBranch : r0_15
#-----| False -> Block 2
#-----| True -> Block 1
# 79| void MustExactlyOverlapEscaped(Point)
# 79| Block 0
# 79| v0_0(void) = EnterFunction :
# 79| mu0_1(unknown) = AliasedDefinition :
# 79| mu0_2(unknown) = UnmodeledDefinition :
# 79| r0_3(glval<Point>) = VariableAddress[a] :
# 79| mu0_4(Point) = InitializeParameter[a] : r0_3
# 80| r0_5(glval<Point>) = VariableAddress[b] :
# 80| r0_6(glval<Point>) = VariableAddress[a] :
# 80| r0_7(Point) = Load : r0_6, mu0_2
# 80| m0_8(Point) = Store : r0_5, r0_7
# 81| r0_9(glval<unknown>) = FunctionAddress[Escape] :
# 81| r0_10(glval<Point>) = VariableAddress[a] :
# 81| r0_11(void *) = Convert : r0_10
# 81| v0_12(void) = Call : r0_9, r0_11
# 81| mu0_13(unknown) = ^CallSideEffect : mu0_2
# 82| v0_14(void) = NoOp :
# 79| v0_15(void) = ReturnVoid :
# 79| v0_16(void) = UnmodeledUse : mu*
# 79| v0_17(void) = ExitFunction :
# 80| Block 1
# 80| r1_0(int) = Constant[3] :
# 80| r1_1(glval<int>) = VariableAddress[x] :
# 80| m1_2(int) = Store : &:r1_1, r1_0
# 81| r1_3(int) = Constant[4] :
# 81| r1_4(glval<int>) = VariableAddress[y] :
# 81| m1_5(int) = Store : &:r1_4, r1_3
#-----| Goto -> Block 3
# 84| void MustTotallyOverlap(Point)
# 84| Block 0
# 84| v0_0(void) = EnterFunction :
# 84| mu0_1(unknown) = AliasedDefinition :
# 84| mu0_2(unknown) = UnmodeledDefinition :
# 84| r0_3(glval<Point>) = VariableAddress[a] :
# 84| mu0_4(Point) = InitializeParameter[a] : r0_3
# 85| r0_5(glval<int>) = VariableAddress[x] :
# 85| r0_6(glval<Point>) = VariableAddress[a] :
# 85| r0_7(glval<int>) = FieldAddress[x] : r0_6
# 85| r0_8(int) = Load : r0_7, mu0_2
# 85| m0_9(int) = Store : r0_5, r0_8
# 86| r0_10(glval<int>) = VariableAddress[y] :
# 86| r0_11(glval<Point>) = VariableAddress[a] :
# 86| r0_12(glval<int>) = FieldAddress[y] : r0_11
# 86| r0_13(int) = Load : r0_12, mu0_2
# 86| m0_14(int) = Store : r0_10, r0_13
# 87| v0_15(void) = NoOp :
# 84| v0_16(void) = ReturnVoid :
# 84| v0_17(void) = UnmodeledUse : mu*
# 84| v0_18(void) = ExitFunction :
# 84| Block 2
# 84| r2_0(int) = Constant[5] :
# 84| r2_1(glval<int>) = VariableAddress[x] :
# 84| m2_2(int) = Store : &:r2_1, r2_0
#-----| Goto -> Block 3
# 89| void MustTotallyOverlapEscaped(Point)
# 89| Block 0
# 89| v0_0(void) = EnterFunction :
# 89| mu0_1(unknown) = AliasedDefinition :
# 89| mu0_2(unknown) = UnmodeledDefinition :
# 89| r0_3(glval<Point>) = VariableAddress[a] :
# 89| mu0_4(Point) = InitializeParameter[a] : r0_3
# 90| r0_5(glval<int>) = VariableAddress[x] :
# 90| r0_6(glval<Point>) = VariableAddress[a] :
# 90| r0_7(glval<int>) = FieldAddress[x] : r0_6
# 90| r0_8(int) = Load : r0_7, mu0_2
# 90| m0_9(int) = Store : r0_5, r0_8
# 91| r0_10(glval<int>) = VariableAddress[y] :
# 91| r0_11(glval<Point>) = VariableAddress[a] :
# 91| r0_12(glval<int>) = FieldAddress[y] : r0_11
# 91| r0_13(int) = Load : r0_12, mu0_2
# 91| m0_14(int) = Store : r0_10, r0_13
# 92| r0_15(glval<unknown>) = FunctionAddress[Escape] :
# 92| r0_16(glval<Point>) = VariableAddress[a] :
# 92| r0_17(void *) = Convert : r0_16
# 92| v0_18(void) = Call : r0_15, r0_17
# 92| mu0_19(unknown) = ^CallSideEffect : mu0_2
# 93| v0_20(void) = NoOp :
# 89| v0_21(void) = ReturnVoid :
# 89| v0_22(void) = UnmodeledUse : mu*
# 89| v0_23(void) = ExitFunction :
# 86| Block 3
# 86| m3_0(int) = Phi : from 1:m1_2, from 2:m2_2
# 86| m3_1(int) = Phi : from 1:m1_5, from 2:m0_10
# 86| r3_2(glval<int>) = VariableAddress[x_merge] :
# 86| r3_3(glval<int>) = VariableAddress[x] :
# 86| r3_4(int) = Load : &:r3_3, m3_0
# 86| m3_5(int) = Store : &:r3_2, r3_4
# 87| r3_6(glval<int>) = VariableAddress[y_merge] :
# 87| r3_7(glval<int>) = VariableAddress[y] :
# 87| r3_8(int) = Load : &:r3_7, m3_1
# 87| m3_9(int) = Store : &:r3_6, r3_8
# 88| r3_10(glval<int>) = VariableAddress[z_merge] :
# 88| r3_11(glval<int>) = VariableAddress[z] :
# 88| r3_12(int) = Load : &:r3_11, m0_13
# 88| m3_13(int) = Store : &:r3_10, r3_12
# 89| v3_14(void) = NoOp :
# 75| v3_15(void) = ReturnVoid :
# 75| v3_16(void) = UnmodeledUse : mu*
# 75| v3_17(void) = ExitFunction :
# 95| void MayPartiallyOverlap(int, int)
# 91| void MustExactlyOverlap(Point)
# 91| Block 0
# 91| v0_0(void) = EnterFunction :
# 91| mu0_1(unknown) = AliasedDefinition :
# 91| mu0_2(unknown) = UnmodeledDefinition :
# 91| r0_3(glval<Point>) = VariableAddress[a] :
# 91| m0_4(Point) = InitializeParameter[a] : &:r0_3
# 92| r0_5(glval<Point>) = VariableAddress[b] :
# 92| r0_6(glval<Point>) = VariableAddress[a] :
# 92| r0_7(Point) = Load : &:r0_6, m0_4
# 92| m0_8(Point) = Store : &:r0_5, r0_7
# 93| v0_9(void) = NoOp :
# 91| v0_10(void) = ReturnVoid :
# 91| v0_11(void) = UnmodeledUse : mu*
# 91| v0_12(void) = ExitFunction :
# 95| void MustExactlyOverlapEscaped(Point)
# 95| Block 0
# 95| v0_0(void) = EnterFunction :
# 95| mu0_1(unknown) = AliasedDefinition :
# 95| mu0_2(unknown) = UnmodeledDefinition :
# 95| r0_3(glval<int>) = VariableAddress[x] :
# 95| m0_4(int) = InitializeParameter[x] : r0_3
# 95| r0_5(glval<int>) = VariableAddress[y] :
# 95| m0_6(int) = InitializeParameter[y] : r0_5
# 96| r0_7(glval<Point>) = VariableAddress[a] :
# 96| mu0_8(Point) = Uninitialized[a] : r0_7
# 96| r0_9(glval<int>) = FieldAddress[x] : r0_7
# 96| r0_10(glval<int>) = VariableAddress[x] :
# 96| r0_11(int) = Load : r0_10, m0_4
# 96| mu0_12(int) = Store : r0_9, r0_11
# 96| r0_13(glval<int>) = FieldAddress[y] : r0_7
# 96| r0_14(glval<int>) = VariableAddress[y] :
# 96| r0_15(int) = Load : r0_14, m0_6
# 96| mu0_16(int) = Store : r0_13, r0_15
# 97| r0_17(glval<Point>) = VariableAddress[b] :
# 97| r0_18(glval<Point>) = VariableAddress[a] :
# 97| r0_19(Point) = Load : r0_18, mu0_2
# 97| m0_20(Point) = Store : r0_17, r0_19
# 98| v0_21(void) = NoOp :
# 95| v0_22(void) = ReturnVoid :
# 95| v0_23(void) = UnmodeledUse : mu*
# 95| v0_24(void) = ExitFunction :
# 95| v0_0(void) = EnterFunction :
# 95| mu0_1(unknown) = AliasedDefinition :
# 95| mu0_2(unknown) = UnmodeledDefinition :
# 95| r0_3(glval<Point>) = VariableAddress[a] :
# 95| mu0_4(Point) = InitializeParameter[a] : &:r0_3
# 96| r0_5(glval<Point>) = VariableAddress[b] :
# 96| r0_6(glval<Point>) = VariableAddress[a] :
# 96| r0_7(Point) = Load : &:r0_6, ~mu0_2
# 96| m0_8(Point) = Store : &:r0_5, r0_7
# 97| r0_9(glval<unknown>) = FunctionAddress[Escape] :
# 97| r0_10(glval<Point>) = VariableAddress[a] :
# 97| r0_11(void *) = Convert : r0_10
# 97| v0_12(void) = Call : func:r0_9, 0:r0_11
# 97| mu0_13(unknown) = ^CallSideEffect : ~mu0_2
# 98| v0_14(void) = NoOp :
# 95| v0_15(void) = ReturnVoid :
# 95| v0_16(void) = UnmodeledUse : mu*
# 95| v0_17(void) = ExitFunction :
# 100| void MayPartiallyOverlapEscaped(int, int)
# 100| void MustTotallyOverlap(Point)
# 100| Block 0
# 100| v0_0(void) = EnterFunction :
# 100| mu0_1(unknown) = AliasedDefinition :
# 100| mu0_2(unknown) = UnmodeledDefinition :
# 100| r0_3(glval<int>) = VariableAddress[x] :
# 100| m0_4(int) = InitializeParameter[x] : r0_3
# 100| r0_5(glval<int>) = VariableAddress[y] :
# 100| m0_6(int) = InitializeParameter[y] : r0_5
# 101| r0_7(glval<Point>) = VariableAddress[a] :
# 101| mu0_8(Point) = Uninitialized[a] : r0_7
# 101| r0_9(glval<int>) = FieldAddress[x] : r0_7
# 101| r0_10(glval<int>) = VariableAddress[x] :
# 101| r0_11(int) = Load : r0_10, m0_4
# 101| mu0_12(int) = Store : r0_9, r0_11
# 101| r0_13(glval<int>) = FieldAddress[y] : r0_7
# 101| r0_14(glval<int>) = VariableAddress[y] :
# 101| r0_15(int) = Load : r0_14, m0_6
# 101| mu0_16(int) = Store : r0_13, r0_15
# 102| r0_17(glval<Point>) = VariableAddress[b] :
# 102| r0_18(glval<Point>) = VariableAddress[a] :
# 102| r0_19(Point) = Load : r0_18, mu0_2
# 102| m0_20(Point) = Store : r0_17, r0_19
# 103| r0_21(glval<unknown>) = FunctionAddress[Escape] :
# 103| r0_22(glval<Point>) = VariableAddress[a] :
# 103| r0_23(void *) = Convert : r0_22
# 103| v0_24(void) = Call : r0_21, r0_23
# 103| mu0_25(unknown) = ^CallSideEffect : mu0_2
# 104| v0_26(void) = NoOp :
# 100| v0_27(void) = ReturnVoid :
# 100| v0_28(void) = UnmodeledUse : mu*
# 100| v0_29(void) = ExitFunction :
# 100| v0_0(void) = EnterFunction :
# 100| mu0_1(unknown) = AliasedDefinition :
# 100| mu0_2(unknown) = UnmodeledDefinition :
# 100| r0_3(glval<Point>) = VariableAddress[a] :
# 100| mu0_4(Point) = InitializeParameter[a] : &:r0_3
# 101| r0_5(glval<int>) = VariableAddress[x] :
# 101| r0_6(glval<Point>) = VariableAddress[a] :
# 101| r0_7(glval<int>) = FieldAddress[x] : r0_6
# 101| r0_8(int) = Load : &:r0_7, ~mu0_2
# 101| m0_9(int) = Store : &:r0_5, r0_8
# 102| r0_10(glval<int>) = VariableAddress[y] :
# 102| r0_11(glval<Point>) = VariableAddress[a] :
# 102| r0_12(glval<int>) = FieldAddress[y] : r0_11
# 102| r0_13(int) = Load : &:r0_12, ~mu0_2
# 102| m0_14(int) = Store : &:r0_10, r0_13
# 103| v0_15(void) = NoOp :
# 100| v0_16(void) = ReturnVoid :
# 100| v0_17(void) = UnmodeledUse : mu*
# 100| v0_18(void) = ExitFunction :
# 105| void MustTotallyOverlapEscaped(Point)
# 105| Block 0
# 105| v0_0(void) = EnterFunction :
# 105| mu0_1(unknown) = AliasedDefinition :
# 105| mu0_2(unknown) = UnmodeledDefinition :
# 105| r0_3(glval<Point>) = VariableAddress[a] :
# 105| mu0_4(Point) = InitializeParameter[a] : &:r0_3
# 106| r0_5(glval<int>) = VariableAddress[x] :
# 106| r0_6(glval<Point>) = VariableAddress[a] :
# 106| r0_7(glval<int>) = FieldAddress[x] : r0_6
# 106| r0_8(int) = Load : &:r0_7, ~mu0_2
# 106| m0_9(int) = Store : &:r0_5, r0_8
# 107| r0_10(glval<int>) = VariableAddress[y] :
# 107| r0_11(glval<Point>) = VariableAddress[a] :
# 107| r0_12(glval<int>) = FieldAddress[y] : r0_11
# 107| r0_13(int) = Load : &:r0_12, ~mu0_2
# 107| m0_14(int) = Store : &:r0_10, r0_13
# 108| r0_15(glval<unknown>) = FunctionAddress[Escape] :
# 108| r0_16(glval<Point>) = VariableAddress[a] :
# 108| r0_17(void *) = Convert : r0_16
# 108| v0_18(void) = Call : func:r0_15, 0:r0_17
# 108| mu0_19(unknown) = ^CallSideEffect : ~mu0_2
# 109| v0_20(void) = NoOp :
# 105| v0_21(void) = ReturnVoid :
# 105| v0_22(void) = UnmodeledUse : mu*
# 105| v0_23(void) = ExitFunction :
# 111| void MayPartiallyOverlap(int, int)
# 111| Block 0
# 111| v0_0(void) = EnterFunction :
# 111| mu0_1(unknown) = AliasedDefinition :
# 111| mu0_2(unknown) = UnmodeledDefinition :
# 111| r0_3(glval<int>) = VariableAddress[x] :
# 111| m0_4(int) = InitializeParameter[x] : &:r0_3
# 111| r0_5(glval<int>) = VariableAddress[y] :
# 111| m0_6(int) = InitializeParameter[y] : &:r0_5
# 112| r0_7(glval<Point>) = VariableAddress[a] :
# 112| mu0_8(Point) = Uninitialized[a] : &:r0_7
# 112| r0_9(glval<int>) = FieldAddress[x] : r0_7
# 112| r0_10(glval<int>) = VariableAddress[x] :
# 112| r0_11(int) = Load : &:r0_10, m0_4
# 112| mu0_12(int) = Store : &:r0_9, r0_11
# 112| r0_13(glval<int>) = FieldAddress[y] : r0_7
# 112| r0_14(glval<int>) = VariableAddress[y] :
# 112| r0_15(int) = Load : &:r0_14, m0_6
# 112| mu0_16(int) = Store : &:r0_13, r0_15
# 113| r0_17(glval<Point>) = VariableAddress[b] :
# 113| r0_18(glval<Point>) = VariableAddress[a] :
# 113| r0_19(Point) = Load : &:r0_18, ~mu0_2
# 113| m0_20(Point) = Store : &:r0_17, r0_19
# 114| v0_21(void) = NoOp :
# 111| v0_22(void) = ReturnVoid :
# 111| v0_23(void) = UnmodeledUse : mu*
# 111| v0_24(void) = ExitFunction :
# 116| void MayPartiallyOverlapEscaped(int, int)
# 116| Block 0
# 116| v0_0(void) = EnterFunction :
# 116| mu0_1(unknown) = AliasedDefinition :
# 116| mu0_2(unknown) = UnmodeledDefinition :
# 116| r0_3(glval<int>) = VariableAddress[x] :
# 116| m0_4(int) = InitializeParameter[x] : &:r0_3
# 116| r0_5(glval<int>) = VariableAddress[y] :
# 116| m0_6(int) = InitializeParameter[y] : &:r0_5
# 117| r0_7(glval<Point>) = VariableAddress[a] :
# 117| mu0_8(Point) = Uninitialized[a] : &:r0_7
# 117| r0_9(glval<int>) = FieldAddress[x] : r0_7
# 117| r0_10(glval<int>) = VariableAddress[x] :
# 117| r0_11(int) = Load : &:r0_10, m0_4
# 117| mu0_12(int) = Store : &:r0_9, r0_11
# 117| r0_13(glval<int>) = FieldAddress[y] : r0_7
# 117| r0_14(glval<int>) = VariableAddress[y] :
# 117| r0_15(int) = Load : &:r0_14, m0_6
# 117| mu0_16(int) = Store : &:r0_13, r0_15
# 118| r0_17(glval<Point>) = VariableAddress[b] :
# 118| r0_18(glval<Point>) = VariableAddress[a] :
# 118| r0_19(Point) = Load : &:r0_18, ~mu0_2
# 118| m0_20(Point) = Store : &:r0_17, r0_19
# 119| r0_21(glval<unknown>) = FunctionAddress[Escape] :
# 119| r0_22(glval<Point>) = VariableAddress[a] :
# 119| r0_23(void *) = Convert : r0_22
# 119| v0_24(void) = Call : func:r0_21, 0:r0_23
# 119| mu0_25(unknown) = ^CallSideEffect : ~mu0_2
# 120| v0_26(void) = NoOp :
# 116| v0_27(void) = ReturnVoid :
# 116| v0_28(void) = UnmodeledUse : mu*
# 116| v0_29(void) = ExitFunction :
# 122| void MergeMustExactlyOverlap(bool, int, int)
# 122| Block 0
# 122| v0_0(void) = EnterFunction :
# 122| mu0_1(unknown) = AliasedDefinition :
# 122| mu0_2(unknown) = UnmodeledDefinition :
# 122| r0_3(glval<bool>) = VariableAddress[c] :
# 122| m0_4(bool) = InitializeParameter[c] : &:r0_3
# 122| r0_5(glval<int>) = VariableAddress[x1] :
# 122| m0_6(int) = InitializeParameter[x1] : &:r0_5
# 122| r0_7(glval<int>) = VariableAddress[x2] :
# 122| m0_8(int) = InitializeParameter[x2] : &:r0_7
# 123| r0_9(glval<Point>) = VariableAddress[a] :
# 123| mu0_10(Point) = Uninitialized[a] : &:r0_9
# 123| r0_11(glval<int>) = FieldAddress[x] : r0_9
# 123| r0_12(int) = Constant[0] :
# 123| mu0_13(int) = Store : &:r0_11, r0_12
# 123| r0_14(glval<int>) = FieldAddress[y] : r0_9
# 123| r0_15(int) = Constant[0] :
# 123| mu0_16(int) = Store : &:r0_14, r0_15
# 124| r0_17(glval<bool>) = VariableAddress[c] :
# 124| r0_18(bool) = Load : &:r0_17, m0_4
# 124| v0_19(void) = ConditionalBranch : r0_18
#-----| False -> Block 2
#-----| True -> Block 1
# 125| Block 1
# 125| r1_0(glval<int>) = VariableAddress[x1] :
# 125| r1_1(int) = Load : &:r1_0, m0_6
# 125| r1_2(glval<Point>) = VariableAddress[a] :
# 125| r1_3(glval<int>) = FieldAddress[x] : r1_2
# 125| mu1_4(int) = Store : &:r1_3, r1_1
#-----| Goto -> Block 3
# 128| Block 2
# 128| r2_0(glval<int>) = VariableAddress[x2] :
# 128| r2_1(int) = Load : &:r2_0, m0_8
# 128| r2_2(glval<Point>) = VariableAddress[a] :
# 128| r2_3(glval<int>) = FieldAddress[x] : r2_2
# 128| mu2_4(int) = Store : &:r2_3, r2_1
#-----| Goto -> Block 3
# 130| Block 3
# 130| r3_0(glval<int>) = VariableAddress[x] :
# 130| r3_1(glval<Point>) = VariableAddress[a] :
# 130| r3_2(glval<int>) = FieldAddress[x] : r3_1
# 130| r3_3(int) = Load : &:r3_2, ~mu0_2
# 130| m3_4(int) = Store : &:r3_0, r3_3
# 131| r3_5(glval<Point>) = VariableAddress[b] :
# 131| r3_6(glval<Point>) = VariableAddress[a] :
# 131| r3_7(Point) = Load : &:r3_6, ~mu0_2
# 131| m3_8(Point) = Store : &:r3_5, r3_7
# 132| v3_9(void) = NoOp :
# 122| v3_10(void) = ReturnVoid :
# 122| v3_11(void) = UnmodeledUse : mu*
# 122| v3_12(void) = ExitFunction :
# 134| void MergeMustExactlyWithMustTotallyOverlap(bool, Point, int)
# 134| Block 0
# 134| v0_0(void) = EnterFunction :
# 134| mu0_1(unknown) = AliasedDefinition :
# 134| mu0_2(unknown) = UnmodeledDefinition :
# 134| r0_3(glval<bool>) = VariableAddress[c] :
# 134| m0_4(bool) = InitializeParameter[c] : &:r0_3
# 134| r0_5(glval<Point>) = VariableAddress[p] :
# 134| m0_6(Point) = InitializeParameter[p] : &:r0_5
# 134| r0_7(glval<int>) = VariableAddress[x1] :
# 134| m0_8(int) = InitializeParameter[x1] : &:r0_7
# 135| r0_9(glval<Point>) = VariableAddress[a] :
# 135| mu0_10(Point) = Uninitialized[a] : &:r0_9
# 135| r0_11(glval<int>) = FieldAddress[x] : r0_9
# 135| r0_12(int) = Constant[0] :
# 135| mu0_13(int) = Store : &:r0_11, r0_12
# 135| r0_14(glval<int>) = FieldAddress[y] : r0_9
# 135| r0_15(int) = Constant[0] :
# 135| mu0_16(int) = Store : &:r0_14, r0_15
# 136| r0_17(glval<bool>) = VariableAddress[c] :
# 136| r0_18(bool) = Load : &:r0_17, m0_4
# 136| v0_19(void) = ConditionalBranch : r0_18
#-----| False -> Block 2
#-----| True -> Block 1
# 137| Block 1
# 137| r1_0(glval<int>) = VariableAddress[x1] :
# 137| r1_1(int) = Load : &:r1_0, m0_8
# 137| r1_2(glval<Point>) = VariableAddress[a] :
# 137| r1_3(glval<int>) = FieldAddress[x] : r1_2
# 137| mu1_4(int) = Store : &:r1_3, r1_1
#-----| Goto -> Block 3
# 140| Block 2
# 140| r2_0(glval<Point>) = VariableAddress[p] :
# 140| r2_1(Point) = Load : &:r2_0, m0_6
# 140| r2_2(glval<Point>) = VariableAddress[a] :
# 140| mu2_3(Point) = Store : &:r2_2, r2_1
#-----| Goto -> Block 3
# 142| Block 3
# 142| r3_0(glval<int>) = VariableAddress[x] :
# 142| r3_1(glval<Point>) = VariableAddress[a] :
# 142| r3_2(glval<int>) = FieldAddress[x] : r3_1
# 142| r3_3(int) = Load : &:r3_2, ~mu0_2
# 142| m3_4(int) = Store : &:r3_0, r3_3
# 143| v3_5(void) = NoOp :
# 134| v3_6(void) = ReturnVoid :
# 134| v3_7(void) = UnmodeledUse : mu*
# 134| v3_8(void) = ExitFunction :
# 145| void MergeMustExactlyWithMayPartiallyOverlap(bool, Point, int)
# 145| Block 0
# 145| v0_0(void) = EnterFunction :
# 145| mu0_1(unknown) = AliasedDefinition :
# 145| mu0_2(unknown) = UnmodeledDefinition :
# 145| r0_3(glval<bool>) = VariableAddress[c] :
# 145| m0_4(bool) = InitializeParameter[c] : &:r0_3
# 145| r0_5(glval<Point>) = VariableAddress[p] :
# 145| m0_6(Point) = InitializeParameter[p] : &:r0_5
# 145| r0_7(glval<int>) = VariableAddress[x1] :
# 145| m0_8(int) = InitializeParameter[x1] : &:r0_7
# 146| r0_9(glval<Point>) = VariableAddress[a] :
# 146| mu0_10(Point) = Uninitialized[a] : &:r0_9
# 146| r0_11(glval<int>) = FieldAddress[x] : r0_9
# 146| r0_12(int) = Constant[0] :
# 146| mu0_13(int) = Store : &:r0_11, r0_12
# 146| r0_14(glval<int>) = FieldAddress[y] : r0_9
# 146| r0_15(int) = Constant[0] :
# 146| mu0_16(int) = Store : &:r0_14, r0_15
# 147| r0_17(glval<bool>) = VariableAddress[c] :
# 147| r0_18(bool) = Load : &:r0_17, m0_4
# 147| v0_19(void) = ConditionalBranch : r0_18
#-----| False -> Block 2
#-----| True -> Block 1
# 148| Block 1
# 148| r1_0(glval<int>) = VariableAddress[x1] :
# 148| r1_1(int) = Load : &:r1_0, m0_8
# 148| r1_2(glval<Point>) = VariableAddress[a] :
# 148| r1_3(glval<int>) = FieldAddress[x] : r1_2
# 148| mu1_4(int) = Store : &:r1_3, r1_1
#-----| Goto -> Block 3
# 151| Block 2
# 151| r2_0(glval<Point>) = VariableAddress[p] :
# 151| r2_1(Point) = Load : &:r2_0, m0_6
# 151| r2_2(glval<Point>) = VariableAddress[a] :
# 151| mu2_3(Point) = Store : &:r2_2, r2_1
#-----| Goto -> Block 3
# 153| Block 3
# 153| r3_0(glval<Point>) = VariableAddress[b] :
# 153| r3_1(glval<Point>) = VariableAddress[a] :
# 153| r3_2(Point) = Load : &:r3_1, ~mu0_2
# 153| m3_3(Point) = Store : &:r3_0, r3_2
# 154| v3_4(void) = NoOp :
# 145| v3_5(void) = ReturnVoid :
# 145| v3_6(void) = UnmodeledUse : mu*
# 145| v3_7(void) = ExitFunction :
# 156| void MergeMustTotallyOverlapWithMayPartiallyOverlap(bool, Rect, int)
# 156| Block 0
# 156| v0_0(void) = EnterFunction :
# 156| mu0_1(unknown) = AliasedDefinition :
# 156| mu0_2(unknown) = UnmodeledDefinition :
# 156| r0_3(glval<bool>) = VariableAddress[c] :
# 156| m0_4(bool) = InitializeParameter[c] : &:r0_3
# 156| r0_5(glval<Rect>) = VariableAddress[r] :
# 156| m0_6(Rect) = InitializeParameter[r] : &:r0_5
# 156| r0_7(glval<int>) = VariableAddress[x1] :
# 156| m0_8(int) = InitializeParameter[x1] : &:r0_7
# 157| r0_9(glval<Rect>) = VariableAddress[a] :
# 157| mu0_10(Rect) = Uninitialized[a] : &:r0_9
# 157| r0_11(glval<Point>) = FieldAddress[topLeft] : r0_9
# 157| r0_12(Point) = Constant[0] :
# 157| mu0_13(Point) = Store : &:r0_11, r0_12
# 157| r0_14(glval<Point>) = FieldAddress[bottomRight] : r0_9
# 157| r0_15(Point) = Constant[0] :
# 157| mu0_16(Point) = Store : &:r0_14, r0_15
# 158| r0_17(glval<bool>) = VariableAddress[c] :
# 158| r0_18(bool) = Load : &:r0_17, m0_4
# 158| v0_19(void) = ConditionalBranch : r0_18
#-----| False -> Block 2
#-----| True -> Block 1
# 159| Block 1
# 159| r1_0(glval<int>) = VariableAddress[x1] :
# 159| r1_1(int) = Load : &:r1_0, m0_8
# 159| r1_2(glval<Rect>) = VariableAddress[a] :
# 159| r1_3(glval<Point>) = FieldAddress[topLeft] : r1_2
# 159| r1_4(glval<int>) = FieldAddress[x] : r1_3
# 159| mu1_5(int) = Store : &:r1_4, r1_1
#-----| Goto -> Block 3
# 162| Block 2
# 162| r2_0(glval<Rect>) = VariableAddress[r] :
# 162| r2_1(Rect) = Load : &:r2_0, m0_6
# 162| r2_2(glval<Rect>) = VariableAddress[a] :
# 162| mu2_3(Rect) = Store : &:r2_2, r2_1
#-----| Goto -> Block 3
# 164| Block 3
# 164| r3_0(glval<Point>) = VariableAddress[b] :
# 164| r3_1(glval<Rect>) = VariableAddress[a] :
# 164| r3_2(glval<Point>) = FieldAddress[topLeft] : r3_1
# 164| r3_3(Point) = Load : &:r3_2, ~mu0_2
# 164| m3_4(Point) = Store : &:r3_0, r3_3
# 165| v3_5(void) = NoOp :
# 156| v3_6(void) = ReturnVoid :
# 156| v3_7(void) = UnmodeledUse : mu*
# 156| v3_8(void) = ExitFunction :
# 171| void WrapperStruct(Wrapper)
# 171| Block 0
# 171| v0_0(void) = EnterFunction :
# 171| mu0_1(unknown) = AliasedDefinition :
# 171| mu0_2(unknown) = UnmodeledDefinition :
# 171| r0_3(glval<Wrapper>) = VariableAddress[w] :
# 171| mu0_4(Wrapper) = InitializeParameter[w] : &:r0_3
# 172| r0_5(glval<Wrapper>) = VariableAddress[x] :
# 172| r0_6(glval<Wrapper>) = VariableAddress[w] :
# 172| r0_7(Wrapper) = Load : &:r0_6, ~mu0_2
# 172| m0_8(Wrapper) = Store : &:r0_5, r0_7
# 173| r0_9(glval<int>) = VariableAddress[a] :
# 173| r0_10(glval<Wrapper>) = VariableAddress[w] :
# 173| r0_11(glval<int>) = FieldAddress[f] : r0_10
# 173| r0_12(int) = Load : &:r0_11, ~mu0_2
# 173| m0_13(int) = Store : &:r0_9, r0_12
# 174| r0_14(int) = Constant[5] :
# 174| r0_15(glval<Wrapper>) = VariableAddress[w] :
# 174| r0_16(glval<int>) = FieldAddress[f] : r0_15
# 174| mu0_17(int) = Store : &:r0_16, r0_14
# 175| r0_18(glval<Wrapper>) = VariableAddress[w] :
# 175| r0_19(glval<int>) = FieldAddress[f] : r0_18
# 175| r0_20(int) = Load : &:r0_19, ~mu0_2
# 175| r0_21(glval<int>) = VariableAddress[a] :
# 175| m0_22(int) = Store : &:r0_21, r0_20
# 176| r0_23(glval<Wrapper>) = VariableAddress[w] :
# 176| r0_24(Wrapper) = Load : &:r0_23, ~mu0_2
# 176| r0_25(glval<Wrapper>) = VariableAddress[x] :
# 176| m0_26(Wrapper) = Store : &:r0_25, r0_24
# 177| v0_27(void) = NoOp :
# 171| v0_28(void) = ReturnVoid :
# 171| v0_29(void) = UnmodeledUse : mu*
# 171| v0_30(void) = ExitFunction :


@@ -8,64 +8,64 @@ test.cpp:
# 1| valnum = unique
# 1| r0_3(glval<int>) = VariableAddress[p0] :
# 1| valnum = r0_3
# 1| m0_4(int) = InitializeParameter[p0] : r0_3
# 1| m0_4(int) = InitializeParameter[p0] : &:r0_3
# 1| valnum = m0_4
# 1| r0_5(glval<int>) = VariableAddress[p1] :
# 1| valnum = r0_5
# 1| m0_6(int) = InitializeParameter[p1] : r0_5
# 1| m0_6(int) = InitializeParameter[p1] : &:r0_5
# 1| valnum = m0_6
# 2| r0_7(glval<int>) = VariableAddress[x] :
# 2| valnum = r0_7
# 2| m0_8(int) = Uninitialized[x] : r0_7
# 2| m0_8(int) = Uninitialized[x] : &:r0_7
# 2| valnum = unique
# 2| r0_9(glval<int>) = VariableAddress[y] :
# 2| valnum = r0_9
# 2| m0_10(int) = Uninitialized[y] : r0_9
# 2| m0_10(int) = Uninitialized[y] : &:r0_9
# 2| valnum = unique
# 3| r0_11(glval<unsigned char>) = VariableAddress[b] :
# 3| valnum = unique
# 3| m0_12(unsigned char) = Uninitialized[b] : r0_11
# 3| m0_12(unsigned char) = Uninitialized[b] : &:r0_11
# 3| valnum = unique
# 5| r0_13(glval<int>) = VariableAddress[p0] :
# 5| valnum = r0_3
# 5| r0_14(int) = Load : r0_13, m0_4
# 5| r0_14(int) = Load : &:r0_13, m0_4
# 5| valnum = m0_4
# 5| r0_15(glval<int>) = VariableAddress[p1] :
# 5| valnum = r0_5
# 5| r0_16(int) = Load : r0_15, m0_6
# 5| r0_16(int) = Load : &:r0_15, m0_6
# 5| valnum = m0_6
# 5| r0_17(int) = Add : r0_14, r0_16
# 5| valnum = r0_17
# 5| r0_18(glval<int>) = VariableAddress[x] :
# 5| valnum = r0_7
# 5| m0_19(int) = Store : r0_18, r0_17
# 5| m0_19(int) = Store : &:r0_18, r0_17
# 5| valnum = r0_17
# 6| r0_20(glval<int>) = VariableAddress[p0] :
# 6| valnum = r0_3
# 6| r0_21(int) = Load : r0_20, m0_4
# 6| r0_21(int) = Load : &:r0_20, m0_4
# 6| valnum = m0_4
# 6| r0_22(glval<int>) = VariableAddress[p1] :
# 6| valnum = r0_5
# 6| r0_23(int) = Load : r0_22, m0_6
# 6| r0_23(int) = Load : &:r0_22, m0_6
# 6| valnum = m0_6
# 6| r0_24(int) = Add : r0_21, r0_23
# 6| valnum = r0_17
# 6| r0_25(glval<int>) = VariableAddress[x] :
# 6| valnum = r0_7
# 6| m0_26(int) = Store : r0_25, r0_24
# 6| m0_26(int) = Store : &:r0_25, r0_24
# 6| valnum = r0_17
# 7| r0_27(glval<int>) = VariableAddress[x] :
# 7| valnum = r0_7
# 7| r0_28(int) = Load : r0_27, m0_26
# 7| r0_28(int) = Load : &:r0_27, m0_26
# 7| valnum = r0_17
# 7| r0_29(glval<int>) = VariableAddress[y] :
# 7| valnum = r0_9
# 7| m0_30(int) = Store : r0_29, r0_28
# 7| m0_30(int) = Store : &:r0_29, r0_28
# 7| valnum = r0_17
# 8| v0_31(void) = NoOp :
# 1| r0_32(glval<int>) = VariableAddress[#return] :
# 1| valnum = unique
# 1| v0_33(void) = ReturnValue : r0_32
# 1| v0_33(void) = ReturnValue : &:r0_32
# 1| v0_34(void) = UnmodeledUse : mu*
# 1| v0_35(void) = ExitFunction :
@@ -78,76 +78,76 @@ test.cpp:
# 12| valnum = unique
# 12| r0_3(glval<int>) = VariableAddress[p0] :
# 12| valnum = r0_3
# 12| m0_4(int) = InitializeParameter[p0] : r0_3
# 12| m0_4(int) = InitializeParameter[p0] : &:r0_3
# 12| valnum = m0_4
# 12| r0_5(glval<int>) = VariableAddress[p1] :
# 12| valnum = r0_5
# 12| m0_6(int) = InitializeParameter[p1] : r0_5
# 12| m0_6(int) = InitializeParameter[p1] : &:r0_5
# 12| valnum = m0_6
# 13| r0_7(glval<int>) = VariableAddress[x] :
# 13| valnum = r0_7
# 13| m0_8(int) = Uninitialized[x] : r0_7
# 13| m0_8(int) = Uninitialized[x] : &:r0_7
# 13| valnum = unique
# 13| r0_9(glval<int>) = VariableAddress[y] :
# 13| valnum = r0_9
# 13| m0_10(int) = Uninitialized[y] : r0_9
# 13| m0_10(int) = Uninitialized[y] : &:r0_9
# 13| valnum = unique
# 14| r0_11(glval<unsigned char>) = VariableAddress[b] :
# 14| valnum = unique
# 14| m0_12(unsigned char) = Uninitialized[b] : r0_11
# 14| m0_12(unsigned char) = Uninitialized[b] : &:r0_11
# 14| valnum = unique
# 16| r0_13(glval<int>) = VariableAddress[p0] :
# 16| valnum = r0_3
# 16| r0_14(int) = Load : r0_13, m0_4
# 16| r0_14(int) = Load : &:r0_13, m0_4
# 16| valnum = m0_4
# 16| r0_15(glval<int>) = VariableAddress[p1] :
# 16| valnum = r0_5
# 16| r0_16(int) = Load : r0_15, m0_6
# 16| r0_16(int) = Load : &:r0_15, m0_6
# 16| valnum = m0_6
# 16| r0_17(int) = Add : r0_14, r0_16
# 16| valnum = r0_17
# 16| r0_18(glval<int>) = VariableAddress[global01] :
# 16| valnum = r0_18
# 16| r0_19(int) = Load : r0_18, m0_1
# 16| r0_19(int) = Load : &:r0_18, ~m0_1
# 16| valnum = unique
# 16| r0_20(int) = Add : r0_17, r0_19
# 16| valnum = r0_20
# 16| r0_21(glval<int>) = VariableAddress[x] :
# 16| valnum = r0_7
# 16| m0_22(int) = Store : r0_21, r0_20
# 16| m0_22(int) = Store : &:r0_21, r0_20
# 16| valnum = r0_20
# 17| r0_23(glval<int>) = VariableAddress[p0] :
# 17| valnum = r0_3
# 17| r0_24(int) = Load : r0_23, m0_4
# 17| r0_24(int) = Load : &:r0_23, m0_4
# 17| valnum = m0_4
# 17| r0_25(glval<int>) = VariableAddress[p1] :
# 17| valnum = r0_5
# 17| r0_26(int) = Load : r0_25, m0_6
# 17| r0_26(int) = Load : &:r0_25, m0_6
# 17| valnum = m0_6
# 17| r0_27(int) = Add : r0_24, r0_26
# 17| valnum = r0_17
# 17| r0_28(glval<int>) = VariableAddress[global01] :
# 17| valnum = r0_18
# 17| r0_29(int) = Load : r0_28, m0_1
# 17| r0_29(int) = Load : &:r0_28, ~m0_1
# 17| valnum = unique
# 17| r0_30(int) = Add : r0_27, r0_29
# 17| valnum = r0_30
# 17| r0_31(glval<int>) = VariableAddress[x] :
# 17| valnum = r0_7
# 17| m0_32(int) = Store : r0_31, r0_30
# 17| m0_32(int) = Store : &:r0_31, r0_30
# 17| valnum = r0_30
# 18| r0_33(glval<int>) = VariableAddress[x] :
# 18| valnum = r0_7
# 18| r0_34(int) = Load : r0_33, m0_32
# 18| r0_34(int) = Load : &:r0_33, m0_32
# 18| valnum = r0_30
# 18| r0_35(glval<int>) = VariableAddress[y] :
# 18| valnum = r0_9
# 18| m0_36(int) = Store : r0_35, r0_34
# 18| m0_36(int) = Store : &:r0_35, r0_34
# 18| valnum = r0_30
# 19| v0_37(void) = NoOp :
# 12| r0_38(glval<int>) = VariableAddress[#return] :
# 12| valnum = unique
# 12| v0_39(void) = ReturnValue : r0_38
# 12| v0_39(void) = ReturnValue : &:r0_38
# 12| v0_40(void) = UnmodeledUse : mu*
# 12| v0_41(void) = ExitFunction :
@@ -160,83 +160,83 @@ test.cpp:
# 25| valnum = unique
# 25| r0_3(glval<int>) = VariableAddress[p0] :
# 25| valnum = r0_3
# 25| m0_4(int) = InitializeParameter[p0] : r0_3
# 25| m0_4(int) = InitializeParameter[p0] : &:r0_3
# 25| valnum = m0_4
# 25| r0_5(glval<int>) = VariableAddress[p1] :
# 25| valnum = r0_5
# 25| m0_6(int) = InitializeParameter[p1] : r0_5
# 25| m0_6(int) = InitializeParameter[p1] : &:r0_5
# 25| valnum = m0_6
# 26| r0_7(glval<int>) = VariableAddress[x] :
# 26| valnum = r0_7
# 26| m0_8(int) = Uninitialized[x] : r0_7
# 26| m0_8(int) = Uninitialized[x] : &:r0_7
# 26| valnum = unique
# 26| r0_9(glval<int>) = VariableAddress[y] :
# 26| valnum = r0_9
# 26| m0_10(int) = Uninitialized[y] : r0_9
# 26| m0_10(int) = Uninitialized[y] : &:r0_9
# 26| valnum = unique
# 27| r0_11(glval<unsigned char>) = VariableAddress[b] :
# 27| valnum = unique
# 27| m0_12(unsigned char) = Uninitialized[b] : r0_11
# 27| m0_12(unsigned char) = Uninitialized[b] : &:r0_11
# 27| valnum = unique
# 29| r0_13(glval<int>) = VariableAddress[p0] :
# 29| valnum = r0_3
# 29| r0_14(int) = Load : r0_13, m0_4
# 29| r0_14(int) = Load : &:r0_13, m0_4
# 29| valnum = m0_4
# 29| r0_15(glval<int>) = VariableAddress[p1] :
# 29| valnum = r0_5
# 29| r0_16(int) = Load : r0_15, m0_6
# 29| r0_16(int) = Load : &:r0_15, m0_6
# 29| valnum = m0_6
# 29| r0_17(int) = Add : r0_14, r0_16
# 29| valnum = r0_17
# 29| r0_18(glval<int>) = VariableAddress[global02] :
# 29| valnum = r0_18
# 29| r0_19(int) = Load : r0_18, m0_1
# 29| r0_19(int) = Load : &:r0_18, ~m0_1
# 29| valnum = unique
# 29| r0_20(int) = Add : r0_17, r0_19
# 29| valnum = r0_20
# 29| r0_21(glval<int>) = VariableAddress[x] :
# 29| valnum = r0_7
# 29| m0_22(int) = Store : r0_21, r0_20
# 29| m0_22(int) = Store : &:r0_21, r0_20
# 29| valnum = r0_20
# 30| r0_23(glval<unknown>) = FunctionAddress[change_global02] :
# 30| valnum = unique
# 30| v0_24(void) = Call : r0_23
# 30| m0_25(unknown) = ^CallSideEffect : m0_1
# 30| v0_24(void) = Call : func:r0_23
# 30| m0_25(unknown) = ^CallSideEffect : ~m0_1
# 30| valnum = unique
# 30| m0_26(unknown) = Chi : m0_1, m0_25
# 30| m0_26(unknown) = Chi : total:m0_1, partial:m0_25
# 30| valnum = unique
# 31| r0_27(glval<int>) = VariableAddress[p0] :
# 31| valnum = r0_3
# 31| r0_28(int) = Load : r0_27, m0_4
# 31| r0_28(int) = Load : &:r0_27, m0_4
# 31| valnum = m0_4
# 31| r0_29(glval<int>) = VariableAddress[p1] :
# 31| valnum = r0_5
# 31| r0_30(int) = Load : r0_29, m0_6
# 31| r0_30(int) = Load : &:r0_29, m0_6
# 31| valnum = m0_6
# 31| r0_31(int) = Add : r0_28, r0_30
# 31| valnum = r0_17
# 31| r0_32(glval<int>) = VariableAddress[global02] :
# 31| valnum = r0_18
# 31| r0_33(int) = Load : r0_32, m0_26
# 31| r0_33(int) = Load : &:r0_32, ~m0_26
# 31| valnum = unique
# 31| r0_34(int) = Add : r0_31, r0_33
# 31| valnum = r0_34
# 31| r0_35(glval<int>) = VariableAddress[x] :
# 31| valnum = r0_7
# 31| m0_36(int) = Store : r0_35, r0_34
# 31| m0_36(int) = Store : &:r0_35, r0_34
# 31| valnum = r0_34
# 32| r0_37(glval<int>) = VariableAddress[x] :
# 32| valnum = r0_7
# 32| r0_38(int) = Load : r0_37, m0_36
# 32| r0_38(int) = Load : &:r0_37, m0_36
# 32| valnum = r0_34
# 32| r0_39(glval<int>) = VariableAddress[y] :
# 32| valnum = r0_9
# 32| m0_40(int) = Store : r0_39, r0_38
# 32| m0_40(int) = Store : &:r0_39, r0_38
# 32| valnum = r0_34
# 33| v0_41(void) = NoOp :
# 25| r0_42(glval<int>) = VariableAddress[#return] :
# 25| valnum = unique
# 25| v0_43(void) = ReturnValue : r0_42
# 25| v0_43(void) = ReturnValue : &:r0_42
# 25| v0_44(void) = UnmodeledUse : mu*
# 25| v0_45(void) = ExitFunction :
@@ -249,90 +249,90 @@ test.cpp:
# 39| valnum = unique
# 39| r0_3(glval<int>) = VariableAddress[p0] :
# 39| valnum = r0_3
# 39| m0_4(int) = InitializeParameter[p0] : r0_3
# 39| m0_4(int) = InitializeParameter[p0] : &:r0_3
# 39| valnum = m0_4
# 39| r0_5(glval<int>) = VariableAddress[p1] :
# 39| valnum = r0_5
# 39| m0_6(int) = InitializeParameter[p1] : r0_5
# 39| m0_6(int) = InitializeParameter[p1] : &:r0_5
# 39| valnum = m0_6
# 39| r0_7(glval<int *>) = VariableAddress[p2] :
# 39| valnum = r0_7
# 39| m0_8(int *) = InitializeParameter[p2] : r0_7
# 39| m0_8(int *) = InitializeParameter[p2] : &:r0_7
# 39| valnum = m0_8
# 40| r0_9(glval<int>) = VariableAddress[x] :
# 40| valnum = r0_9
# 40| m0_10(int) = Uninitialized[x] : r0_9
# 40| m0_10(int) = Uninitialized[x] : &:r0_9
# 40| valnum = unique
# 40| r0_11(glval<int>) = VariableAddress[y] :
# 40| valnum = r0_11
# 40| m0_12(int) = Uninitialized[y] : r0_11
# 40| m0_12(int) = Uninitialized[y] : &:r0_11
# 40| valnum = unique
# 41| r0_13(glval<unsigned char>) = VariableAddress[b] :
# 41| valnum = unique
# 41| m0_14(unsigned char) = Uninitialized[b] : r0_13
# 41| m0_14(unsigned char) = Uninitialized[b] : &:r0_13
# 41| valnum = unique
# 43| r0_15(glval<int>) = VariableAddress[p0] :
# 43| valnum = r0_3
# 43| r0_16(int) = Load : r0_15, m0_4
# 43| r0_16(int) = Load : &:r0_15, m0_4
# 43| valnum = m0_4
# 43| r0_17(glval<int>) = VariableAddress[p1] :
# 43| valnum = r0_5
# 43| r0_18(int) = Load : r0_17, m0_6
# 43| r0_18(int) = Load : &:r0_17, m0_6
# 43| valnum = m0_6
# 43| r0_19(int) = Add : r0_16, r0_18
# 43| valnum = r0_19
# 43| r0_20(glval<int>) = VariableAddress[global03] :
# 43| valnum = r0_20
# 43| r0_21(int) = Load : r0_20, m0_1
# 43| r0_21(int) = Load : &:r0_20, ~m0_1
# 43| valnum = unique
# 43| r0_22(int) = Add : r0_19, r0_21
# 43| valnum = r0_22
# 43| r0_23(glval<int>) = VariableAddress[x] :
# 43| valnum = r0_9
# 43| m0_24(int) = Store : r0_23, r0_22
# 43| m0_24(int) = Store : &:r0_23, r0_22
# 43| valnum = r0_22
# 44| r0_25(int) = Constant[0] :
# 44| valnum = r0_25
# 44| r0_26(glval<int *>) = VariableAddress[p2] :
# 44| valnum = r0_7
# 44| r0_27(int *) = Load : r0_26, m0_8
# 44| r0_27(int *) = Load : &:r0_26, m0_8
# 44| valnum = m0_8
# 44| m0_28(int) = Store : r0_27, r0_25
# 44| m0_28(int) = Store : &:r0_27, r0_25
# 44| valnum = r0_25
# 44| m0_29(unknown) = Chi : m0_1, m0_28
# 44| m0_29(unknown) = Chi : total:m0_1, partial:m0_28
# 44| valnum = unique
# 45| r0_30(glval<int>) = VariableAddress[p0] :
# 45| valnum = r0_3
# 45| r0_31(int) = Load : r0_30, m0_4
# 45| r0_31(int) = Load : &:r0_30, m0_4
# 45| valnum = m0_4
# 45| r0_32(glval<int>) = VariableAddress[p1] :
# 45| valnum = r0_5
# 45| r0_33(int) = Load : r0_32, m0_6
# 45| r0_33(int) = Load : &:r0_32, m0_6
# 45| valnum = m0_6
# 45| r0_34(int) = Add : r0_31, r0_33
# 45| valnum = r0_19
# 45| r0_35(glval<int>) = VariableAddress[global03] :
# 45| valnum = r0_20
# 45| r0_36(int) = Load : r0_35, m0_29
# 45| r0_36(int) = Load : &:r0_35, ~m0_29
# 45| valnum = unique
# 45| r0_37(int) = Add : r0_34, r0_36
# 45| valnum = r0_37
# 45| r0_38(glval<int>) = VariableAddress[x] :
# 45| valnum = r0_9
# 45| m0_39(int) = Store : r0_38, r0_37
# 45| m0_39(int) = Store : &:r0_38, r0_37
# 45| valnum = r0_37
# 46| r0_40(glval<int>) = VariableAddress[x] :
# 46| valnum = r0_9
# 46| r0_41(int) = Load : r0_40, m0_39
# 46| r0_41(int) = Load : &:r0_40, m0_39
# 46| valnum = r0_37
# 46| r0_42(glval<int>) = VariableAddress[y] :
# 46| valnum = r0_11
# 46| m0_43(int) = Store : r0_42, r0_41
# 46| m0_43(int) = Store : &:r0_42, r0_41
# 46| valnum = r0_37
# 47| v0_44(void) = NoOp :
# 39| r0_45(glval<int>) = VariableAddress[#return] :
# 39| valnum = unique
# 39| v0_46(void) = ReturnValue : r0_45
# 39| v0_46(void) = ReturnValue : &:r0_45
# 39| v0_47(void) = UnmodeledUse : mu*
# 39| v0_48(void) = ExitFunction :
@@ -345,21 +345,21 @@ test.cpp:
# 49| valnum = unique
# 49| r0_3(glval<char *>) = VariableAddress[str] :
# 49| valnum = r0_3
# 49| m0_4(char *) = InitializeParameter[str] : r0_3
# 49| m0_4(char *) = InitializeParameter[str] : &:r0_3
# 49| valnum = m0_4
# 49| r0_5(glval<char *>) = VariableAddress[chars] :
# 49| valnum = r0_5
# 49| m0_6(char *) = InitializeParameter[chars] : r0_5
# 49| m0_6(char *) = InitializeParameter[chars] : &:r0_5
# 49| valnum = m0_6
# 50| r0_7(glval<char *>) = VariableAddress[ptr] :
# 50| valnum = r0_7
# 50| m0_8(char *) = Uninitialized[ptr] : r0_7
# 50| m0_8(char *) = Uninitialized[ptr] : &:r0_7
# 50| valnum = unique
# 51| r0_9(glval<unsigned int>) = VariableAddress[result] :
# 51| valnum = r0_9
# 51| r0_10(unsigned int) = Constant[0] :
# 51| valnum = r0_10
# 51| m0_11(unsigned int) = Store : r0_9, r0_10
# 51| m0_11(unsigned int) = Store : &:r0_9, r0_10
# 51| valnum = r0_10
#-----| Goto -> Block 1
@@ -368,9 +368,9 @@ test.cpp:
# 53| valnum = m1_0
# 53| r1_1(glval<char *>) = VariableAddress[str] :
# 53| valnum = r0_3
# 53| r1_2(char *) = Load : r1_1, m0_4
# 53| r1_2(char *) = Load : &:r1_1, m0_4
# 53| valnum = m0_4
# 53| r1_3(char) = Load : r1_2, m0_1
# 53| r1_3(char) = Load : &:r1_2, ~m0_1
# 53| valnum = unique
# 53| r1_4(int) = Convert : r1_3
# 53| valnum = unique
@@ -385,11 +385,11 @@ test.cpp:
# 55| Block 2
# 55| r2_0(glval<char *>) = VariableAddress[chars] :
# 55| valnum = r0_5
# 55| r2_1(char *) = Load : r2_0, m0_6
# 55| r2_1(char *) = Load : &:r2_0, m0_6
# 55| valnum = m0_6
# 55| r2_2(glval<char *>) = VariableAddress[ptr] :
# 55| valnum = r0_7
# 55| m2_3(char *) = Store : r2_2, r2_1
# 55| m2_3(char *) = Store : &:r2_2, r2_1
# 55| valnum = m0_6
#-----| Goto -> Block 3
@@ -398,17 +398,17 @@ test.cpp:
# 56| valnum = m3_0
# 56| r3_1(glval<char *>) = VariableAddress[ptr] :
# 56| valnum = r0_7
# 56| r3_2(char *) = Load : r3_1, m3_0
# 56| r3_2(char *) = Load : &:r3_1, m3_0
# 56| valnum = m3_0
# 56| r3_3(char) = Load : r3_2, m0_1
# 56| r3_3(char) = Load : &:r3_2, ~m0_1
# 56| valnum = unique
# 56| r3_4(int) = Convert : r3_3
# 56| valnum = unique
# 56| r3_5(glval<char *>) = VariableAddress[str] :
# 56| valnum = r0_3
# 56| r3_6(char *) = Load : r3_5, m0_4
# 56| r3_6(char *) = Load : &:r3_5, m0_4
# 56| valnum = m0_4
# 56| r3_7(char) = Load : r3_6, m0_1
# 56| r3_7(char) = Load : &:r3_6, ~m0_1
# 56| valnum = unique
# 56| r3_8(int) = Convert : r3_7
# 56| valnum = unique
@@ -421,9 +421,9 @@ test.cpp:
# 56| Block 4
# 56| r4_0(glval<char *>) = VariableAddress[ptr] :
# 56| valnum = r0_7
# 56| r4_1(char *) = Load : r4_0, m3_0
# 56| r4_1(char *) = Load : &:r4_0, m3_0
# 56| valnum = m3_0
# 56| r4_2(char) = Load : r4_1, m0_1
# 56| r4_2(char) = Load : &:r4_1, ~m0_1
# 56| valnum = unique
# 56| r4_3(int) = Convert : r4_2
# 56| valnum = unique
@@ -438,22 +438,22 @@ test.cpp:
# 56| Block 5
# 56| r5_0(glval<char *>) = VariableAddress[ptr] :
# 56| valnum = r0_7
# 56| r5_1(char *) = Load : r5_0, m3_0
# 56| r5_1(char *) = Load : &:r5_0, m3_0
# 56| valnum = m3_0
# 56| r5_2(int) = Constant[1] :
# 56| valnum = unique
# 56| r5_3(char *) = PointerAdd[1] : r5_1, r5_2
# 56| valnum = r5_3
# 56| m5_4(char *) = Store : r5_0, r5_3
# 56| m5_4(char *) = Store : &:r5_0, r5_3
# 56| valnum = r5_3
#-----| Goto (back edge) -> Block 3
# 59| Block 6
# 59| r6_0(glval<char *>) = VariableAddress[ptr] :
# 59| valnum = r0_7
# 59| r6_1(char *) = Load : r6_0, m3_0
# 59| r6_1(char *) = Load : &:r6_0, m3_0
# 59| valnum = m3_0
# 59| r6_2(char) = Load : r6_1, m0_1
# 59| r6_2(char) = Load : &:r6_1, ~m0_1
# 59| valnum = unique
# 59| r6_3(int) = Convert : r6_2
# 59| valnum = unique
@@ -472,13 +472,13 @@ test.cpp:
# 62| Block 8
# 62| r8_0(glval<unsigned int>) = VariableAddress[result] :
# 62| valnum = r0_9
# 62| r8_1(unsigned int) = Load : r8_0, m1_0
# 62| r8_1(unsigned int) = Load : &:r8_0, m1_0
# 62| valnum = m1_0
# 62| r8_2(unsigned int) = Constant[1] :
# 62| valnum = unique
# 62| r8_3(unsigned int) = Add : r8_1, r8_2
# 62| valnum = r8_3
# 62| m8_4(unsigned int) = Store : r8_0, r8_3
# 62| m8_4(unsigned int) = Store : &:r8_0, r8_3
# 62| valnum = r8_3
#-----| Goto (back edge) -> Block 1
@@ -488,13 +488,13 @@ test.cpp:
# 65| valnum = r9_1
# 65| r9_2(glval<unsigned int>) = VariableAddress[result] :
# 65| valnum = r0_9
# 65| r9_3(unsigned int) = Load : r9_2, m1_0
# 65| r9_3(unsigned int) = Load : &:r9_2, m1_0
# 65| valnum = m1_0
# 65| m9_4(unsigned int) = Store : r9_1, r9_3
# 65| m9_4(unsigned int) = Store : &:r9_1, r9_3
# 65| valnum = m1_0
# 49| r9_5(glval<unsigned int>) = VariableAddress[#return] :
# 49| valnum = r9_1
# 49| v9_6(void) = ReturnValue : r9_5, m9_4
# 49| v9_6(void) = ReturnValue : &:r9_5, m9_4
# 49| v9_7(void) = UnmodeledUse : mu*
# 49| v9_8(void) = ExitFunction :
@@ -507,45 +507,45 @@ test.cpp:
# 75| valnum = unique
# 75| r0_3(glval<two_values *>) = VariableAddress[vals] :
# 75| valnum = r0_3
# 75| m0_4(two_values *) = InitializeParameter[vals] : r0_3
# 75| m0_4(two_values *) = InitializeParameter[vals] : &:r0_3
# 75| valnum = m0_4
# 77| r0_5(glval<signed short>) = VariableAddress[v] :
# 77| valnum = r0_5
# 77| r0_6(glval<unknown>) = FunctionAddress[getAValue] :
# 77| valnum = unique
# 77| r0_7(int) = Call : r0_6
# 77| r0_7(int) = Call : func:r0_6
# 77| valnum = unique
# 77| m0_8(unknown) = ^CallSideEffect : m0_1
# 77| m0_8(unknown) = ^CallSideEffect : ~m0_1
# 77| valnum = unique
# 77| m0_9(unknown) = Chi : m0_1, m0_8
# 77| m0_9(unknown) = Chi : total:m0_1, partial:m0_8
# 77| valnum = unique
# 77| r0_10(signed short) = Convert : r0_7
# 77| valnum = r0_10
# 77| m0_11(signed short) = Store : r0_5, r0_10
# 77| m0_11(signed short) = Store : &:r0_5, r0_10
# 77| valnum = r0_10
# 79| r0_12(glval<signed short>) = VariableAddress[v] :
# 79| valnum = r0_5
# 79| r0_13(signed short) = Load : r0_12, m0_11
# 79| r0_13(signed short) = Load : &:r0_12, m0_11
# 79| valnum = r0_10
# 79| r0_14(int) = Convert : r0_13
# 79| valnum = unique
# 79| r0_15(glval<two_values *>) = VariableAddress[vals] :
# 79| valnum = r0_3
# 79| r0_16(two_values *) = Load : r0_15, m0_4
# 79| r0_16(two_values *) = Load : &:r0_15, m0_4
# 79| valnum = m0_4
# 79| r0_17(glval<signed short>) = FieldAddress[val1] : r0_16
# 79| valnum = unique
# 79| r0_18(signed short) = Load : r0_17, m0_9
# 79| r0_18(signed short) = Load : &:r0_17, ~m0_9
# 79| valnum = unique
# 79| r0_19(int) = Convert : r0_18
# 79| valnum = unique
# 79| r0_20(glval<two_values *>) = VariableAddress[vals] :
# 79| valnum = r0_3
# 79| r0_21(two_values *) = Load : r0_20, m0_4
# 79| r0_21(two_values *) = Load : &:r0_20, m0_4
# 79| valnum = m0_4
# 79| r0_22(glval<signed short>) = FieldAddress[val2] : r0_21
# 79| valnum = unique
# 79| r0_23(signed short) = Load : r0_22, m0_9
# 79| r0_23(signed short) = Load : &:r0_22, ~m0_9
# 79| valnum = unique
# 79| r0_24(int) = Convert : r0_23
# 79| valnum = unique
@@ -560,17 +560,17 @@ test.cpp:
# 80| Block 1
# 80| r1_0(glval<unknown>) = FunctionAddress[getAValue] :
# 80| valnum = unique
# 80| r1_1(int) = Call : r1_0
# 80| r1_1(int) = Call : func:r1_0
# 80| valnum = unique
# 80| m1_2(unknown) = ^CallSideEffect : m0_9
# 80| m1_2(unknown) = ^CallSideEffect : ~m0_9
# 80| valnum = unique
# 80| m1_3(unknown) = Chi : m0_9, m1_2
# 80| m1_3(unknown) = Chi : total:m0_9, partial:m1_2
# 80| valnum = unique
# 80| r1_4(signed short) = Convert : r1_1
# 80| valnum = r1_4
# 80| r1_5(glval<signed short>) = VariableAddress[v] :
# 80| valnum = r0_5
# 80| m1_6(signed short) = Store : r1_5, r1_4
# 80| m1_6(signed short) = Store : &:r1_5, r1_4
# 80| valnum = r1_4
#-----| Goto -> Block 2
@@ -589,23 +589,23 @@ test.cpp:
# 84| valnum = unique
# 84| r0_3(glval<int>) = VariableAddress[x] :
# 84| valnum = r0_3
# 84| m0_4(int) = InitializeParameter[x] : r0_3
# 84| m0_4(int) = InitializeParameter[x] : &:r0_3
# 84| valnum = m0_4
# 84| r0_5(glval<int>) = VariableAddress[y] :
# 84| valnum = r0_5
# 84| m0_6(int) = InitializeParameter[y] : r0_5
# 84| m0_6(int) = InitializeParameter[y] : &:r0_5
# 84| valnum = m0_6
# 84| r0_7(glval<void *>) = VariableAddress[p] :
# 84| valnum = r0_7
# 84| m0_8(void *) = InitializeParameter[p] : r0_7
# 84| m0_8(void *) = InitializeParameter[p] : &:r0_7
# 84| valnum = m0_8
# 86| r0_9(glval<int>) = VariableAddress[v] :
# 86| valnum = r0_9
# 86| m0_10(int) = Uninitialized[v] : r0_9
# 86| m0_10(int) = Uninitialized[v] : &:r0_9
# 86| valnum = unique
# 88| r0_11(glval<void *>) = VariableAddress[p] :
# 88| valnum = r0_7
# 88| r0_12(void *) = Load : r0_11, m0_8
# 88| r0_12(void *) = Load : &:r0_11, m0_8
# 88| valnum = m0_8
# 88| r0_13(void *) = Constant[0] :
# 88| valnum = unique
@@ -618,22 +618,22 @@ test.cpp:
# 88| Block 1
# 88| r1_0(glval<int>) = VariableAddress[x] :
# 88| valnum = r0_3
# 88| r1_1(int) = Load : r1_0, m0_4
# 88| r1_1(int) = Load : &:r1_0, m0_4
# 88| valnum = m0_4
# 88| r1_2(glval<int>) = VariableAddress[#temp88:7] :
# 88| valnum = r1_2
# 88| m1_3(int) = Store : r1_2, r1_1
# 88| m1_3(int) = Store : &:r1_2, r1_1
# 88| valnum = m0_4
#-----| Goto -> Block 3
# 88| Block 2
# 88| r2_0(glval<int>) = VariableAddress[y] :
# 88| valnum = r0_5
# 88| r2_1(int) = Load : r2_0, m0_6
# 88| r2_1(int) = Load : &:r2_0, m0_6
# 88| valnum = m0_6
# 88| r2_2(glval<int>) = VariableAddress[#temp88:7] :
# 88| valnum = r1_2
# 88| m2_3(int) = Store : r2_2, r2_1
# 88| m2_3(int) = Store : &:r2_2, r2_1
# 88| valnum = m0_6
#-----| Goto -> Block 3
@@ -642,11 +642,11 @@ test.cpp:
# 88| valnum = m3_0
# 88| r3_1(glval<int>) = VariableAddress[#temp88:7] :
# 88| valnum = r1_2
# 88| r3_2(int) = Load : r3_1, m3_0
# 88| r3_2(int) = Load : &:r3_1, m3_0
# 88| valnum = m3_0
# 88| r3_3(glval<int>) = VariableAddress[v] :
# 88| valnum = r0_9
# 88| m3_4(int) = Store : r3_3, r3_2
# 88| m3_4(int) = Store : &:r3_3, r3_2
# 88| valnum = m3_0
# 89| v3_5(void) = NoOp :
# 84| v3_6(void) = ReturnVoid :
@@ -666,21 +666,21 @@ test.cpp:
# 92| valnum = r0_4
# 92| r0_5(glval<int>) = VariableAddress[x] :
# 92| valnum = r0_3
# 92| m0_6(int) = Store : r0_5, r0_4
# 92| m0_6(int) = Store : &:r0_5, r0_4
# 92| valnum = r0_4
# 92| m0_7(int) = Store : r0_3, r0_4
# 92| m0_7(int) = Store : &:r0_3, r0_4
# 92| valnum = r0_4
# 93| r0_8(glval<int>) = VariableAddress[#return] :
# 93| valnum = r0_8
# 93| r0_9(glval<int>) = VariableAddress[x] :
# 93| valnum = r0_3
# 93| r0_10(int) = Load : r0_9, m0_7
# 93| r0_10(int) = Load : &:r0_9, m0_7
# 93| valnum = r0_4
# 93| m0_11(int) = Store : r0_8, r0_10
# 93| m0_11(int) = Store : &:r0_8, r0_10
# 93| valnum = r0_4
# 91| r0_12(glval<int>) = VariableAddress[#return] :
# 91| valnum = r0_8
# 91| v0_13(void) = ReturnValue : r0_12, m0_11
# 91| v0_13(void) = ReturnValue : &:r0_12, m0_11
# 91| v0_14(void) = UnmodeledUse : mu*
# 91| v0_15(void) = ExitFunction :
@@ -693,55 +693,55 @@ test.cpp:
# 104| valnum = unique
# 104| r0_3(glval<Derived *>) = VariableAddress[pd] :
# 104| valnum = r0_3
# 104| m0_4(Derived *) = InitializeParameter[pd] : r0_3
# 104| m0_4(Derived *) = InitializeParameter[pd] : &:r0_3
# 104| valnum = m0_4
# 105| r0_5(glval<int>) = VariableAddress[x] :
# 105| valnum = unique
# 105| r0_6(glval<Derived *>) = VariableAddress[pd] :
# 105| valnum = r0_3
# 105| r0_7(Derived *) = Load : r0_6, m0_4
# 105| r0_7(Derived *) = Load : &:r0_6, m0_4
# 105| valnum = m0_4
# 105| r0_8(Base *) = ConvertToBase[Derived : Base] : r0_7
# 105| valnum = r0_8
# 105| r0_9(glval<int>) = FieldAddress[b] : r0_8
# 105| valnum = r0_9
# 105| r0_10(int) = Load : r0_9, m0_1
# 105| r0_10(int) = Load : &:r0_9, ~m0_1
# 105| valnum = r0_10
# 105| m0_11(int) = Store : r0_5, r0_10
# 105| m0_11(int) = Store : &:r0_5, r0_10
# 105| valnum = r0_10
# 106| r0_12(glval<Base *>) = VariableAddress[pb] :
# 106| valnum = r0_12
# 106| r0_13(glval<Derived *>) = VariableAddress[pd] :
# 106| valnum = r0_3
# 106| r0_14(Derived *) = Load : r0_13, m0_4
# 106| r0_14(Derived *) = Load : &:r0_13, m0_4
# 106| valnum = m0_4
# 106| r0_15(Base *) = ConvertToBase[Derived : Base] : r0_14
# 106| valnum = r0_8
# 106| m0_16(Base *) = Store : r0_12, r0_15
# 106| m0_16(Base *) = Store : &:r0_12, r0_15
# 106| valnum = r0_8
# 107| r0_17(glval<int>) = VariableAddress[y] :
# 107| valnum = r0_17
# 107| r0_18(glval<Base *>) = VariableAddress[pb] :
# 107| valnum = r0_12
# 107| r0_19(Base *) = Load : r0_18, m0_16
# 107| r0_19(Base *) = Load : &:r0_18, m0_16
# 107| valnum = r0_8
# 107| r0_20(glval<int>) = FieldAddress[b] : r0_19
# 107| valnum = r0_9
# 107| r0_21(int) = Load : r0_20, m0_1
# 107| r0_21(int) = Load : &:r0_20, ~m0_1
# 107| valnum = r0_21
# 107| m0_22(int) = Store : r0_17, r0_21
# 107| m0_22(int) = Store : &:r0_17, r0_21
# 107| valnum = r0_21
# 109| r0_23(glval<int>) = VariableAddress[#return] :
# 109| valnum = r0_23
# 109| r0_24(glval<int>) = VariableAddress[y] :
# 109| valnum = r0_17
# 109| r0_25(int) = Load : r0_24, m0_22
# 109| r0_25(int) = Load : &:r0_24, m0_22
# 109| valnum = r0_21
# 109| m0_26(int) = Store : r0_23, r0_25
# 109| m0_26(int) = Store : &:r0_23, r0_25
# 109| valnum = r0_21
# 104| r0_27(glval<int>) = VariableAddress[#return] :
# 104| valnum = r0_23
# 104| v0_28(void) = ReturnValue : r0_27, m0_26
# 104| v0_28(void) = ReturnValue : &:r0_27, m0_26
# 104| v0_29(void) = UnmodeledUse : mu*
# 104| v0_30(void) = ExitFunction :