path: root/contrib/llvm/lib/Analysis/MemoryBuiltins.cpp
author    dim <dim@FreeBSD.org>  2016-12-26 20:36:37 +0000
committer dim <dim@FreeBSD.org>  2016-12-26 20:36:37 +0000
commit    06210ae42d418d50d8d9365d5c9419308ae9e7ee (patch)
tree      ab60b4cdd6e430dda1f292a46a77ddb744723f31 /contrib/llvm/lib/Analysis/MemoryBuiltins.cpp
parent    2dd166267f53df1c3748b4325d294b9b839de74b (diff)
MFC r309124:

Upgrade our copies of clang, llvm, lldb, compiler-rt and libc++ to 3.9.0 release, and add lld 3.9.0. Also completely revamp the build system for clang, llvm, lldb and their related tools.

Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11 support to build; see UPDATING for more information.

Release notes for llvm, clang and lld are available here:
<http://llvm.org/releases/3.9.0/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/clang/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/lld/docs/ReleaseNotes.html>

Thanks to Ed Maste, Bryan Drewery, Andrew Turner, Antoine Brodin and Jan Beich for their help.

Relnotes: yes

MFC r309147:

Pull in r282174 from upstream llvm trunk (by Krzysztof Parzyszek):

  [PPC] Set SP after loading data from stack frame, if no red zone is present

  Follow-up to r280705: Make sure that the SP is only restored after all data
  is loaded from the stack frame, if there is no red zone.

  This completes the fix for https://llvm.org/bugs/show_bug.cgi?id=26519.

  Differential Revision: https://reviews.llvm.org/D24466

Reported by: Mark Millard
PR: 214433

MFC r309149:

Pull in r283060 from upstream llvm trunk (by Hal Finkel):

  [PowerPC] Refactor soft-float support, and enable PPC64 soft float

  This change enables soft-float for PowerPC64, and also makes soft-float
  disable all vector instruction sets for both 32-bit and 64-bit modes. This
  latter part is necessary because the PPC backend canonicalizes many Altivec
  vector types to floating-point types, and so soft-float breaks scalarization
  support for many operations. Both for embedded targets and for operating-system
  kernels desiring soft-float support, it seems reasonable that disabling
  hardware floating-point also disables vector instructions (embedded targets
  without hardware floating point support are unlikely to have Altivec, etc. and
  operating system kernels desiring not to use floating-point registers to lower
  syscall cost are unlikely to want to use vector registers either). If someone
  needs this to work, we'll need to change the fact that we promote many Altivec
  operations to act on v4f32.

  To make it possible to disable Altivec when soft-float is enabled, hardware
  floating-point support needs to be expressed as a positive feature, like the
  others, and not a negative feature, because target features cannot have
  dependencies on the disabling of some other feature. So +soft-float has now
  become -hard-float.

  Fixes PR26970.

Pull in r283061 from upstream clang trunk (by Hal Finkel):

  [PowerPC] Enable soft-float for PPC64, and +soft-float -> -hard-float

  Enable soft-float support on PPC64, as the backend now supports it. Also, the
  backend now uses -hard-float instead of +soft-float, so set the target
  features accordingly.

  Fixes PR26970.

Reported by: Mark Millard
PR: 214433

MFC r309212:

Add a few missed clang 3.9.0 files to OptionalObsoleteFiles.

MFC r309262:

Fix packaging for clang, lldb and lld 3.9.0

During the upgrade of clang/llvm etc to 3.9.0 in r309124, the PACKAGE directive in the usr.bin/clang/*.mk files got dropped accidentally.

Restore it, with a few minor changes and additions:
* Correct license in clang.ucl to NCSA
* Add PACKAGE=clang for clang and most of the "ll" tools
* Put lldb in its own package
* Put lld in its own package

Reviewed by: gjb, jmallett
Differential Revision: https://reviews.freebsd.org/D8666

MFC r309656:

During the bootstrap phase, when building the minimal llvm library on PowerPC, add lib/Support/Atomic.cpp. This is needed because upstream llvm revision r271821 disabled the use of std::call_once, which causes some fallback functions from Atomic.cpp to be used instead.

Reported by: Mark Millard
PR: 214902

MFC r309835:

Tentatively apply https://reviews.llvm.org/D18730 to work around gcc PR 70528 (bogus error: constructor required before non-static data member). This should fix buildworld with the external gcc package.

Reported by: https://jenkins.freebsd.org/job/FreeBSD_HEAD_amd64_gcc/

MFC r310194:

Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to 3.9.1 release.

Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11 support to build; see UPDATING for more information.

Release notes for llvm, clang and lld will be available here:
<http://releases.llvm.org/3.9.1/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/clang/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/lld/docs/ReleaseNotes.html>

Relnotes: yes
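
As an illustration of the +soft-float to -hard-float rename described under MFC r309149: a minimal sketch only, assuming the LLVM 3.9 SubtargetFeatures API; the helper name is hypothetical and not part of this commit.

#include "llvm/MC/SubtargetFeature.h"
#include <string>

// Hypothetical helper: build the PowerPC feature string for soft-float code.
static std::string ppcSoftFloatFeatures() {
  llvm::SubtargetFeatures Features;
  // Before r283060/r283061 this would have been AddFeature("soft-float", true).
  Features.AddFeature("hard-float", /*Enable=*/false);
  return Features.getString(); // yields "-hard-float"
}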
Diffstat (limited to 'contrib/llvm/lib/Analysis/MemoryBuiltins.cpp')
-rw-r--r--  contrib/llvm/lib/Analysis/MemoryBuiltins.cpp | 246
1 file changed, 156 insertions(+), 90 deletions(-)
diff --git a/contrib/llvm/lib/Analysis/MemoryBuiltins.cpp b/contrib/llvm/lib/Analysis/MemoryBuiltins.cpp
index 9e896ae..f234776 100644
--- a/contrib/llvm/lib/Analysis/MemoryBuiltins.cpp
+++ b/contrib/llvm/lib/Analysis/MemoryBuiltins.cpp
@@ -42,39 +42,38 @@ enum AllocType : uint8_t {
};
struct AllocFnsTy {
- LibFunc::Func Func;
AllocType AllocTy;
- unsigned char NumParams;
+ unsigned NumParams;
// First and Second size parameters (or -1 if unused)
- signed char FstParam, SndParam;
+ int FstParam, SndParam;
};
// FIXME: certain users need more information. E.g., SimplifyLibCalls needs to
// know which functions are nounwind, noalias, nocapture parameters, etc.
-static const AllocFnsTy AllocationFnData[] = {
- {LibFunc::malloc, MallocLike, 1, 0, -1},
- {LibFunc::valloc, MallocLike, 1, 0, -1},
- {LibFunc::Znwj, OpNewLike, 1, 0, -1}, // new(unsigned int)
- {LibFunc::ZnwjRKSt9nothrow_t, MallocLike, 2, 0, -1}, // new(unsigned int, nothrow)
- {LibFunc::Znwm, OpNewLike, 1, 0, -1}, // new(unsigned long)
- {LibFunc::ZnwmRKSt9nothrow_t, MallocLike, 2, 0, -1}, // new(unsigned long, nothrow)
- {LibFunc::Znaj, OpNewLike, 1, 0, -1}, // new[](unsigned int)
- {LibFunc::ZnajRKSt9nothrow_t, MallocLike, 2, 0, -1}, // new[](unsigned int, nothrow)
- {LibFunc::Znam, OpNewLike, 1, 0, -1}, // new[](unsigned long)
- {LibFunc::ZnamRKSt9nothrow_t, MallocLike, 2, 0, -1}, // new[](unsigned long, nothrow)
- {LibFunc::msvc_new_int, OpNewLike, 1, 0, -1}, // new(unsigned int)
- {LibFunc::msvc_new_int_nothrow, MallocLike, 2, 0, -1}, // new(unsigned int, nothrow)
- {LibFunc::msvc_new_longlong, OpNewLike, 1, 0, -1}, // new(unsigned long long)
- {LibFunc::msvc_new_longlong_nothrow, MallocLike, 2, 0, -1}, // new(unsigned long long, nothrow)
- {LibFunc::msvc_new_array_int, OpNewLike, 1, 0, -1}, // new[](unsigned int)
- {LibFunc::msvc_new_array_int_nothrow, MallocLike, 2, 0, -1}, // new[](unsigned int, nothrow)
- {LibFunc::msvc_new_array_longlong, OpNewLike, 1, 0, -1}, // new[](unsigned long long)
- {LibFunc::msvc_new_array_longlong_nothrow, MallocLike, 2, 0, -1}, // new[](unsigned long long, nothrow)
- {LibFunc::calloc, CallocLike, 2, 0, 1},
- {LibFunc::realloc, ReallocLike, 2, 1, -1},
- {LibFunc::reallocf, ReallocLike, 2, 1, -1},
- {LibFunc::strdup, StrDupLike, 1, -1, -1},
- {LibFunc::strndup, StrDupLike, 2, 1, -1}
+static const std::pair<LibFunc::Func, AllocFnsTy> AllocationFnData[] = {
+ {LibFunc::malloc, {MallocLike, 1, 0, -1}},
+ {LibFunc::valloc, {MallocLike, 1, 0, -1}},
+ {LibFunc::Znwj, {OpNewLike, 1, 0, -1}}, // new(unsigned int)
+ {LibFunc::ZnwjRKSt9nothrow_t, {MallocLike, 2, 0, -1}}, // new(unsigned int, nothrow)
+ {LibFunc::Znwm, {OpNewLike, 1, 0, -1}}, // new(unsigned long)
+ {LibFunc::ZnwmRKSt9nothrow_t, {MallocLike, 2, 0, -1}}, // new(unsigned long, nothrow)
+ {LibFunc::Znaj, {OpNewLike, 1, 0, -1}}, // new[](unsigned int)
+ {LibFunc::ZnajRKSt9nothrow_t, {MallocLike, 2, 0, -1}}, // new[](unsigned int, nothrow)
+ {LibFunc::Znam, {OpNewLike, 1, 0, -1}}, // new[](unsigned long)
+ {LibFunc::ZnamRKSt9nothrow_t, {MallocLike, 2, 0, -1}}, // new[](unsigned long, nothrow)
+ {LibFunc::msvc_new_int, {OpNewLike, 1, 0, -1}}, // new(unsigned int)
+ {LibFunc::msvc_new_int_nothrow, {MallocLike, 2, 0, -1}}, // new(unsigned int, nothrow)
+ {LibFunc::msvc_new_longlong, {OpNewLike, 1, 0, -1}}, // new(unsigned long long)
+ {LibFunc::msvc_new_longlong_nothrow, {MallocLike, 2, 0, -1}}, // new(unsigned long long, nothrow)
+ {LibFunc::msvc_new_array_int, {OpNewLike, 1, 0, -1}}, // new[](unsigned int)
+ {LibFunc::msvc_new_array_int_nothrow, {MallocLike, 2, 0, -1}}, // new[](unsigned int, nothrow)
+ {LibFunc::msvc_new_array_longlong, {OpNewLike, 1, 0, -1}}, // new[](unsigned long long)
+ {LibFunc::msvc_new_array_longlong_nothrow, {MallocLike, 2, 0, -1}}, // new[](unsigned long long, nothrow)
+ {LibFunc::calloc, {CallocLike, 2, 0, 1}},
+ {LibFunc::realloc, {ReallocLike, 2, 1, -1}},
+ {LibFunc::reallocf, {ReallocLike, 2, 1, -1}},
+ {LibFunc::strdup, {StrDupLike, 1, -1, -1}},
+ {LibFunc::strndup, {StrDupLike, 2, 1, -1}}
// TODO: Handle "int posix_memalign(void **, size_t, size_t)"
};
@@ -96,34 +95,57 @@ static Function *getCalledFunction(const Value *V, bool LookThroughBitCast) {
return Callee;
}
-/// \brief Returns the allocation data for the given value if it is a call to a
-/// known allocation function, and NULL otherwise.
-static const AllocFnsTy *getAllocationData(const Value *V, AllocType AllocTy,
- const TargetLibraryInfo *TLI,
- bool LookThroughBitCast = false) {
+/// Returns the allocation data for the given value if it's either a call to a
+/// known allocation function, or a call to a function with the allocsize
+/// attribute.
+static Optional<AllocFnsTy> getAllocationData(const Value *V, AllocType AllocTy,
+ const TargetLibraryInfo *TLI,
+ bool LookThroughBitCast = false) {
// Skip intrinsics
if (isa<IntrinsicInst>(V))
- return nullptr;
+ return None;
- Function *Callee = getCalledFunction(V, LookThroughBitCast);
+ const Function *Callee = getCalledFunction(V, LookThroughBitCast);
if (!Callee)
- return nullptr;
+ return None;
+
+ // If it has allocsize, we can skip checking if it's a known function.
+ //
+ // MallocLike is chosen here because allocsize makes no guarantees about the
+ // nullness of the result of the function, nor does it deal with strings, nor
+ // does it require that the memory returned is zeroed out.
+ LLVM_CONSTEXPR auto AllocSizeAllocTy = MallocLike;
+ if ((AllocTy & AllocSizeAllocTy) == AllocSizeAllocTy &&
+ Callee->hasFnAttribute(Attribute::AllocSize)) {
+ Attribute Attr = Callee->getFnAttribute(Attribute::AllocSize);
+ std::pair<unsigned, Optional<unsigned>> Args = Attr.getAllocSizeArgs();
+
+ AllocFnsTy Result;
+ Result.AllocTy = AllocSizeAllocTy;
+ Result.NumParams = Callee->getNumOperands();
+ Result.FstParam = Args.first;
+ Result.SndParam = Args.second.getValueOr(-1);
+ return Result;
+ }
// Make sure that the function is available.
StringRef FnName = Callee->getName();
LibFunc::Func TLIFn;
if (!TLI || !TLI->getLibFunc(FnName, TLIFn) || !TLI->has(TLIFn))
- return nullptr;
+ return None;
- const AllocFnsTy *FnData =
+ const auto *Iter =
std::find_if(std::begin(AllocationFnData), std::end(AllocationFnData),
- [TLIFn](const AllocFnsTy &Fn) { return Fn.Func == TLIFn; });
+ [TLIFn](const std::pair<LibFunc::Func, AllocFnsTy> &P) {
+ return P.first == TLIFn;
+ });
- if (FnData == std::end(AllocationFnData))
- return nullptr;
+ if (Iter == std::end(AllocationFnData))
+ return None;
+ const AllocFnsTy *FnData = &Iter->second;
if ((FnData->AllocTy & AllocTy) != FnData->AllocTy)
- return nullptr;
+ return None;
// Check function prototype.
int FstParam = FnData->FstParam;
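
As a side note on the allocsize path added above: a minimal standalone sketch of the attribute query it relies on. The helper is hypothetical; the hasFnAttribute/getFnAttribute/getAllocSizeArgs calls are the ones used in the hunk above.

#include "llvm/ADT/Optional.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"
#include <utility>

// Returns (size-arg index, optional element-count-arg index) if F carries allocsize.
static llvm::Optional<std::pair<unsigned, llvm::Optional<unsigned>>>
getAllocSizeIndices(const llvm::Function &F) {
  if (!F.hasFnAttribute(llvm::Attribute::AllocSize))
    return llvm::None;
  return F.getFnAttribute(llvm::Attribute::AllocSize).getAllocSizeArgs();
}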
@@ -138,13 +160,13 @@ static const AllocFnsTy *getAllocationData(const Value *V, AllocType AllocTy,
(SndParam < 0 ||
FTy->getParamType(SndParam)->isIntegerTy(32) ||
FTy->getParamType(SndParam)->isIntegerTy(64)))
- return FnData;
- return nullptr;
+ return *FnData;
+ return None;
}
static bool hasNoAliasAttr(const Value *V, bool LookThroughBitCast) {
ImmutableCallSite CS(LookThroughBitCast ? V->stripPointerCasts() : V);
- return CS && CS.hasFnAttr(Attribute::NoAlias);
+ return CS && CS.paramHasAttr(AttributeSet::ReturnIndex, Attribute::NoAlias);
}
@@ -153,7 +175,7 @@ static bool hasNoAliasAttr(const Value *V, bool LookThroughBitCast) {
/// like).
bool llvm::isAllocationFn(const Value *V, const TargetLibraryInfo *TLI,
bool LookThroughBitCast) {
- return getAllocationData(V, AnyAlloc, TLI, LookThroughBitCast);
+ return getAllocationData(V, AnyAlloc, TLI, LookThroughBitCast).hasValue();
}
/// \brief Tests if a value is a call or invoke to a function that returns a
@@ -170,21 +192,21 @@ bool llvm::isNoAliasFn(const Value *V, const TargetLibraryInfo *TLI,
/// allocates uninitialized memory (such as malloc).
bool llvm::isMallocLikeFn(const Value *V, const TargetLibraryInfo *TLI,
bool LookThroughBitCast) {
- return getAllocationData(V, MallocLike, TLI, LookThroughBitCast);
+ return getAllocationData(V, MallocLike, TLI, LookThroughBitCast).hasValue();
}
/// \brief Tests if a value is a call or invoke to a library function that
/// allocates zero-filled memory (such as calloc).
bool llvm::isCallocLikeFn(const Value *V, const TargetLibraryInfo *TLI,
bool LookThroughBitCast) {
- return getAllocationData(V, CallocLike, TLI, LookThroughBitCast);
+ return getAllocationData(V, CallocLike, TLI, LookThroughBitCast).hasValue();
}
/// \brief Tests if a value is a call or invoke to a library function that
/// allocates memory (either malloc, calloc, or strdup like).
bool llvm::isAllocLikeFn(const Value *V, const TargetLibraryInfo *TLI,
bool LookThroughBitCast) {
- return getAllocationData(V, AllocLike, TLI, LookThroughBitCast);
+ return getAllocationData(V, AllocLike, TLI, LookThroughBitCast).hasValue();
}
/// extractMallocCall - Returns the corresponding CallInst if the instruction
@@ -214,8 +236,7 @@ static Value *computeArraySize(const CallInst *CI, const DataLayout &DL,
// return the multiple. Otherwise, return NULL.
Value *MallocArg = CI->getArgOperand(0);
Value *Multiple = nullptr;
- if (ComputeMultiple(MallocArg, ElementSize, Multiple,
- LookThroughSExt))
+ if (ComputeMultiple(MallocArg, ElementSize, Multiple, LookThroughSExt))
return Multiple;
return nullptr;
@@ -345,29 +366,29 @@ const CallInst *llvm::isFreeCall(const Value *I, const TargetLibraryInfo *TLI) {
//===----------------------------------------------------------------------===//
// Utility functions to compute size of objects.
//
-
+static APInt getSizeWithOverflow(const SizeOffsetType &Data) {
+ if (Data.second.isNegative() || Data.first.ult(Data.second))
+ return APInt(Data.first.getBitWidth(), 0);
+ return Data.first - Data.second;
+}
/// \brief Compute the size of the object pointed by Ptr. Returns true and the
/// object size in Size if successful, and false otherwise.
/// If RoundToAlign is true, then Size is rounded up to the alignment of allocas,
/// byval arguments, and global variables.
bool llvm::getObjectSize(const Value *Ptr, uint64_t &Size, const DataLayout &DL,
- const TargetLibraryInfo *TLI, bool RoundToAlign) {
- ObjectSizeOffsetVisitor Visitor(DL, TLI, Ptr->getContext(), RoundToAlign);
+ const TargetLibraryInfo *TLI, bool RoundToAlign,
+ llvm::ObjSizeMode Mode) {
+ ObjectSizeOffsetVisitor Visitor(DL, TLI, Ptr->getContext(),
+ RoundToAlign, Mode);
SizeOffsetType Data = Visitor.compute(const_cast<Value*>(Ptr));
if (!Visitor.bothKnown(Data))
return false;
- APInt ObjSize = Data.first, Offset = Data.second;
- // check for overflow
- if (Offset.slt(0) || ObjSize.ult(Offset))
- Size = 0;
- else
- Size = (ObjSize - Offset).getZExtValue();
+ Size = getSizeWithOverflow(Data).getZExtValue();
return true;
}
-
STATISTIC(ObjectVisitorArgument,
"Number of arguments with unsolved size and offset");
STATISTIC(ObjectVisitorLoad,
@@ -376,15 +397,16 @@ STATISTIC(ObjectVisitorLoad,
APInt ObjectSizeOffsetVisitor::align(APInt Size, uint64_t Align) {
if (RoundToAlign && Align)
- return APInt(IntTyBits, RoundUpToAlignment(Size.getZExtValue(), Align));
+ return APInt(IntTyBits, alignTo(Size.getZExtValue(), Align));
return Size;
}
ObjectSizeOffsetVisitor::ObjectSizeOffsetVisitor(const DataLayout &DL,
const TargetLibraryInfo *TLI,
LLVMContext &Context,
- bool RoundToAlign)
- : DL(DL), TLI(TLI), RoundToAlign(RoundToAlign) {
+ bool RoundToAlign,
+ ObjSizeMode Mode)
+ : DL(DL), TLI(TLI), RoundToAlign(RoundToAlign), Mode(Mode) {
// Pointer size must be rechecked for each object visited since it could have
// a different address space.
}
@@ -443,7 +465,7 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitAllocaInst(AllocaInst &I) {
}
SizeOffsetType ObjectSizeOffsetVisitor::visitArgument(Argument &A) {
- // no interprocedural analysis is done at the moment
+ // No interprocedural analysis is done at the moment.
if (!A.hasByValOrInAllocaAttr()) {
++ObjectVisitorArgument;
return unknown();
@@ -454,20 +476,21 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitArgument(Argument &A) {
}
SizeOffsetType ObjectSizeOffsetVisitor::visitCallSite(CallSite CS) {
- const AllocFnsTy *FnData = getAllocationData(CS.getInstruction(), AnyAlloc,
- TLI);
+ Optional<AllocFnsTy> FnData =
+ getAllocationData(CS.getInstruction(), AnyAlloc, TLI);
if (!FnData)
return unknown();
- // handle strdup-like functions separately
+ // Handle strdup-like functions separately.
if (FnData->AllocTy == StrDupLike) {
APInt Size(IntTyBits, GetStringLength(CS.getArgument(0)));
if (!Size)
return unknown();
- // strndup limits strlen
+ // Strndup limits strlen.
if (FnData->FstParam > 0) {
- ConstantInt *Arg= dyn_cast<ConstantInt>(CS.getArgument(FnData->FstParam));
+ ConstantInt *Arg =
+ dyn_cast<ConstantInt>(CS.getArgument(FnData->FstParam));
if (!Arg)
return unknown();
@@ -482,8 +505,26 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitCallSite(CallSite CS) {
if (!Arg)
return unknown();
- APInt Size = Arg->getValue().zextOrSelf(IntTyBits);
- // size determined by just 1 parameter
+ // When we're compiling N-bit code, and the user uses parameters that are
+ // greater than N bits (e.g. uint64_t on a 32-bit build), we can run into
+ // trouble with APInt size issues. This function handles resizing + overflow
+ // checks for us.
+ auto CheckedZextOrTrunc = [&](APInt &I) {
+ // More bits than we can handle. Checking the bit width isn't necessary, but
+ // it's faster than checking active bits, and should give `false` in the
+ // vast majority of cases.
+ if (I.getBitWidth() > IntTyBits && I.getActiveBits() > IntTyBits)
+ return false;
+ if (I.getBitWidth() != IntTyBits)
+ I = I.zextOrTrunc(IntTyBits);
+ return true;
+ };
+
+ APInt Size = Arg->getValue();
+ if (!CheckedZextOrTrunc(Size))
+ return unknown();
+
+ // Size is determined by just 1 parameter.
if (FnData->SndParam < 0)
return std::make_pair(Size, Zero);
@@ -491,8 +532,13 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitCallSite(CallSite CS) {
if (!Arg)
return unknown();
- Size *= Arg->getValue().zextOrSelf(IntTyBits);
- return std::make_pair(Size, Zero);
+ APInt NumElems = Arg->getValue();
+ if (!CheckedZextOrTrunc(NumElems))
+ return unknown();
+
+ bool Overflow;
+ Size = Size.umul_ov(NumElems, Overflow);
+ return Overflow ? unknown() : std::make_pair(Size, Zero);
// TODO: handle more standard functions (+ wchar cousins):
// - strdup / strndup
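
For reference, a self-contained sketch of the overflow-aware size arithmetic used in the two hunks above (hypothetical helper; IntTyBits stands for the pointer index width, as in the visitor):

#include "llvm/ADT/APInt.h"
#include "llvm/ADT/Optional.h"

static llvm::Optional<llvm::APInt>
checkedTotalSize(llvm::APInt ElemSize, llvm::APInt NumElems, unsigned IntTyBits) {
  // Reject values that cannot be represented in IntTyBits, otherwise resize.
  auto Fit = [&](llvm::APInt &I) {
    if (I.getBitWidth() > IntTyBits && I.getActiveBits() > IntTyBits)
      return false;
    if (I.getBitWidth() != IntTyBits)
      I = I.zextOrTrunc(IntTyBits);
    return true;
  };
  if (!Fit(ElemSize) || !Fit(NumElems))
    return llvm::None;
  bool Overflow = false;
  llvm::APInt Total = ElemSize.umul_ov(NumElems, Overflow);
  if (Overflow)
    return llvm::None; // size * count wrapped around; treat as unknown
  return Total;
}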
@@ -529,7 +575,7 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitGEPOperator(GEPOperator &GEP) {
}
SizeOffsetType ObjectSizeOffsetVisitor::visitGlobalAlias(GlobalAlias &GA) {
- if (GA.mayBeOverridden())
+ if (GA.isInterposable())
return unknown();
return compute(GA.getAliasee());
}
@@ -560,8 +606,28 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitPHINode(PHINode&) {
SizeOffsetType ObjectSizeOffsetVisitor::visitSelectInst(SelectInst &I) {
SizeOffsetType TrueSide = compute(I.getTrueValue());
SizeOffsetType FalseSide = compute(I.getFalseValue());
- if (bothKnown(TrueSide) && bothKnown(FalseSide) && TrueSide == FalseSide)
- return TrueSide;
+ if (bothKnown(TrueSide) && bothKnown(FalseSide)) {
+ if (TrueSide == FalseSide) {
+ return TrueSide;
+ }
+
+ APInt TrueResult = getSizeWithOverflow(TrueSide);
+ APInt FalseResult = getSizeWithOverflow(FalseSide);
+
+ if (TrueResult == FalseResult) {
+ return TrueSide;
+ }
+ if (Mode == ObjSizeMode::Min) {
+ if (TrueResult.slt(FalseResult))
+ return TrueSide;
+ return FalseSide;
+ }
+ if (Mode == ObjSizeMode::Max) {
+ if (TrueResult.sgt(FalseResult))
+ return TrueSide;
+ return FalseSide;
+ }
+ }
return unknown();
}
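
A minimal caller sketch for the new Mode parameter (hypothetical helper; the getObjectSize signature is the one changed earlier in this diff, and Ptr/DL/TLI are assumed to come from the surrounding analysis):

#include "llvm/Analysis/MemoryBuiltins.h"
#include <cstdint>

// Ask for the smaller candidate size when the pointed-to object is chosen at
// run time, e.g. by a select between two allocations.
static bool getMinObjectSize(const llvm::Value *Ptr, const llvm::DataLayout &DL,
                             const llvm::TargetLibraryInfo *TLI, uint64_t &Size) {
  return llvm::getObjectSize(Ptr, Size, DL, TLI, /*RoundToAlign=*/false,
                             llvm::ObjSizeMode::Min);
}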
@@ -591,11 +657,11 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::compute(Value *V) {
SizeOffsetEvalType Result = compute_(V);
if (!bothKnown(Result)) {
- // erase everything that was computed in this iteration from the cache, so
+ // Erase everything that was computed in this iteration from the cache, so
// that no dangling references are left behind. We could be a bit smarter if
// we kept a dependency graph. It's probably not worth the complexity.
- for (PtrSetTy::iterator I=SeenVals.begin(), E=SeenVals.end(); I != E; ++I) {
- CacheMapTy::iterator CacheIt = CacheMap.find(*I);
+ for (const Value *SeenVal : SeenVals) {
+ CacheMapTy::iterator CacheIt = CacheMap.find(SeenVal);
// non-computable results can be safely cached
if (CacheIt != CacheMap.end() && anyKnown(CacheIt->second))
CacheMap.erase(CacheIt);
@@ -615,18 +681,18 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::compute_(Value *V) {
V = V->stripPointerCasts();
- // check cache
+ // Check cache.
CacheMapTy::iterator CacheIt = CacheMap.find(V);
if (CacheIt != CacheMap.end())
return CacheIt->second;
- // always generate code immediately before the instruction being
- // processed, so that the generated code dominates the same BBs
+ // Always generate code immediately before the instruction being
+ // processed, so that the generated code dominates the same BBs.
BuilderTy::InsertPointGuard Guard(Builder);
if (Instruction *I = dyn_cast<Instruction>(V))
Builder.SetInsertPoint(I);
- // now compute the size and offset
+ // Now compute the size and offset.
SizeOffsetEvalType Result;
// Record the pointers that were handled in this run, so that they can be
@@ -643,7 +709,7 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::compute_(Value *V) {
cast<ConstantExpr>(V)->getOpcode() == Instruction::IntToPtr) ||
isa<GlobalAlias>(V) ||
isa<GlobalVariable>(V)) {
- // ignore values where we cannot do more than what ObjectSizeVisitor can
+ // Ignore values where we cannot do more than ObjectSizeVisitor.
Result = unknown();
} else {
DEBUG(dbgs() << "ObjectSizeOffsetEvaluator::compute() unhandled value: "
@@ -670,12 +736,12 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitAllocaInst(AllocaInst &I) {
}
SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitCallSite(CallSite CS) {
- const AllocFnsTy *FnData = getAllocationData(CS.getInstruction(), AnyAlloc,
- TLI);
+ Optional<AllocFnsTy> FnData =
+ getAllocationData(CS.getInstruction(), AnyAlloc, TLI);
if (!FnData)
return unknown();
- // handle strdup-like functions separately
+ // Handle strdup-like functions separately.
if (FnData->AllocTy == StrDupLike) {
// TODO
return unknown();
@@ -731,14 +797,14 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitLoadInst(LoadInst&) {
}
SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitPHINode(PHINode &PHI) {
- // create 2 PHIs: one for size and another for offset
+ // Create 2 PHIs: one for size and another for offset.
PHINode *SizePHI = Builder.CreatePHI(IntTy, PHI.getNumIncomingValues());
PHINode *OffsetPHI = Builder.CreatePHI(IntTy, PHI.getNumIncomingValues());
- // insert right away in the cache to handle recursive PHIs
+ // Insert right away in the cache to handle recursive PHIs.
CacheMap[&PHI] = std::make_pair(SizePHI, OffsetPHI);
- // compute offset/size for each PHI incoming pointer
+ // Compute offset/size for each PHI incoming pointer.
for (unsigned i = 0, e = PHI.getNumIncomingValues(); i != e; ++i) {
Builder.SetInsertPoint(&*PHI.getIncomingBlock(i)->getFirstInsertionPt());
SizeOffsetEvalType EdgeData = compute_(PHI.getIncomingValue(i));