author      dim <dim@FreeBSD.org>   2016-12-26 20:36:37 +0000
committer   dim <dim@FreeBSD.org>   2016-12-26 20:36:37 +0000
commit      06210ae42d418d50d8d9365d5c9419308ae9e7ee (patch)
tree        ab60b4cdd6e430dda1f292a46a77ddb744723f31 /contrib/llvm/lib/CodeGen/StackProtector.cpp
parent      2dd166267f53df1c3748b4325d294b9b839de74b (diff)
download    FreeBSD-src-06210ae42d418d50d8d9365d5c9419308ae9e7ee.zip
            FreeBSD-src-06210ae42d418d50d8d9365d5c9419308ae9e7ee.tar.gz
MFC r309124:
Upgrade our copies of clang, llvm, lldb, compiler-rt and libc++ to 3.9.0
release, and add lld 3.9.0. Also completely revamp the build system for
clang, llvm, lldb and their related tools.
Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.
Release notes for llvm, clang and lld are available here:
<http://llvm.org/releases/3.9.0/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/clang/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/lld/docs/ReleaseNotes.html>
Thanks to Ed Maste, Bryan Drewery, Andrew Turner, Antoine Brodin and Jan
Beich for their help.
Relnotes: yes
MFC r309147:
Pull in r282174 from upstream llvm trunk (by Krzysztof Parzyszek):
[PPC] Set SP after loading data from stack frame, if no red zone is
present
Follow-up to r280705: Make sure that the SP is only restored after
all data is loaded from the stack frame, if there is no red zone.
This completes the fix for
https://llvm.org/bugs/show_bug.cgi?id=26519.
Differential Revision: https://reviews.llvm.org/D24466
Reported by: Mark Millard
PR: 214433
MFC r309149:
Pull in r283060 from upstream llvm trunk (by Hal Finkel):
[PowerPC] Refactor soft-float support, and enable PPC64 soft float
This change enables soft-float for PowerPC64, and also makes
soft-float disable all vector instruction sets for both 32-bit and
64-bit modes. This latter part is necessary because the PPC backend
canonicalizes many Altivec vector types to floating-point types, and
so soft-float breaks scalarization support for many operations. Both
for embedded targets and for operating-system kernels desiring
soft-float support, it seems reasonable that disabling hardware
floating-point also disables vector instructions (embedded targets
without hardware floating point support are unlikely to have Altivec,
etc. and operating system kernels desiring not to use floating-point
registers to lower syscall cost are unlikely to want to use vector
registers either). If someone needs this to work, we'll need to
change the fact that we promote many Altivec operations to act on
v4f32. To make it possible to disable Altivec when soft-float is
enabled, hardware floating-point support needs to be expressed as a
positive feature, like the others, and not a negative feature,
because target features cannot have dependencies on the disabling of
some other feature. So +soft-float has now become -hard-float.
Fixes PR26970.
Pull in r283061 from upstream clang trunk (by Hal Finkel):
[PowerPC] Enable soft-float for PPC64, and +soft-float -> -hard-float
Enable soft-float support on PPC64, as the backend now supports it.
Also, the backend now uses -hard-float instead of +soft-float, so set
the target features accordingly.
Fixes PR26970.
Reported by: Mark Millard
PR: 214433
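To illustrate the target-feature change in practice, here is a hypothetical sketch only — buildPPCFeatures and SoftFloat are invented names, and only the "+soft-float" / "-hard-float" spellings come from the notes above:

    // Hypothetical sketch: how a frontend-style feature list changes under the
    // new scheme. Only the feature spellings are from the commit notes above.
    #include <string>
    #include <vector>

    static std::vector<std::string> buildPPCFeatures(bool SoftFloat) {
      std::vector<std::string> Features;
      if (SoftFloat) {
        // Old scheme: soft float was requested as a positive feature:
        //   Features.push_back("+soft-float");
        // New scheme: hardware FP is the positive feature, so disable it.
        Features.push_back("-hard-float");
      }
      return Features;
    }

Expressing hardware FP as the positive feature is what lets "disable hard-float" also pull vector instruction sets along with it, since a feature can only depend on enabling, not disabling, another feature.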
MFC r309212:
Add a few missed clang 3.9.0 files to OptionalObsoleteFiles.
MFC r309262:
Fix packaging for clang, lldb and lld 3.9.0
During the upgrade of clang/llvm etc to 3.9.0 in r309124, the PACKAGE
directive in the usr.bin/clang/*.mk files got dropped accidentally.
Restore it, with a few minor changes and additions:
* Correct license in clang.ucl to NCSA
* Add PACKAGE=clang for clang and most of the "ll" tools
* Put lldb in its own package
* Put lld in its own package
Reviewed by: gjb, jmallett
Differential Revision: https://reviews.freebsd.org/D8666
MFC r309656:
During the bootstrap phase, when building the minimal llvm library on
PowerPC, add lib/Support/Atomic.cpp. This is needed because upstream
llvm revision r271821 disabled the use of std::call_once, which causes
some fallback functions from Atomic.cpp to be used instead.
Reported by: Mark Millard
PR: 214902
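For readers wondering what such a fallback looks like, this is a minimal sketch of the general pattern only — it is not the code in lib/Support/Atomic.cpp, and USE_STD_CALL_ONCE is an invented stand-in for whatever upstream uses to gate std::call_once:

    // Sketch of the general "call once" pattern: use std::call_once when the
    // toolchain supports it, otherwise fall back to an atomic state machine.
    #include <atomic>
    #include <mutex>

    #if defined(USE_STD_CALL_ONCE)
    static std::once_flag InitFlag;
    void initOnce(void (*Init)()) { std::call_once(InitFlag, Init); }
    #else
    static std::atomic<int> InitState{0}; // 0 = idle, 1 = running, 2 = done
    void initOnce(void (*Init)()) {
      int Expected = 0;
      if (InitState.compare_exchange_strong(Expected, 1)) {
        Init();
        InitState.store(2);
      } else {
        while (InitState.load() != 2) {
          // Busy-wait until the initializing thread finishes.
        }
      }
    }
    #endif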
MFC r309835:
Tentatively apply https://reviews.llvm.org/D18730 to work around gcc PR
70528 (bogus error: constructor required before non-static data member).
This should fix buildworld with the external gcc package.
Reported by: https://jenkins.freebsd.org/job/FreeBSD_HEAD_amd64_gcc/
MFC r310194:
Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
3.9.1 release.
Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.
Release notes for llvm, clang and lld will be available here:
<http://releases.llvm.org/3.9.1/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/clang/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/lld/docs/ReleaseNotes.html>
Relnotes: yes
Diffstat (limited to 'contrib/llvm/lib/CodeGen/StackProtector.cpp')
-rw-r--r--   contrib/llvm/lib/CodeGen/StackProtector.cpp   204
1 files changed, 91 insertions, 113 deletions
diff --git a/contrib/llvm/lib/CodeGen/StackProtector.cpp b/contrib/llvm/lib/CodeGen/StackProtector.cpp
index db3fef5..89868e4 100644
--- a/contrib/llvm/lib/CodeGen/StackProtector.cpp
+++ b/contrib/llvm/lib/CodeGen/StackProtector.cpp
@@ -18,12 +18,13 @@
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/Statistic.h"
 #include "llvm/Analysis/BranchProbabilityInfo.h"
+#include "llvm/Analysis/EHPersonalities.h"
 #include "llvm/Analysis/ValueTracking.h"
-#include "llvm/CodeGen/Analysis.h"
 #include "llvm/CodeGen/Passes.h"
 #include "llvm/IR/Attributes.h"
 #include "llvm/IR/Constants.h"
 #include "llvm/IR/DataLayout.h"
+#include "llvm/IR/DebugInfo.h"
 #include "llvm/IR/DerivedTypes.h"
 #include "llvm/IR/Function.h"
 #include "llvm/IR/GlobalValue.h"
@@ -89,15 +90,25 @@ bool StackProtector::runOnFunction(Function &Fn) {
       getAnalysisIfAvailable<DominatorTreeWrapperPass>();
   DT = DTWP ? &DTWP->getDomTree() : nullptr;
   TLI = TM->getSubtargetImpl(Fn)->getTargetLowering();
+  HasPrologue = false;
+  HasIRCheck = false;
 
   Attribute Attr = Fn.getFnAttribute("stack-protector-buffer-size");
   if (Attr.isStringAttribute() &&
       Attr.getValueAsString().getAsInteger(10, SSPBufferSize))
-      return false; // Invalid integer string
+    return false; // Invalid integer string
 
   if (!RequiresStackProtector())
     return false;
 
+  // TODO(etienneb): Functions with funclets are not correctly supported now.
+  // Do nothing if this is funclet-based personality.
+  if (Fn.hasPersonalityFn()) {
+    EHPersonality Personality = classifyEHPersonality(Fn.getPersonalityFn());
+    if (isFuncletEHPersonality(Personality))
+      return false;
+  }
+
   ++NumFunProtected;
   return InsertStackProtectors();
 }
@@ -200,11 +211,24 @@ bool StackProtector::HasAddressTaken(const Instruction *AI) {
 bool StackProtector::RequiresStackProtector() {
   bool Strong = false;
   bool NeedsProtector = false;
+  for (const BasicBlock &BB : *F)
+    for (const Instruction &I : BB)
+      if (const CallInst *CI = dyn_cast<CallInst>(&I))
+        if (CI->getCalledFunction() ==
+            Intrinsic::getDeclaration(F->getParent(),
+                                      Intrinsic::stackprotector))
+          HasPrologue = true;
+
+  if (F->hasFnAttribute(Attribute::SafeStack))
+    return false;
+
   if (F->hasFnAttribute(Attribute::StackProtectReq)) {
     NeedsProtector = true;
     Strong = true; // Use the same heuristic as strong to determine SSPLayout
   } else if (F->hasFnAttribute(Attribute::StackProtectStrong))
     Strong = true;
+  else if (HasPrologue)
+    NeedsProtector = true;
   else if (!F->hasFnAttribute(Attribute::StackProtect))
     return false;
 
@@ -256,106 +280,51 @@ bool StackProtector::RequiresStackProtector() {
   return NeedsProtector;
 }
 
-static bool InstructionWillNotHaveChain(const Instruction *I) {
-  return !I->mayHaveSideEffects() && !I->mayReadFromMemory() &&
-         isSafeToSpeculativelyExecute(I);
-}
-
-/// Identify if RI has a previous instruction in the "Tail Position" and return
-/// it. Otherwise return 0.
-///
-/// This is based off of the code in llvm::isInTailCallPosition. The difference
-/// is that it inverts the first part of llvm::isInTailCallPosition since
-/// isInTailCallPosition is checking if a call is in a tail call position, and
-/// we are searching for an unknown tail call that might be in the tail call
-/// position. Once we find the call though, the code uses the same refactored
-/// code, returnTypeIsEligibleForTailCall.
-static CallInst *FindPotentialTailCall(BasicBlock *BB, ReturnInst *RI,
-                                       const TargetLoweringBase *TLI) {
-  // Establish a reasonable upper bound on the maximum amount of instructions we
-  // will look through to find a tail call.
-  unsigned SearchCounter = 0;
-  const unsigned MaxSearch = 4;
-  bool NoInterposingChain = true;
-
-  for (BasicBlock::reverse_iterator I = std::next(BB->rbegin()), E = BB->rend();
-       I != E && SearchCounter < MaxSearch; ++I) {
-    Instruction *Inst = &*I;
-
-    // Skip over debug intrinsics and do not allow them to affect our MaxSearch
-    // counter.
-    if (isa<DbgInfoIntrinsic>(Inst))
-      continue;
-
-    // If we find a call and the following conditions are satisifed, then we
-    // have found a tail call that satisfies at least the target independent
-    // requirements of a tail call:
-    //
-    // 1. The call site has the tail marker.
-    //
-    // 2. The call site either will not cause the creation of a chain or if a
-    // chain is necessary there are no instructions in between the callsite and
-    // the call which would create an interposing chain.
-    //
-    // 3. The return type of the function does not impede tail call
-    // optimization.
-    if (CallInst *CI = dyn_cast<CallInst>(Inst)) {
-      if (CI->isTailCall() &&
-          (InstructionWillNotHaveChain(CI) || NoInterposingChain) &&
-          returnTypeIsEligibleForTailCall(BB->getParent(), CI, RI, *TLI))
-        return CI;
-    }
-
-    // If we did not find a call see if we have an instruction that may create
-    // an interposing chain.
-    NoInterposingChain =
-        NoInterposingChain && InstructionWillNotHaveChain(Inst);
-
-    // Increment max search.
-    SearchCounter++;
-  }
-
-  return nullptr;
+/// Create a stack guard loading and populate whether SelectionDAG SSP is
+/// supported.
+static Value *getStackGuard(const TargetLoweringBase *TLI, Module *M,
+                            IRBuilder<> &B,
+                            bool *SupportsSelectionDAGSP = nullptr) {
+  if (Value *Guard = TLI->getIRStackGuard(B))
+    return B.CreateLoad(Guard, true, "StackGuard");
+
+  // Use SelectionDAG SSP handling, since there isn't an IR guard.
+  //
+  // This is more or less weird, since we optionally output whether we
+  // should perform a SelectionDAG SP here. The reason is that it's strictly
+  // defined as !TLI->getIRStackGuard(B), where getIRStackGuard is also
+  // mutating. There is no way to get this bit without mutating the IR, so
+  // getting this bit has to happen in this right time.
+  //
+  // We could have define a new function TLI::supportsSelectionDAGSP(), but that
+  // will put more burden on the backends' overriding work, especially when it
+  // actually conveys the same information getIRStackGuard() already gives.
+  if (SupportsSelectionDAGSP)
+    *SupportsSelectionDAGSP = true;
+  TLI->insertSSPDeclarations(*M);
+  return B.CreateCall(Intrinsic::getDeclaration(M, Intrinsic::stackguard));
 }
 
-/// Insert code into the entry block that stores the __stack_chk_guard
+/// Insert code into the entry block that stores the stack guard
 /// variable onto the stack:
 ///
 ///   entry:
 ///     StackGuardSlot = alloca i8*
-///     StackGuard = load __stack_chk_guard
-///     call void @llvm.stackprotect.create(StackGuard, StackGuardSlot)
+///     StackGuard = <stack guard>
+///     call void @llvm.stackprotector(StackGuard, StackGuardSlot)
 ///
 /// Returns true if the platform/triple supports the stackprotectorcreate pseudo
 /// node.
 static bool CreatePrologue(Function *F, Module *M, ReturnInst *RI,
-                           const TargetLoweringBase *TLI, const Triple &TT,
-                           AllocaInst *&AI, Value *&StackGuardVar) {
+                           const TargetLoweringBase *TLI, AllocaInst *&AI) {
   bool SupportsSelectionDAGSP = false;
-  PointerType *PtrTy = Type::getInt8PtrTy(RI->getContext());
-  unsigned AddressSpace, Offset;
-  if (TLI->getStackCookieLocation(AddressSpace, Offset)) {
-    Constant *OffsetVal =
-        ConstantInt::get(Type::getInt32Ty(RI->getContext()), Offset);
-
-    StackGuardVar =
-        ConstantExpr::getIntToPtr(OffsetVal, PointerType::get(PtrTy,
-                                                              AddressSpace));
-  } else if (TT.isOSOpenBSD()) {
-    StackGuardVar = M->getOrInsertGlobal("__guard_local", PtrTy);
-    cast<GlobalValue>(StackGuardVar)
-        ->setVisibility(GlobalValue::HiddenVisibility);
-  } else {
-    SupportsSelectionDAGSP = true;
-    StackGuardVar = M->getOrInsertGlobal("__stack_chk_guard", PtrTy);
-  }
-
   IRBuilder<> B(&F->getEntryBlock().front());
+  PointerType *PtrTy = Type::getInt8PtrTy(RI->getContext());
   AI = B.CreateAlloca(PtrTy, nullptr, "StackGuardSlot");
 
-  LoadInst *LI = B.CreateLoad(StackGuardVar, "StackGuard");
-  B.CreateCall(Intrinsic::getDeclaration(M, Intrinsic::stackprotector),
-               {LI, AI});
+  Value *GuardSlot = getStackGuard(TLI, M, B, &SupportsSelectionDAGSP);
+  B.CreateCall(Intrinsic::getDeclaration(M, Intrinsic::stackprotector),
+               {GuardSlot, AI});
   return SupportsSelectionDAGSP;
 }
 
@@ -366,11 +335,9 @@ static bool CreatePrologue(Function *F, Module *M, ReturnInst *RI,
 /// - The epilogue checks the value stored in the prologue against the original
 /// value. It calls __stack_chk_fail if they differ.
 bool StackProtector::InsertStackProtectors() {
-  bool HasPrologue = false;
   bool SupportsSelectionDAGSP =
       EnableSelectionDAGSP && !TM->Options.EnableFastISel;
   AllocaInst *AI = nullptr;       // Place on stack that stores the stack guard.
-  Value *StackGuardVar = nullptr; // The stack guard variable.
 
   for (Function::iterator I = F->begin(), E = F->end(); I != E;) {
     BasicBlock *BB = &*I++;
@@ -378,30 +345,36 @@ bool StackProtector::InsertStackProtectors() {
     if (!RI)
       continue;
 
+    // Generate prologue instrumentation if not already generated.
     if (!HasPrologue) {
       HasPrologue = true;
-      SupportsSelectionDAGSP &=
-          CreatePrologue(F, M, RI, TLI, Trip, AI, StackGuardVar);
+      SupportsSelectionDAGSP &= CreatePrologue(F, M, RI, TLI, AI);
     }
 
-    if (SupportsSelectionDAGSP) {
-      // Since we have a potential tail call, insert the special stack check
-      // intrinsic.
-      Instruction *InsertionPt = nullptr;
-      if (CallInst *CI = FindPotentialTailCall(BB, RI, TLI)) {
-        InsertionPt = CI;
-      } else {
-        InsertionPt = RI;
-        // At this point we know that BB has a return statement so it *DOES*
-        // have a terminator.
-        assert(InsertionPt != nullptr &&
-               "BB must have a terminator instruction at this point.");
-      }
-
-      Function *Intrinsic =
-          Intrinsic::getDeclaration(M, Intrinsic::stackprotectorcheck);
-      CallInst::Create(Intrinsic, StackGuardVar, "", InsertionPt);
+    // SelectionDAG based code generation. Nothing else needs to be done here.
+    // The epilogue instrumentation is postponed to SelectionDAG.
+    if (SupportsSelectionDAGSP)
+      break;
+
+    // Set HasIRCheck to true, so that SelectionDAG will not generate its own
+    // version. SelectionDAG called 'shouldEmitSDCheck' to check whether
+    // instrumentation has already been generated.
+    HasIRCheck = true;
+
+    // Generate epilogue instrumentation. The epilogue intrumentation can be
+    // function-based or inlined depending on which mechanism the target is
+    // providing.
+    if (Value* GuardCheck = TLI->getSSPStackGuardCheck(*M)) {
+      // Generate the function-based epilogue instrumentation.
+      // The target provides a guard check function, generate a call to it.
+      IRBuilder<> B(RI);
+      LoadInst *Guard = B.CreateLoad(AI, true, "Guard");
+      CallInst *Call = B.CreateCall(GuardCheck, {Guard});
+      llvm::Function *Function = cast<llvm::Function>(GuardCheck);
+      Call->setAttributes(Function->getAttributes());
+      Call->setCallingConv(Function->getCallingConv());
     } else {
+      // Generate the epilogue with inline instrumentation.
       // If we do not support SelectionDAG based tail calls, generate IR level
       // tail calls.
       //
@@ -415,7 +388,7 @@ bool StackProtector::InsertStackProtectors() {
       //
       //   return:
       //     ...
-      //     %1 = load __stack_chk_guard
+      //     %1 = <stack guard>
      //     %2 = load StackGuardSlot
       //     %3 = cmp i1 %1, %2
       //     br i1 %3, label %SP_return, label %CallStackCheckFailBlk
@@ -450,9 +423,9 @@ bool StackProtector::InsertStackProtectors() {
 
       // Generate the stack protector instructions in the old basic block.
       IRBuilder<> B(BB);
-      LoadInst *LI1 = B.CreateLoad(StackGuardVar);
-      LoadInst *LI2 = B.CreateLoad(AI);
-      Value *Cmp = B.CreateICmpEQ(LI1, LI2);
+      Value *Guard = getStackGuard(TLI, M, B);
+      LoadInst *LI2 = B.CreateLoad(AI, true);
+      Value *Cmp = B.CreateICmpEQ(Guard, LI2);
       auto SuccessProb =
           BranchProbabilityInfo::getBranchProbStackProtector(true);
       auto FailureProb =
@@ -475,6 +448,7 @@ BasicBlock *StackProtector::CreateFailBB() {
   LLVMContext &Context = F->getContext();
   BasicBlock *FailBB = BasicBlock::Create(Context, "CallStackCheckFailBlk", F);
   IRBuilder<> B(FailBB);
+  B.SetCurrentDebugLocation(DebugLoc::get(0, 0, F->getSubprogram()));
   if (Trip.isOSOpenBSD()) {
     Constant *StackChkFail =
         M->getOrInsertFunction("__stack_smash_handler",
@@ -491,3 +465,7 @@ BasicBlock *StackProtector::CreateFailBB() {
   B.CreateUnreachable();
   return FailBB;
 }
+
+bool StackProtector::shouldEmitSDCheck(const BasicBlock &BB) const {
+  return HasPrologue && !HasIRCheck && dyn_cast<ReturnInst>(BB.getTerminator());
+}
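A note on the new hooks exercised above: getStackGuard() first asks the target for an IR-level guard via TLI->getIRStackGuard(B) and only falls back to the llvm.stackguard intrinsic (and SelectionDAG handling) when that returns null. The following is a minimal sketch of the kind of value such a hook can hand back — exampleIRStackGuard is an invented helper, not LLVM's implementation:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    // Invented helper shaped like the TLI->getIRStackGuard(B) call in the diff:
    // expose the guard as an IR-level global (here the conventional
    // __stack_chk_guard symbol) so StackProtector can emit the guard load and
    // the epilogue compare itself instead of deferring to SelectionDAG.
    static Value *exampleIRStackGuard(IRBuilder<> &B) {
      Module *M = B.GetInsertBlock()->getModule();
      return M->getOrInsertGlobal("__stack_chk_guard",
                                  Type::getInt8PtrTy(M->getContext()));
    }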