author | dim <dim@FreeBSD.org> | 2016-12-26 20:36:37 +0000
committer | dim <dim@FreeBSD.org> | 2016-12-26 20:36:37 +0000
commit | 06210ae42d418d50d8d9365d5c9419308ae9e7ee (patch)
tree | ab60b4cdd6e430dda1f292a46a77ddb744723f31 /contrib/llvm/lib/CodeGen/SafeStack.cpp
parent | 2dd166267f53df1c3748b4325d294b9b839de74b (diff)
MFC r309124:
Upgrade our copies of clang, llvm, lldb, compiler-rt and libc++ to 3.9.0
release, and add lld 3.9.0. Also completely revamp the build system for
clang, llvm, lldb and their related tools.
Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.
Release notes for llvm, clang and lld are available here:
<http://llvm.org/releases/3.9.0/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/clang/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/lld/docs/ReleaseNotes.html>
Thanks to Ed Maste, Bryan Drewery, Andrew Turner, Antoine Brodin and Jan
Beich for their help.
Relnotes: yes
MFC r309147:
Pull in r282174 from upstream llvm trunk (by Krzysztof Parzyszek):
[PPC] Set SP after loading data from stack frame, if no red zone is
present
Follow-up to r280705: Make sure that the SP is only restored after
all data is loaded from the stack frame, if there is no red zone.
This completes the fix for
https://llvm.org/bugs/show_bug.cgi?id=26519.
Differential Revision: https://reviews.llvm.org/D24466
Reported by: Mark Millard
PR: 214433
MFC r309149:
Pull in r283060 from upstream llvm trunk (by Hal Finkel):
[PowerPC] Refactor soft-float support, and enable PPC64 soft float
This change enables soft-float for PowerPC64, and also makes
soft-float disable all vector instruction sets for both 32-bit and
64-bit modes. This latter part is necessary because the PPC backend
canonicalizes many Altivec vector types to floating-point types, and
so soft-float breaks scalarization support for many operations. Both
for embedded targets and for operating-system kernels desiring
soft-float support, it seems reasonable that disabling hardware
floating-point also disables vector instructions (embedded targets
without hardware floating point support are unlikely to have Altivec,
etc. and operating system kernels desiring not to use floating-point
registers to lower syscall cost are unlikely to want to use vector
registers either). If someone needs this to work, we'll need to
change the fact that we promote many Altivec operations to act on
v4f32. To make it possible to disable Altivec when soft-float is
enabled, hardware floating-point support needs to be expressed as a
positive feature, like the others, and not a negative feature,
because target features cannot have dependencies on the disabling of
some other feature. So +soft-float has now become -hard-float.
Fixes PR26970.
Pull in r283061 from upstream clang trunk (by Hal Finkel):
[PowerPC] Enable soft-float for PPC64, and +soft-float -> -hard-float
Enable soft-float support on PPC64, as the backend now supports it.
Also, the backend now uses -hard-float instead of +soft-float, so set
the target features accordingly.
Fixes PR26970.
Reported by: Mark Millard
PR: 214433
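
[Editor's illustration, not part of the commit message: the user-visible effect of soft-float is that floating-point arithmetic is lowered to runtime helper calls instead of FPU instructions, and with this change no Altivec/VSX instructions are emitted either. A minimal sketch, assuming a FreeBSD/ppc64 target triple:]

```cpp
// Illustrative only. Compiled with something like
//   clang++ -target powerpc64-unknown-freebsd -msoft-float -S add.cpp
// the addition below becomes a call to the compiler-rt/libgcc helper
// __addsf3 instead of an FPU 'fadds', and no floating-point or
// Altivec registers are used anywhere in the function.
extern "C" float add_floats(float a, float b) {
  return a + b;
}
```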
MFC r309212:
Add a few missed clang 3.9.0 files to OptionalObsoleteFiles.
MFC r309262:
Fix packaging for clang, lldb and lld 3.9.0
During the upgrade of clang/llvm etc to 3.9.0 in r309124, the PACKAGE
directive in the usr.bin/clang/*.mk files got dropped accidentally.
Restore it, with a few minor changes and additions:
* Correct license in clang.ucl to NCSA
* Add PACKAGE=clang for clang and most of the "ll" tools
* Put lldb in its own package
* Put lld in its own package
Reviewed by: gjb, jmallett
Differential Revision: https://reviews.freebsd.org/D8666
MFC r309656:
During the bootstrap phase, when building the minimal llvm library on
PowerPC, add lib/Support/Atomic.cpp. This is needed because upstream
llvm revision r271821 disabled the use of std::call_once, which causes
some fallback functions from Atomic.cpp to be used instead.
Reported by: Mark Millard
PR: 214902
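
[Editor's sketch of the mechanism, not the actual llvm code: when std::call_once is disabled, llvm::call_once falls back to a hand-rolled once-flag spun on atomic primitives, which is what pulls in the helpers from lib/Support/Atomic.cpp. The fallback is shaped roughly like this; the real code uses llvm::sys::CompareAndSwap/MemoryFence rather than std::atomic, and all names here are illustrative:]

```cpp
#include <atomic>

// Rough sketch of a call_once fallback built on atomics alone.
enum class OnceState { Uninitialized, InProgress, Done };

template <typename Fn>
void call_once_fallback(std::atomic<OnceState> &Flag, Fn &&F) {
  OnceState Expected = OnceState::Uninitialized;
  if (Flag.compare_exchange_strong(Expected, OnceState::InProgress)) {
    F(); // We won the race: run the initializer exactly once.
    Flag.store(OnceState::Done, std::memory_order_release);
  } else {
    // Another thread is initializing; wait until it finishes.
    while (Flag.load(std::memory_order_acquire) != OnceState::Done) {
    }
  }
}
```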
MFC r309835:
Tentatively apply https://reviews.llvm.org/D18730 to work around gcc PR
70528 (bogus error: constructor required before non-static data member).
This should fix buildworld with the external gcc package.
Reported by: https://jenkins.freebsd.org/job/FreeBSD_HEAD_amd64_gcc/
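
[Editor's illustration; this is not a reduction of the actual header that trips the bug. The usual workaround for this family of gcc diagnostics is to replace non-static data member initializers with an explicitly written constructor, along these lines:]

```cpp
// Code shaped like this could draw the bogus "constructor required
// before non-static data member" error from affected gcc releases
// (illustrative; the construct in the llvm headers differs):
struct Slot {
  unsigned Kind = 0;   // non-static data member initializers
  void *Ptr = nullptr;
};

// Equivalent code that sidesteps the diagnostic by initializing in a
// user-written default constructor instead:
struct SlotFixed {
  unsigned Kind;
  void *Ptr;
  SlotFixed() : Kind(0), Ptr(nullptr) {}
};
```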
MFC r310194:
Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
3.9.1 release.
Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.
Release notes for llvm, clang and lld will be available here:
<http://releases.llvm.org/3.9.1/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/clang/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/lld/docs/ReleaseNotes.html>
Relnotes: yes
Diffstat (limited to 'contrib/llvm/lib/CodeGen/SafeStack.cpp')
-rw-r--r-- | contrib/llvm/lib/CodeGen/SafeStack.cpp | 851
1 file changed, 851 insertions, 0 deletions
diff --git a/contrib/llvm/lib/CodeGen/SafeStack.cpp b/contrib/llvm/lib/CodeGen/SafeStack.cpp
new file mode 100644
index 0000000..4a1b995
--- /dev/null
+++ b/contrib/llvm/lib/CodeGen/SafeStack.cpp
@@ -0,0 +1,851 @@
+//===-- SafeStack.cpp - Safe Stack Insertion ------------------------------===//
+//
+//                     The LLVM Compiler Infrastructure
+//
+// This file is distributed under the University of Illinois Open Source
+// License. See LICENSE.TXT for details.
+//
+//===----------------------------------------------------------------------===//
+//
+// This pass splits the stack into the safe stack (kept as-is for LLVM backend)
+// and the unsafe stack (explicitly allocated and managed through the runtime
+// support library).
+//
+// http://clang.llvm.org/docs/SafeStack.html
+//
+//===----------------------------------------------------------------------===//
+
+#include "SafeStackColoring.h"
+#include "SafeStackLayout.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/ADT/Triple.h"
+#include "llvm/Analysis/BranchProbabilityInfo.h"
+#include "llvm/Analysis/ScalarEvolution.h"
+#include "llvm/Analysis/ScalarEvolutionExpressions.h"
+#include "llvm/CodeGen/Passes.h"
+#include "llvm/IR/Constants.h"
+#include "llvm/IR/DIBuilder.h"
+#include "llvm/IR/DataLayout.h"
+#include "llvm/IR/DerivedTypes.h"
+#include "llvm/IR/Function.h"
+#include "llvm/IR/IRBuilder.h"
+#include "llvm/IR/InstIterator.h"
+#include "llvm/IR/Instructions.h"
+#include "llvm/IR/IntrinsicInst.h"
+#include "llvm/IR/Intrinsics.h"
+#include "llvm/IR/MDBuilder.h"
+#include "llvm/IR/Module.h"
+#include "llvm/Pass.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/Format.h"
+#include "llvm/Support/MathExtras.h"
+#include "llvm/Support/raw_os_ostream.h"
+#include "llvm/Target/TargetLowering.h"
+#include "llvm/Target/TargetSubtargetInfo.h"
+#include "llvm/Transforms/Utils/BasicBlockUtils.h"
+#include "llvm/Transforms/Utils/Local.h"
+#include "llvm/Transforms/Utils/ModuleUtils.h"
+
+using namespace llvm;
+using namespace llvm::safestack;
+
+#define DEBUG_TYPE "safestack"
+
+enum UnsafeStackPtrStorageVal { ThreadLocalUSP, SingleThreadUSP };
+
+static cl::opt<UnsafeStackPtrStorageVal> USPStorage("safe-stack-usp-storage",
+    cl::Hidden, cl::init(ThreadLocalUSP),
+    cl::desc("Type of storage for the unsafe stack pointer"),
+    cl::values(clEnumValN(ThreadLocalUSP, "thread-local",
+                          "Thread-local storage"),
+               clEnumValN(SingleThreadUSP, "single-thread",
+                          "Non-thread-local storage"),
+               clEnumValEnd));
+
+namespace llvm {
+
+STATISTIC(NumFunctions, "Total number of functions");
+STATISTIC(NumUnsafeStackFunctions, "Number of functions with unsafe stack");
+STATISTIC(NumUnsafeStackRestorePointsFunctions,
+          "Number of functions that use setjmp or exceptions");
+
+STATISTIC(NumAllocas, "Total number of allocas");
+STATISTIC(NumUnsafeStaticAllocas, "Number of unsafe static allocas");
+STATISTIC(NumUnsafeDynamicAllocas, "Number of unsafe dynamic allocas");
+STATISTIC(NumUnsafeByValArguments, "Number of unsafe byval arguments");
+STATISTIC(NumUnsafeStackRestorePoints, "Number of setjmps and landingpads");
+
+} // namespace llvm
+
+namespace {
+
+/// Rewrite an SCEV expression for a memory access address to an expression that
+/// represents offset from the given alloca.
+///
+/// The implementation simply replaces all mentions of the alloca with zero.
+class AllocaOffsetRewriter : public SCEVRewriteVisitor<AllocaOffsetRewriter> {
+  const Value *AllocaPtr;
+
+public:
+  AllocaOffsetRewriter(ScalarEvolution &SE, const Value *AllocaPtr)
+      : SCEVRewriteVisitor(SE), AllocaPtr(AllocaPtr) {}
+
+  const SCEV *visitUnknown(const SCEVUnknown *Expr) {
+    if (Expr->getValue() == AllocaPtr)
+      return SE.getZero(Expr->getType());
+    return Expr;
+  }
+};
+
+/// The SafeStack pass splits the stack of each function into the safe
+/// stack, which is only accessed through memory safe dereferences (as
+/// determined statically), and the unsafe stack, which contains all
+/// local variables that are accessed in ways that we can't prove to
+/// be safe.
+class SafeStack : public FunctionPass {
+  const TargetMachine *TM;
+  const TargetLoweringBase *TL;
+  const DataLayout *DL;
+  ScalarEvolution *SE;
+
+  Type *StackPtrTy;
+  Type *IntPtrTy;
+  Type *Int32Ty;
+  Type *Int8Ty;
+
+  Value *UnsafeStackPtr = nullptr;
+
+  /// Unsafe stack alignment. Each stack frame must ensure that the stack is
+  /// aligned to this value. We need to re-align the unsafe stack if the
+  /// alignment of any object on the stack exceeds this value.
+  ///
+  /// 16 seems like a reasonable upper bound on the alignment of objects that we
+  /// might expect to appear on the stack on most common targets.
+  enum { StackAlignment = 16 };
+
+  /// \brief Build a value representing a pointer to the unsafe stack pointer.
+  Value *getOrCreateUnsafeStackPtr(IRBuilder<> &IRB, Function &F);
+
+  /// \brief Return the value of the stack canary.
+  Value *getStackGuard(IRBuilder<> &IRB, Function &F);
+
+  /// \brief Load stack guard from the frame and check if it has changed.
+  void checkStackGuard(IRBuilder<> &IRB, Function &F, ReturnInst &RI,
+                       AllocaInst *StackGuardSlot, Value *StackGuard);
+
+  /// \brief Find all static allocas, dynamic allocas, return instructions and
+  /// stack restore points (exception unwind blocks and setjmp calls) in the
+  /// given function and append them to the respective vectors.
+  void findInsts(Function &F, SmallVectorImpl<AllocaInst *> &StaticAllocas,
+                 SmallVectorImpl<AllocaInst *> &DynamicAllocas,
+                 SmallVectorImpl<Argument *> &ByValArguments,
+                 SmallVectorImpl<ReturnInst *> &Returns,
+                 SmallVectorImpl<Instruction *> &StackRestorePoints);
+
+  /// \brief Calculate the allocation size of a given alloca. Returns 0 if the
+  /// size cannot be statically determined.
+  uint64_t getStaticAllocaAllocationSize(const AllocaInst* AI);
+
+  /// \brief Allocate space for all static allocas in \p StaticAllocas,
+  /// replace allocas with pointers into the unsafe stack and generate code to
+  /// restore the stack pointer before all return instructions in \p Returns.
+  ///
+  /// \returns A pointer to the top of the unsafe stack after all unsafe static
+  /// allocas are allocated.
+  Value *moveStaticAllocasToUnsafeStack(IRBuilder<> &IRB, Function &F,
+                                        ArrayRef<AllocaInst *> StaticAllocas,
+                                        ArrayRef<Argument *> ByValArguments,
+                                        ArrayRef<ReturnInst *> Returns,
+                                        Instruction *BasePointer,
+                                        AllocaInst *StackGuardSlot);
+
+  /// \brief Generate code to restore the stack after all stack restore points
+  /// in \p StackRestorePoints.
+  ///
+  /// \returns A local variable in which to maintain the dynamic top of the
+  /// unsafe stack if needed.
+  AllocaInst *
+  createStackRestorePoints(IRBuilder<> &IRB, Function &F,
+                           ArrayRef<Instruction *> StackRestorePoints,
+                           Value *StaticTop, bool NeedDynamicTop);
+
+  /// \brief Replace all allocas in \p DynamicAllocas with code to allocate
+  /// space dynamically on the unsafe stack and store the dynamic unsafe stack
+  /// top to \p DynamicTop if non-null.
+  void moveDynamicAllocasToUnsafeStack(Function &F, Value *UnsafeStackPtr,
+                                       AllocaInst *DynamicTop,
+                                       ArrayRef<AllocaInst *> DynamicAllocas);
+
+  bool IsSafeStackAlloca(const Value *AllocaPtr, uint64_t AllocaSize);
+
+  bool IsMemIntrinsicSafe(const MemIntrinsic *MI, const Use &U,
+                          const Value *AllocaPtr, uint64_t AllocaSize);
+  bool IsAccessSafe(Value *Addr, uint64_t Size, const Value *AllocaPtr,
+                    uint64_t AllocaSize);
+
+public:
+  static char ID; // Pass identification, replacement for typeid.
+  SafeStack(const TargetMachine *TM)
+      : FunctionPass(ID), TM(TM), TL(nullptr), DL(nullptr) {
+    initializeSafeStackPass(*PassRegistry::getPassRegistry());
+  }
+  SafeStack() : SafeStack(nullptr) {}
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addRequired<ScalarEvolutionWrapperPass>();
+  }
+
+  bool doInitialization(Module &M) override {
+    DL = &M.getDataLayout();
+
+    StackPtrTy = Type::getInt8PtrTy(M.getContext());
+    IntPtrTy = DL->getIntPtrType(M.getContext());
+    Int32Ty = Type::getInt32Ty(M.getContext());
+    Int8Ty = Type::getInt8Ty(M.getContext());
+
+    return false;
+  }
+
+  bool runOnFunction(Function &F) override;
+}; // class SafeStack
+
+uint64_t SafeStack::getStaticAllocaAllocationSize(const AllocaInst* AI) {
+  uint64_t Size = DL->getTypeAllocSize(AI->getAllocatedType());
+  if (AI->isArrayAllocation()) {
+    auto C = dyn_cast<ConstantInt>(AI->getArraySize());
+    if (!C)
+      return 0;
+    Size *= C->getZExtValue();
+  }
+  return Size;
+}
+
+bool SafeStack::IsAccessSafe(Value *Addr, uint64_t AccessSize,
+                             const Value *AllocaPtr, uint64_t AllocaSize) {
+  AllocaOffsetRewriter Rewriter(*SE, AllocaPtr);
+  const SCEV *Expr = Rewriter.visit(SE->getSCEV(Addr));
+
+  uint64_t BitWidth = SE->getTypeSizeInBits(Expr->getType());
+  ConstantRange AccessStartRange = SE->getUnsignedRange(Expr);
+  ConstantRange SizeRange =
+      ConstantRange(APInt(BitWidth, 0), APInt(BitWidth, AccessSize));
+  ConstantRange AccessRange = AccessStartRange.add(SizeRange);
+  ConstantRange AllocaRange =
+      ConstantRange(APInt(BitWidth, 0), APInt(BitWidth, AllocaSize));
+  bool Safe = AllocaRange.contains(AccessRange);
+
+  DEBUG(dbgs() << "[SafeStack] "
+               << (isa<AllocaInst>(AllocaPtr) ? "Alloca " : "ByValArgument ")
+               << *AllocaPtr << "\n"
+               << " Access " << *Addr << "\n"
+               << " SCEV " << *Expr
+               << " U: " << SE->getUnsignedRange(Expr)
+               << ", S: " << SE->getSignedRange(Expr) << "\n"
+               << " Range " << AccessRange << "\n"
+               << " AllocaRange " << AllocaRange << "\n"
+               << " " << (Safe ? "safe" : "unsafe") << "\n");
+
+  return Safe;
+}
+
+bool SafeStack::IsMemIntrinsicSafe(const MemIntrinsic *MI, const Use &U,
+                                   const Value *AllocaPtr,
+                                   uint64_t AllocaSize) {
+  // All MemIntrinsics have destination address in Arg0 and size in Arg2.
+  if (MI->getRawDest() != U) return true;
+  const auto *Len = dyn_cast<ConstantInt>(MI->getLength());
+  // Non-constant size => unsafe. FIXME: try SCEV getRange.
+  if (!Len) return false;
+  return IsAccessSafe(U, Len->getZExtValue(), AllocaPtr, AllocaSize);
+}
+
+/// Check whether a given allocation must be put on the safe
+/// stack or not. The function analyzes all uses of AI and checks whether it is
+/// only accessed in a memory safe way (as decided statically).
+bool SafeStack::IsSafeStackAlloca(const Value *AllocaPtr, uint64_t AllocaSize) {
+  // Go through all uses of this alloca and check whether all accesses to the
+  // allocated object are statically known to be memory safe and, hence, the
+  // object can be placed on the safe stack.
+  SmallPtrSet<const Value *, 16> Visited;
+  SmallVector<const Value *, 8> WorkList;
+  WorkList.push_back(AllocaPtr);
+
+  // A DFS search through all uses of the alloca in bitcasts/PHI/GEPs/etc.
+  while (!WorkList.empty()) {
+    const Value *V = WorkList.pop_back_val();
+    for (const Use &UI : V->uses()) {
+      auto I = cast<const Instruction>(UI.getUser());
+      assert(V == UI.get());
+
+      switch (I->getOpcode()) {
+      case Instruction::Load: {
+        if (!IsAccessSafe(UI, DL->getTypeStoreSize(I->getType()), AllocaPtr,
+                          AllocaSize))
+          return false;
+        break;
+      }
+      case Instruction::VAArg:
+        // "va-arg" from a pointer is safe.
+        break;
+      case Instruction::Store: {
+        if (V == I->getOperand(0)) {
+          // Stored the pointer - conservatively assume it may be unsafe.
+          DEBUG(dbgs() << "[SafeStack] Unsafe alloca: " << *AllocaPtr
+                       << "\n store of address: " << *I << "\n");
+          return false;
+        }
+
+        if (!IsAccessSafe(UI, DL->getTypeStoreSize(I->getOperand(0)->getType()),
+                          AllocaPtr, AllocaSize))
+          return false;
+        break;
+      }
+      case Instruction::Ret: {
+        // Information leak.
+        return false;
+      }
+
+      case Instruction::Call:
+      case Instruction::Invoke: {
+        ImmutableCallSite CS(I);
+
+        if (const IntrinsicInst *II = dyn_cast<IntrinsicInst>(I)) {
+          if (II->getIntrinsicID() == Intrinsic::lifetime_start ||
+              II->getIntrinsicID() == Intrinsic::lifetime_end)
+            continue;
+        }
+
+        if (const MemIntrinsic *MI = dyn_cast<MemIntrinsic>(I)) {
+          if (!IsMemIntrinsicSafe(MI, UI, AllocaPtr, AllocaSize)) {
+            DEBUG(dbgs() << "[SafeStack] Unsafe alloca: " << *AllocaPtr
+                         << "\n unsafe memintrinsic: " << *I
+                         << "\n");
+            return false;
+          }
+          continue;
+        }
+
+        // LLVM 'nocapture' attribute is only set for arguments whose address
+        // is not stored, passed around, or used in any other non-trivial way.
+        // We assume that passing a pointer to an object as a 'nocapture
+        // readnone' argument is safe.
+        // FIXME: a more precise solution would require an interprocedural
+        // analysis here, which would look at all uses of an argument inside
+        // the function being called.
+        ImmutableCallSite::arg_iterator B = CS.arg_begin(), E = CS.arg_end();
+        for (ImmutableCallSite::arg_iterator A = B; A != E; ++A)
+          if (A->get() == V)
+            if (!(CS.doesNotCapture(A - B) && (CS.doesNotAccessMemory(A - B) ||
+                                               CS.doesNotAccessMemory()))) {
+              DEBUG(dbgs() << "[SafeStack] Unsafe alloca: " << *AllocaPtr
+                           << "\n unsafe call: " << *I << "\n");
+              return false;
+            }
+        continue;
+      }
+
+      default:
+        if (Visited.insert(I).second)
+          WorkList.push_back(cast<const Instruction>(I));
+      }
+    }
+  }
+
+  // All uses of the alloca are safe, we can place it on the safe stack.
+  return true;
+}
+
+Value *SafeStack::getOrCreateUnsafeStackPtr(IRBuilder<> &IRB, Function &F) {
+  // Check if there is a target-specific location for the unsafe stack pointer.
+  if (TL)
+    if (Value *V = TL->getSafeStackPointerLocation(IRB))
+      return V;
+
+  // Otherwise, assume the target links with compiler-rt, which provides a
+  // thread-local variable with a magic name.
+  Module &M = *F.getParent();
+  const char *UnsafeStackPtrVar = "__safestack_unsafe_stack_ptr";
+  auto UnsafeStackPtr =
+      dyn_cast_or_null<GlobalVariable>(M.getNamedValue(UnsafeStackPtrVar));
+
+  bool UseTLS = USPStorage == ThreadLocalUSP;
+
+  if (!UnsafeStackPtr) {
+    auto TLSModel = UseTLS ? GlobalValue::InitialExecTLSModel
+                           : GlobalValue::NotThreadLocal;
+    // The global variable is not defined yet, define it ourselves.
+    // We use the initial-exec TLS model because we do not support the
+    // variable living anywhere other than in the main executable.
+    UnsafeStackPtr = new GlobalVariable(
+        M, StackPtrTy, false, GlobalValue::ExternalLinkage, nullptr,
+        UnsafeStackPtrVar, nullptr, TLSModel);
+  } else {
+    // The variable exists, check its type and attributes.
+    if (UnsafeStackPtr->getValueType() != StackPtrTy)
+      report_fatal_error(Twine(UnsafeStackPtrVar) + " must have void* type");
+    if (UseTLS != UnsafeStackPtr->isThreadLocal())
+      report_fatal_error(Twine(UnsafeStackPtrVar) + " must " +
+                         (UseTLS ? "" : "not ") + "be thread-local");
+  }
+  return UnsafeStackPtr;
+}
+
+Value *SafeStack::getStackGuard(IRBuilder<> &IRB, Function &F) {
+  Value *StackGuardVar = nullptr;
+  if (TL)
+    StackGuardVar = TL->getIRStackGuard(IRB);
+  if (!StackGuardVar)
+    StackGuardVar =
+        F.getParent()->getOrInsertGlobal("__stack_chk_guard", StackPtrTy);
+  return IRB.CreateLoad(StackGuardVar, "StackGuard");
+}
+
+void SafeStack::findInsts(Function &F,
+                          SmallVectorImpl<AllocaInst *> &StaticAllocas,
+                          SmallVectorImpl<AllocaInst *> &DynamicAllocas,
+                          SmallVectorImpl<Argument *> &ByValArguments,
+                          SmallVectorImpl<ReturnInst *> &Returns,
+                          SmallVectorImpl<Instruction *> &StackRestorePoints) {
+  for (Instruction &I : instructions(&F)) {
+    if (auto AI = dyn_cast<AllocaInst>(&I)) {
+      ++NumAllocas;
+
+      uint64_t Size = getStaticAllocaAllocationSize(AI);
+      if (IsSafeStackAlloca(AI, Size))
+        continue;
+
+      if (AI->isStaticAlloca()) {
+        ++NumUnsafeStaticAllocas;
+        StaticAllocas.push_back(AI);
+      } else {
+        ++NumUnsafeDynamicAllocas;
+        DynamicAllocas.push_back(AI);
+      }
+    } else if (auto RI = dyn_cast<ReturnInst>(&I)) {
+      Returns.push_back(RI);
+    } else if (auto CI = dyn_cast<CallInst>(&I)) {
+      // setjmps require stack restore.
+      if (CI->getCalledFunction() && CI->canReturnTwice())
+        StackRestorePoints.push_back(CI);
+    } else if (auto LP = dyn_cast<LandingPadInst>(&I)) {
+      // Exception landing pads require stack restore.
+      StackRestorePoints.push_back(LP);
+    } else if (auto II = dyn_cast<IntrinsicInst>(&I)) {
+      if (II->getIntrinsicID() == Intrinsic::gcroot)
+        llvm::report_fatal_error(
+            "gcroot intrinsic not compatible with safestack attribute");
+    }
+  }
+  for (Argument &Arg : F.args()) {
+    if (!Arg.hasByValAttr())
+      continue;
+    uint64_t Size =
+        DL->getTypeStoreSize(Arg.getType()->getPointerElementType());
+    if (IsSafeStackAlloca(&Arg, Size))
+      continue;
+
+    ++NumUnsafeByValArguments;
+    ByValArguments.push_back(&Arg);
+  }
+}
+
+AllocaInst *
+SafeStack::createStackRestorePoints(IRBuilder<> &IRB, Function &F,
+                                    ArrayRef<Instruction *> StackRestorePoints,
+                                    Value *StaticTop, bool NeedDynamicTop) {
+  assert(StaticTop && "The stack top isn't set.");
+
+  if (StackRestorePoints.empty())
+    return nullptr;
+
+  // We need the current value of the shadow stack pointer to restore
+  // after longjmp or exception catching.
+
+  // FIXME: On some platforms this could be handled by the longjmp/exception
+  // runtime itself.
+
+  AllocaInst *DynamicTop = nullptr;
+  if (NeedDynamicTop) {
+    // If we also have dynamic allocas, the stack pointer value changes
+    // throughout the function. For now we store it in an alloca.
+    DynamicTop = IRB.CreateAlloca(StackPtrTy, /*ArraySize=*/nullptr,
+                                  "unsafe_stack_dynamic_ptr");
+    IRB.CreateStore(StaticTop, DynamicTop);
+  }
+
+  // Restore current stack pointer after longjmp/exception catch.
+  for (Instruction *I : StackRestorePoints) {
+    ++NumUnsafeStackRestorePoints;
+
+    IRB.SetInsertPoint(I->getNextNode());
+    Value *CurrentTop = DynamicTop ? IRB.CreateLoad(DynamicTop) : StaticTop;
+    IRB.CreateStore(CurrentTop, UnsafeStackPtr);
+  }
+
+  return DynamicTop;
+}
+
+void SafeStack::checkStackGuard(IRBuilder<> &IRB, Function &F, ReturnInst &RI,
+                                AllocaInst *StackGuardSlot, Value *StackGuard) {
+  Value *V = IRB.CreateLoad(StackGuardSlot);
+  Value *Cmp = IRB.CreateICmpNE(StackGuard, V);
+
+  auto SuccessProb = BranchProbabilityInfo::getBranchProbStackProtector(true);
+  auto FailureProb = BranchProbabilityInfo::getBranchProbStackProtector(false);
+  MDNode *Weights = MDBuilder(F.getContext())
+                        .createBranchWeights(SuccessProb.getNumerator(),
+                                             FailureProb.getNumerator());
+  Instruction *CheckTerm =
+      SplitBlockAndInsertIfThen(Cmp, &RI,
+                                /* Unreachable */ true, Weights);
+  IRBuilder<> IRBFail(CheckTerm);
+  // FIXME: respect -fsanitize-trap / -ftrap-function here?
+  Constant *StackChkFail = F.getParent()->getOrInsertFunction(
+      "__stack_chk_fail", IRB.getVoidTy(), nullptr);
+  IRBFail.CreateCall(StackChkFail, {});
+}
+
+/// We explicitly compute and set the unsafe stack layout for all unsafe
+/// static alloca instructions. We save the unsafe "base pointer" in the
+/// prologue into a local variable and restore it in the epilogue.
+Value *SafeStack::moveStaticAllocasToUnsafeStack(
+    IRBuilder<> &IRB, Function &F, ArrayRef<AllocaInst *> StaticAllocas,
+    ArrayRef<Argument *> ByValArguments, ArrayRef<ReturnInst *> Returns,
+    Instruction *BasePointer, AllocaInst *StackGuardSlot) {
+  if (StaticAllocas.empty() && ByValArguments.empty())
+    return BasePointer;
+
+  DIBuilder DIB(*F.getParent());
+
+  StackColoring SSC(F, StaticAllocas);
+  SSC.run();
+  SSC.removeAllMarkers();
+
+  // Unsafe stack always grows down.
+  StackLayout SSL(StackAlignment);
+  if (StackGuardSlot) {
+    Type *Ty = StackGuardSlot->getAllocatedType();
+    unsigned Align =
+        std::max(DL->getPrefTypeAlignment(Ty), StackGuardSlot->getAlignment());
+    SSL.addObject(StackGuardSlot, getStaticAllocaAllocationSize(StackGuardSlot),
+                  Align, SSC.getFullLiveRange());
+  }
+
+  for (Argument *Arg : ByValArguments) {
+    Type *Ty = Arg->getType()->getPointerElementType();
+    uint64_t Size = DL->getTypeStoreSize(Ty);
+    if (Size == 0)
+      Size = 1; // Don't create zero-sized stack objects.
+
+    // Ensure the object is properly aligned.
+    unsigned Align = std::max((unsigned)DL->getPrefTypeAlignment(Ty),
+                              Arg->getParamAlignment());
+    SSL.addObject(Arg, Size, Align, SSC.getFullLiveRange());
+  }
+
+  for (AllocaInst *AI : StaticAllocas) {
+    Type *Ty = AI->getAllocatedType();
+    uint64_t Size = getStaticAllocaAllocationSize(AI);
+    if (Size == 0)
+      Size = 1; // Don't create zero-sized stack objects.
+
+    // Ensure the object is properly aligned.
+    unsigned Align =
+        std::max((unsigned)DL->getPrefTypeAlignment(Ty), AI->getAlignment());
+
+    SSL.addObject(AI, Size, Align, SSC.getLiveRange(AI));
+  }
+
+  SSL.computeLayout();
+  unsigned FrameAlignment = SSL.getFrameAlignment();
+
+  // FIXME: tell SSL that we start at a less-than-MaxAlignment aligned location
+  // (AlignmentSkew).
+  if (FrameAlignment > StackAlignment) {
+    // Re-align the base pointer according to the max requested alignment.
+    assert(isPowerOf2_32(FrameAlignment));
+    IRB.SetInsertPoint(BasePointer->getNextNode());
+    BasePointer = cast<Instruction>(IRB.CreateIntToPtr(
+        IRB.CreateAnd(IRB.CreatePtrToInt(BasePointer, IntPtrTy),
+                      ConstantInt::get(IntPtrTy, ~uint64_t(FrameAlignment - 1))),
+        StackPtrTy));
+  }
+
+  IRB.SetInsertPoint(BasePointer->getNextNode());
+
+  if (StackGuardSlot) {
+    unsigned Offset = SSL.getObjectOffset(StackGuardSlot);
+    Value *Off = IRB.CreateGEP(BasePointer, // BasePointer is i8*
+                               ConstantInt::get(Int32Ty, -Offset));
+    Value *NewAI =
+        IRB.CreateBitCast(Off, StackGuardSlot->getType(), "StackGuardSlot");
+
+    // Replace alloc with the new location.
+    StackGuardSlot->replaceAllUsesWith(NewAI);
+    StackGuardSlot->eraseFromParent();
+  }
+
+  for (Argument *Arg : ByValArguments) {
+    unsigned Offset = SSL.getObjectOffset(Arg);
+    Type *Ty = Arg->getType()->getPointerElementType();
+
+    uint64_t Size = DL->getTypeStoreSize(Ty);
+    if (Size == 0)
+      Size = 1; // Don't create zero-sized stack objects.
+
+    Value *Off = IRB.CreateGEP(BasePointer, // BasePointer is i8*
+                               ConstantInt::get(Int32Ty, -Offset));
+    Value *NewArg = IRB.CreateBitCast(Off, Arg->getType(),
+                                      Arg->getName() + ".unsafe-byval");
+
+    // Replace alloc with the new location.
+    replaceDbgDeclare(Arg, BasePointer, BasePointer->getNextNode(), DIB,
+                      /*Deref=*/true, -Offset);
+    Arg->replaceAllUsesWith(NewArg);
+    IRB.SetInsertPoint(cast<Instruction>(NewArg)->getNextNode());
+    IRB.CreateMemCpy(Off, Arg, Size, Arg->getParamAlignment());
+  }
+
+  // Allocate space for every unsafe static AllocaInst on the unsafe stack.
+  for (AllocaInst *AI : StaticAllocas) {
+    IRB.SetInsertPoint(AI);
+    unsigned Offset = SSL.getObjectOffset(AI);
+
+    uint64_t Size = getStaticAllocaAllocationSize(AI);
+    if (Size == 0)
+      Size = 1; // Don't create zero-sized stack objects.
+
+    replaceDbgDeclareForAlloca(AI, BasePointer, DIB, /*Deref=*/true, -Offset);
+    replaceDbgValueForAlloca(AI, BasePointer, DIB, -Offset);
+
+    // Replace uses of the alloca with the new location.
+    // Insert address calculation close to each use to work around PR27844.
+    std::string Name = std::string(AI->getName()) + ".unsafe";
+    while (!AI->use_empty()) {
+      Use &U = *AI->use_begin();
+      Instruction *User = cast<Instruction>(U.getUser());
+
+      Instruction *InsertBefore;
+      if (auto *PHI = dyn_cast<PHINode>(User))
+        InsertBefore = PHI->getIncomingBlock(U)->getTerminator();
+      else
+        InsertBefore = User;
+
+      IRBuilder<> IRBUser(InsertBefore);
+      Value *Off = IRBUser.CreateGEP(BasePointer, // BasePointer is i8*
+                                     ConstantInt::get(Int32Ty, -Offset));
+      Value *Replacement = IRBUser.CreateBitCast(Off, AI->getType(), Name);
+
+      if (auto *PHI = dyn_cast<PHINode>(User)) {
+        // PHI nodes may have multiple incoming edges from the same BB (why??),
+        // all must be updated at once with the same incoming value.
+        auto *BB = PHI->getIncomingBlock(U);
+        for (unsigned I = 0; I < PHI->getNumIncomingValues(); ++I)
+          if (PHI->getIncomingBlock(I) == BB)
+            PHI->setIncomingValue(I, Replacement);
+      } else {
+        U.set(Replacement);
+      }
+    }
+
+    AI->eraseFromParent();
+  }
+
+  // Re-align BasePointer so that our callees would see it aligned as
+  // expected.
+  // FIXME: no need to update BasePointer in leaf functions.
+  unsigned FrameSize = alignTo(SSL.getFrameSize(), StackAlignment);
+
+  // Update shadow stack pointer in the function epilogue.
+  IRB.SetInsertPoint(BasePointer->getNextNode());
+
+  Value *StaticTop =
+      IRB.CreateGEP(BasePointer, ConstantInt::get(Int32Ty, -FrameSize),
+                    "unsafe_stack_static_top");
+  IRB.CreateStore(StaticTop, UnsafeStackPtr);
+  return StaticTop;
+}
+
+void SafeStack::moveDynamicAllocasToUnsafeStack(
+    Function &F, Value *UnsafeStackPtr, AllocaInst *DynamicTop,
+    ArrayRef<AllocaInst *> DynamicAllocas) {
+  DIBuilder DIB(*F.getParent());
+
+  for (AllocaInst *AI : DynamicAllocas) {
+    IRBuilder<> IRB(AI);
+
+    // Compute the new SP value (after AI).
+    Value *ArraySize = AI->getArraySize();
+    if (ArraySize->getType() != IntPtrTy)
+      ArraySize = IRB.CreateIntCast(ArraySize, IntPtrTy, false);
+
+    Type *Ty = AI->getAllocatedType();
+    uint64_t TySize = DL->getTypeAllocSize(Ty);
+    Value *Size = IRB.CreateMul(ArraySize, ConstantInt::get(IntPtrTy, TySize));
+
+    Value *SP = IRB.CreatePtrToInt(IRB.CreateLoad(UnsafeStackPtr), IntPtrTy);
+    SP = IRB.CreateSub(SP, Size);
+
+    // Align the SP value to satisfy the AllocaInst, type and stack alignments.
+    unsigned Align = std::max(
+        std::max((unsigned)DL->getPrefTypeAlignment(Ty), AI->getAlignment()),
+        (unsigned)StackAlignment);
+
+    assert(isPowerOf2_32(Align));
+    Value *NewTop = IRB.CreateIntToPtr(
+        IRB.CreateAnd(SP, ConstantInt::get(IntPtrTy, ~uint64_t(Align - 1))),
+        StackPtrTy);
+
+    // Save the stack pointer.
+    IRB.CreateStore(NewTop, UnsafeStackPtr);
+    if (DynamicTop)
+      IRB.CreateStore(NewTop, DynamicTop);
+
+    Value *NewAI = IRB.CreatePointerCast(NewTop, AI->getType());
+    if (AI->hasName() && isa<Instruction>(NewAI))
+      NewAI->takeName(AI);
+
+    replaceDbgDeclareForAlloca(AI, NewAI, DIB, /*Deref=*/true);
+    AI->replaceAllUsesWith(NewAI);
+    AI->eraseFromParent();
+  }
+
+  if (!DynamicAllocas.empty()) {
+    // Now go through the instructions again, replacing stacksave/stackrestore.
+    for (inst_iterator It = inst_begin(&F), Ie = inst_end(&F); It != Ie;) {
+      Instruction *I = &*(It++);
+      auto II = dyn_cast<IntrinsicInst>(I);
+      if (!II)
+        continue;
+
+      if (II->getIntrinsicID() == Intrinsic::stacksave) {
+        IRBuilder<> IRB(II);
+        Instruction *LI = IRB.CreateLoad(UnsafeStackPtr);
+        LI->takeName(II);
+        II->replaceAllUsesWith(LI);
+        II->eraseFromParent();
+      } else if (II->getIntrinsicID() == Intrinsic::stackrestore) {
+        IRBuilder<> IRB(II);
+        Instruction *SI = IRB.CreateStore(II->getArgOperand(0), UnsafeStackPtr);
+        SI->takeName(II);
+        assert(II->use_empty());
+        II->eraseFromParent();
+      }
+    }
+  }
+}
+
+bool SafeStack::runOnFunction(Function &F) {
+  DEBUG(dbgs() << "[SafeStack] Function: " << F.getName() << "\n");
+
+  if (!F.hasFnAttribute(Attribute::SafeStack)) {
+    DEBUG(dbgs() << "[SafeStack] safestack is not requested"
+                    " for this function\n");
+    return false;
+  }
+
+  if (F.isDeclaration()) {
+    DEBUG(dbgs() << "[SafeStack] function definition"
+                    " is not available\n");
+    return false;
+  }
+
+  TL = TM ? TM->getSubtargetImpl(F)->getTargetLowering() : nullptr;
+  SE = &getAnalysis<ScalarEvolutionWrapperPass>().getSE();
+
+  ++NumFunctions;
+
+  SmallVector<AllocaInst *, 16> StaticAllocas;
+  SmallVector<AllocaInst *, 4> DynamicAllocas;
+  SmallVector<Argument *, 4> ByValArguments;
+  SmallVector<ReturnInst *, 4> Returns;
+
+  // Collect all points where stack gets unwound and needs to be restored.
+  // This is only necessary because the runtime (setjmp and unwind code) is
+  // not aware of the unsafe stack and won't unwind/restore it properly.
+  // To work around this problem without changing the runtime, we insert
+  // instrumentation to restore the unsafe stack pointer when necessary.
+  SmallVector<Instruction *, 4> StackRestorePoints;
+
+  // Find all static and dynamic alloca instructions that must be moved to the
+  // unsafe stack, all return instructions and stack restore points.
+  findInsts(F, StaticAllocas, DynamicAllocas, ByValArguments, Returns,
+            StackRestorePoints);
+
+  if (StaticAllocas.empty() && DynamicAllocas.empty() &&
+      ByValArguments.empty() && StackRestorePoints.empty())
+    return false; // Nothing to do in this function.
+
+  if (!StaticAllocas.empty() || !DynamicAllocas.empty() ||
+      !ByValArguments.empty())
+    ++NumUnsafeStackFunctions; // This function has the unsafe stack.
+
+  if (!StackRestorePoints.empty())
+    ++NumUnsafeStackRestorePointsFunctions;
+
+  IRBuilder<> IRB(&F.front(), F.begin()->getFirstInsertionPt());
+  UnsafeStackPtr = getOrCreateUnsafeStackPtr(IRB, F);
+
+  // Load the current stack pointer (we'll also use it as a base pointer).
+  // FIXME: use a dedicated register for it ?
+  Instruction *BasePointer =
+      IRB.CreateLoad(UnsafeStackPtr, false, "unsafe_stack_ptr");
+  assert(BasePointer->getType() == StackPtrTy);
+
+  AllocaInst *StackGuardSlot = nullptr;
+  // FIXME: implement weaker forms of stack protector.
+  if (F.hasFnAttribute(Attribute::StackProtect) ||
+      F.hasFnAttribute(Attribute::StackProtectStrong) ||
+      F.hasFnAttribute(Attribute::StackProtectReq)) {
+    Value *StackGuard = getStackGuard(IRB, F);
+    StackGuardSlot = IRB.CreateAlloca(StackPtrTy, nullptr);
+    IRB.CreateStore(StackGuard, StackGuardSlot);
+
+    for (ReturnInst *RI : Returns) {
+      IRBuilder<> IRBRet(RI);
+      checkStackGuard(IRBRet, F, *RI, StackGuardSlot, StackGuard);
+    }
+  }
+
+  // The top of the unsafe stack after all unsafe static allocas are
+  // allocated.
+  Value *StaticTop =
+      moveStaticAllocasToUnsafeStack(IRB, F, StaticAllocas, ByValArguments,
+                                     Returns, BasePointer, StackGuardSlot);
+
+  // Safe stack object that stores the current unsafe stack top. It is updated
+  // as unsafe dynamic (non-constant-sized) allocas are allocated and freed.
+  // This is only needed if we need to restore stack pointer after longjmp
+  // or exceptions, and we have dynamic allocations.
+  // FIXME: a better alternative might be to store the unsafe stack pointer
+  // before setjmp / invoke instructions.
+  AllocaInst *DynamicTop = createStackRestorePoints(
+      IRB, F, StackRestorePoints, StaticTop, !DynamicAllocas.empty());
+
+  // Handle dynamic allocas.
+  moveDynamicAllocasToUnsafeStack(F, UnsafeStackPtr, DynamicTop,
+                                  DynamicAllocas);
+
+  // Restore the unsafe stack pointer before each return.
+  for (ReturnInst *RI : Returns) {
+    IRB.SetInsertPoint(RI);
+    IRB.CreateStore(BasePointer, UnsafeStackPtr);
+  }
+
+  DEBUG(dbgs() << "[SafeStack] safestack applied\n");
+  return true;
+}
+
+} // anonymous namespace
+
+char SafeStack::ID = 0;
+INITIALIZE_TM_PASS_BEGIN(SafeStack, "safe-stack",
+                         "Safe Stack instrumentation pass", false, false)
+INITIALIZE_TM_PASS_END(SafeStack, "safe-stack",
+                       "Safe Stack instrumentation pass", false, false)
+
+FunctionPass *llvm::createSafeStackPass(const llvm::TargetMachine *TM) {
+  return new SafeStack(TM);
+}
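
[Editor's closing note, not part of the commit: the pass only instruments functions carrying the `safestack` attribute, which clang attaches when compiling with `-fsanitize=safe-stack` (see the documentation URL in the file header). A small sketch of the split the pass computes, with the expected placement noted in comments:]

```cpp
// Build: clang++ -fsanitize=safe-stack demo.cpp -o demo
#include <cstdio>
#include <cstring>

// 'p' escapes into a call that reads and writes memory, so
// SafeStack::IsSafeStackAlloca() cannot prove the accesses safe.
void fill(char *p, size_t n) {
  memset(p, 'A', n - 1);
  p[n - 1] = '\0';
}

int main() {
  int x = 7;     // used only by value: expected to stay on the safe stack
  char buf[64];  // address escapes into fill(): expected to be moved to
                 // the unsafe stack, away from the return address
  fill(buf, sizeof(buf));
  printf("%s %d\n", buf, x);
  return 0;
}
```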