author     dim <dim@FreeBSD.org>  2017-04-02 17:24:58 +0000
committer  dim <dim@FreeBSD.org>  2017-04-02 17:24:58 +0000
commit     60b571e49a90d38697b3aca23020d9da42fc7d7f (patch)
tree       99351324c24d6cb146b6285b6caffa4d26fce188 /contrib/llvm/lib/Transforms/Scalar/GVN.cpp
parent     bea1b22c7a9bce1dfdd73e6e5b65bc4752215180 (diff)
Update clang, llvm, lld, lldb, compiler-rt and libc++ to 4.0.0 release:
MFC r309142 (by emaste):
Add WITH_LLD_AS_LD build knob
If set, it installs LLD as /usr/bin/ld. LLD (as of version 3.9) is not
capable of linking the world and kernel, but can self-host and link many
substantial applications. GNU ld continues to be used for the world and
kernel build, regardless of how this knob is set.
It is on by default for arm64, and off for all other CPU architectures.
Sponsored by: The FreeBSD Foundation
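For illustration, the knob is toggled like any other src.conf(5) option; a
minimal sketch, assuming the usual WITH_/WITHOUT_ mechanics:

    # /etc/src.conf -- illustrative example only
    WITH_LLD_AS_LD=yes    # install LLD as /usr/bin/ld on the next installworld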
MFC r310840:
Reapply r310775; now it also builds correctly if lldb is disabled:
Move llvm-objdump from CLANG_EXTRAS to installed by default
We currently install three tools from binutils 2.17.50: as, ld, and
objdump. Work is underway to migrate to a permissively-licensed
tool-chain, with one goal being the retirement of binutils 2.17.50.
LLVM's llvm-objdump is intended to be compatible with GNU objdump
although it is currently missing some options and may have formatting
differences. Enable it by default for testing and further investigation.
It may later be changed to install as /usr/bin/objdump, if it becomes a
fully viable replacement.
Reviewed by: emaste
Differential Revision: https://reviews.freebsd.org/D8879
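Since llvm-objdump aims for GNU objdump compatibility, common invocations
should carry over directly; an illustrative example (the object file name is
hypothetical):

    $ llvm-objdump -d foo.o    # disassemble the text sections
    $ llvm-objdump -t foo.o    # print the symbol table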
MFC r312855 (by emaste):
Rename LLD_AS_LD to LLD_IS_LD, for consistency with CLANG_IS_CC
Reported by: Dan McGregor <dan.mcgregor usask.ca>
MFC r313559 | glebius | 2017-02-10 18:34:48 +0100 (Fri, 10 Feb 2017) | 5 lines
Don't check struct rtentry on FreeBSD; it is an internal kernel structure.
On other systems it may be an API structure for SIOCADDRT/SIOCDELRT.
Reviewed by: emaste, dim
MFC r314152 (by jkim):
Remove an assembler flag, which is redundant since r309124. Upstream
took care of it by introducing the NO_EXEC_STACK_DIRECTIVE macro.
http://llvm.org/viewvc/llvm-project?rev=273500&view=rev
Reviewed by: dim
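For context, a sketch of how such a macro is typically defined in
compiler-rt's assembly headers (the exact upstream conditionals may differ):

    /* On ELF targets, emit the note that marks the stack non-executable,
       so each assembly file no longer needs an explicit assembler flag. */
    #if defined(__ELF__)
    #define NO_EXEC_STACK_DIRECTIVE .section .note.GNU-stack,"",%progbits
    #else
    #define NO_EXEC_STACK_DIRECTIVE
    #endif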
MFC r314564:
Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
4.0.0 (branches/release_40 296509). The release will follow soon.
Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.
Also note that as of 4.0.0, lld should be able to link the base system
on amd64 and aarch64. See the WITH_LLD_IS_LD setting in src.conf(5).
Though please be aware that this is work in progress.
Release notes for llvm, clang and lld will be available here:
<http://releases.llvm.org/4.0.0/docs/ReleaseNotes.html>
<http://releases.llvm.org/4.0.0/tools/clang/docs/ReleaseNotes.html>
<http://releases.llvm.org/4.0.0/tools/lld/docs/ReleaseNotes.html>
Thanks to Ed Maste, Jan Beich, Antoine Brodin and Eric Fiselier for
their help.
Relnotes: yes
Exp-run: antoine
PR: 215969, 216008
MFC r314708:
For now, revert r287232 from upstream llvm trunk (by Daniil Fukalov):
[SCEV] limit recursion depth of CompareSCEVComplexity
Summary:
CompareSCEVComplexity goes too deep (50+ levels on quite a big unrolled
loop) and runs for an almost infinite time.
Added a cache of "equal" SCEV pairs for earlier cutoff of further
estimation. A recursion depth limit was also introduced as a parameter.
Reviewers: sanjoy
Subscribers: mzolotukhin, tstellarAMD, llvm-commits
Differential Revision: https://reviews.llvm.org/D26389
This commit is the cause of excessive compile times on skein_block.c
(and possibly other files) during kernel builds on amd64.
We never saw the problematic behavior described in this upstream commit,
so for now it is better to revert it. An upstream bug has been filed
here: https://bugs.llvm.org/show_bug.cgi?id=32142
Reported by: mjg
MFC r314795:
Reapply r287232 from upstream llvm trunk (by Daniil Fukalov):
[SCEV] limit recursion depth of CompareSCEVComplexity
Summary:
CompareSCEVComplexity goes too deep (50+ levels on quite a big unrolled
loop) and runs for an almost infinite time.
Added a cache of "equal" SCEV pairs for earlier cutoff of further
estimation. A recursion depth limit was also introduced as a parameter.
Reviewers: sanjoy
Subscribers: mzolotukhin, tstellarAMD, llvm-commits
Differential Revision: https://reviews.llvm.org/D26389
Pull in r296992 from upstream llvm trunk (by Sanjoy Das):
[SCEV] Decrease the recursion threshold for CompareValueComplexity
Fixes PR32142.
r287232 accidentally increased the recursion threshold for
CompareValueComplexity from 2 to 32. This change reverses that
change by introducing a separate flag for CompareValueComplexity's
threshold.
The latter revision fixes the excessive compile times for skein_block.c.
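The shape of the upstream fix can be sketched as a depth-limited comparator
with a memoized set of already-proven-equal pairs; all names and types below
are illustrative stand-ins, not the actual SCEV implementation:

    #include <set>
    #include <utility>
    #include <vector>

    // Hypothetical expression node standing in for a SCEV.
    struct Node { int Kind; std::vector<const Node *> Ops; };

    using EqCache = std::set<std::pair<const Node *, const Node *>>;

    // Returns -1/0/+1 like CompareSCEVComplexity; 0 also doubles as the
    // "give up and call them equal" result once MaxDepth is exceeded,
    // which bounds the otherwise explosive recursion.
    static int compareComplexity(EqCache &Cache, const Node *L, const Node *R,
                                 unsigned Depth, unsigned MaxDepth = 32) {
      if (L == R || Cache.count({L, R}))
        return 0;                                // Known equal: cut off early.
      if (Depth > MaxDepth)
        return 0;                                // Depth limit reached.
      if (L->Kind != R->Kind)
        return L->Kind < R->Kind ? -1 : 1;
      if (L->Ops.size() != R->Ops.size())
        return L->Ops.size() < R->Ops.size() ? -1 : 1;
      for (size_t I = 0, E = L->Ops.size(); I != E; ++I)
        if (int C = compareComplexity(Cache, L->Ops[I], R->Ops[I], Depth + 1,
                                      MaxDepth))
          return C;
      Cache.insert({L, R});                      // Memoize the "equal" verdict.
      return 0;
    }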
MFC r314907 | mmel | 2017-03-08 12:40:27 +0100 (Wed, 08 Mar 2017) | 7 lines
Unbreak ARMv6 world.
The new compiler_rt library imported with clang 4.0.0 has several fatal
issues (a non-functional __udivsi3, for example) with ARM-specific intrinsic
functions. As a temporary workaround, until upstream solves these problems,
disable all thumb[1][2] related features.
MFC r315016:
Update clang, llvm, lld, lldb, compiler-rt and libc++ to 4.0.0 release.
We were already very close to the last release candidate, so this is a
pretty minor update.
Relnotes: yes
MFC r316005:
Revert r314907, and pull in r298713 from upstream compiler-rt trunk (by
Weiming Zhao):
builtins: Select correct code fragments when compiling for Thumb1/Thumb2/ARM ISA.
Summary:
The value of __ARM_ARCH_ISA_THUMB isn't based on the actual compilation
mode (-mthumb, -marm); it reflects the capability of the given CPU.
Due to this:
- use __thumb__ and __thumb2__ instead of __ARM_ARCH_ISA_THUMB
- use '.thumb' directive consistently in all affected files
- decorate all thumb functions using
DEFINE_COMPILERRT_THUMB_FUNCTION()
---------
Note: This patch doesn't fix the broken Thumb1 variant of __udivsi3!
Reviewers: weimingz, rengolin, compnerd
Subscribers: aemerson, dim
Differential Revision: https://reviews.llvm.org/D30938
Discussed with: mmel
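The mode-versus-capability distinction reduces to which predefined macros are
tested; a minimal sketch (the fragment bodies are placeholders, and the real
builtins decorate assembly with DEFINE_COMPILERRT_THUMB_FUNCTION()):

    /* __ARM_ARCH_ISA_THUMB says what the target CPU *can* execute;
       __thumb__ / __thumb2__ say how this translation unit is *being
       compiled* (-mthumb vs. -marm), which is what selection must key off. */
    #if defined(__thumb2__)
      /* Thumb2 fragment: wide encodings available in Thumb state. */
    #elif defined(__thumb__)
      /* Thumb1 fragment: restricted 16-bit encodings only. */
    #else
      /* ARM fragment. */
    #endif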
Diffstat (limited to 'contrib/llvm/lib/Transforms/Scalar/GVN.cpp')
-rw-r--r--  contrib/llvm/lib/Transforms/Scalar/GVN.cpp | 153
1 file changed, 104 insertions(+), 49 deletions(-)
diff --git a/contrib/llvm/lib/Transforms/Scalar/GVN.cpp b/contrib/llvm/lib/Transforms/Scalar/GVN.cpp
index a35a106..0137378 100644
--- a/contrib/llvm/lib/Transforms/Scalar/GVN.cpp
+++ b/contrib/llvm/lib/Transforms/Scalar/GVN.cpp
@@ -33,6 +33,7 @@
 #include "llvm/Analysis/Loads.h"
 #include "llvm/Analysis/MemoryBuiltins.h"
 #include "llvm/Analysis/MemoryDependenceAnalysis.h"
+#include "llvm/Analysis/OptimizationDiagnosticInfo.h"
 #include "llvm/Analysis/PHITransAddr.h"
 #include "llvm/Analysis/TargetLibraryInfo.h"
 #include "llvm/Analysis/ValueTracking.h"
@@ -338,16 +339,9 @@ GVN::Expression GVN::ValueTable::createExtractvalueExpr(ExtractValueInst *EI) {
 //===----------------------------------------------------------------------===//
 
 GVN::ValueTable::ValueTable() : nextValueNumber(1) {}
-GVN::ValueTable::ValueTable(const ValueTable &Arg)
-    : valueNumbering(Arg.valueNumbering),
-      expressionNumbering(Arg.expressionNumbering), AA(Arg.AA), MD(Arg.MD),
-      DT(Arg.DT), nextValueNumber(Arg.nextValueNumber) {}
-GVN::ValueTable::ValueTable(ValueTable &&Arg)
-    : valueNumbering(std::move(Arg.valueNumbering)),
-      expressionNumbering(std::move(Arg.expressionNumbering)),
-      AA(std::move(Arg.AA)), MD(std::move(Arg.MD)), DT(std::move(Arg.DT)),
-      nextValueNumber(std::move(Arg.nextValueNumber)) {}
-GVN::ValueTable::~ValueTable() {}
+GVN::ValueTable::ValueTable(const ValueTable &) = default;
+GVN::ValueTable::ValueTable(ValueTable &&) = default;
+GVN::ValueTable::~ValueTable() = default;
 
 /// add - Insert a value into the table with a specified value number.
 void GVN::ValueTable::add(Value *V, uint32_t num) {
@@ -583,7 +577,7 @@ void GVN::ValueTable::verifyRemoved(const Value *V) const {
 // GVN Pass
 //===----------------------------------------------------------------------===//
 
-PreservedAnalyses GVN::run(Function &F, AnalysisManager<Function> &AM) {
+PreservedAnalyses GVN::run(Function &F, FunctionAnalysisManager &AM) {
   // FIXME: The order of evaluation of these 'getResult' calls is very
   // significant! Re-ordering these variables will cause GVN when run alone to
   // be less effective! We should fix memdep and basic-aa to not exhibit this
@@ -593,7 +587,9 @@ PreservedAnalyses GVN::run(Function &F, AnalysisManager<Function> &AM) {
   auto &TLI = AM.getResult<TargetLibraryAnalysis>(F);
   auto &AA = AM.getResult<AAManager>(F);
   auto &MemDep = AM.getResult<MemoryDependenceAnalysis>(F);
-  bool Changed = runImpl(F, AC, DT, TLI, AA, &MemDep);
+  auto *LI = AM.getCachedResult<LoopAnalysis>(F);
+  auto &ORE = AM.getResult<OptimizationRemarkEmitterAnalysis>(F);
+  bool Changed = runImpl(F, AC, DT, TLI, AA, &MemDep, LI, &ORE);
   if (!Changed)
     return PreservedAnalyses::all();
   PreservedAnalyses PA;
@@ -725,8 +721,9 @@ static Value *CoerceAvailableValueToLoadType(Value *StoredVal, Type *LoadedTy,
   assert(CanCoerceMustAliasedValueToLoad(StoredVal, LoadedTy, DL) &&
          "precondition violation - materialization can't fail");
 
-  if (auto *CExpr = dyn_cast<ConstantExpr>(StoredVal))
-    StoredVal = ConstantFoldConstantExpression(CExpr, DL);
+  if (auto *C = dyn_cast<Constant>(StoredVal))
+    if (auto *FoldedStoredVal = ConstantFoldConstant(C, DL))
+      StoredVal = FoldedStoredVal;
 
   // If this is already the right type, just return it.
   Type *StoredValTy = StoredVal->getType();
@@ -759,8 +756,9 @@ static Value *CoerceAvailableValueToLoadType(Value *StoredVal, Type *LoadedTy,
       StoredVal = IRB.CreateIntToPtr(StoredVal, LoadedTy);
   }
 
-  if (auto *CExpr = dyn_cast<ConstantExpr>(StoredVal))
-    StoredVal = ConstantFoldConstantExpression(CExpr, DL);
+  if (auto *C = dyn_cast<ConstantExpr>(StoredVal))
+    if (auto *FoldedStoredVal = ConstantFoldConstant(C, DL))
+      StoredVal = FoldedStoredVal;
 
   return StoredVal;
 }
@@ -804,8 +802,9 @@ static Value *CoerceAvailableValueToLoadType(Value *StoredVal, Type *LoadedTy,
     StoredVal = IRB.CreateBitCast(StoredVal, LoadedTy, "bitcast");
   }
 
-  if (auto *CExpr = dyn_cast<ConstantExpr>(StoredVal))
-    StoredVal = ConstantFoldConstantExpression(CExpr, DL);
+  if (auto *C = dyn_cast<Constant>(StoredVal))
+    if (auto *FoldedStoredVal = ConstantFoldConstant(C, DL))
+      StoredVal = FoldedStoredVal;
 
   return StoredVal;
 }
@@ -838,16 +837,6 @@ static int AnalyzeLoadFromClobberingWrite(Type *LoadTy, Value *LoadPtr,
   // a must alias.  AA must have gotten confused.
   // FIXME: Study to see if/when this happens.  One case is forwarding a memset
   // to a load from the base of the memset.
-#if 0
-  if (LoadOffset == StoreOffset) {
-    dbgs() << "STORE/LOAD DEP WITH COMMON POINTER MISSED:\n"
-           << "Base       = " << *StoreBase << "\n"
-           << "Store Ptr  = " << *WritePtr << "\n"
-           << "Store Offs = " << StoreOffset << "\n"
-           << "Load Ptr   = " << *LoadPtr << "\n";
-    abort();
-  }
-#endif
 
   // If the load and store don't overlap at all, the store doesn't provide
   // anything to the load.  In this case, they really don't alias at all, AA
@@ -856,8 +845,8 @@ static int AnalyzeLoadFromClobberingWrite(Type *LoadTy, Value *LoadPtr,
 
   if ((WriteSizeInBits & 7) | (LoadSize & 7))
     return -1;
-  uint64_t StoreSize = WriteSizeInBits >> 3;  // Convert to bytes.
-  LoadSize >>= 3;
+  uint64_t StoreSize = WriteSizeInBits / 8;  // Convert to bytes.
+  LoadSize /= 8;
 
   bool isAAFailure = false;
@@ -866,17 +855,8 @@ static int AnalyzeLoadFromClobberingWrite(Type *LoadTy, Value *LoadPtr,
   else
     isAAFailure = LoadOffset+int64_t(LoadSize) <= StoreOffset;
 
-  if (isAAFailure) {
-#if 0
-    dbgs() << "STORE LOAD DEP WITH COMMON BASE:\n"
-           << "Base       = " << *StoreBase << "\n"
-           << "Store Ptr  = " << *WritePtr << "\n"
-           << "Store Offs = " << StoreOffset << "\n"
-           << "Load Ptr   = " << *LoadPtr << "\n";
-    abort();
-#endif
+  if (isAAFailure)
     return -1;
-  }
 
   // If the Load isn't completely contained within the stored bits, we don't
   // have all the bits to feed it.  We could do something crazy in the future
@@ -1229,6 +1209,38 @@ static bool isLifetimeStart(const Instruction *Inst) {
   return false;
 }
 
+/// \brief Try to locate the three instruction involved in a missed
+/// load-elimination case that is due to an intervening store.
+static void reportMayClobberedLoad(LoadInst *LI, MemDepResult DepInfo,
+                                   DominatorTree *DT,
+                                   OptimizationRemarkEmitter *ORE) {
+  using namespace ore;
+  User *OtherAccess = nullptr;
+
+  OptimizationRemarkMissed R(DEBUG_TYPE, "LoadClobbered", LI);
+  R << "load of type " << NV("Type", LI->getType()) << " not eliminated"
+    << setExtraArgs();
+
+  for (auto *U : LI->getPointerOperand()->users())
+    if (U != LI && (isa<LoadInst>(U) || isa<StoreInst>(U)) &&
+        DT->dominates(cast<Instruction>(U), LI)) {
+      // FIXME: for now give up if there are multiple memory accesses that
+      // dominate the load.  We need further analysis to decide which one is
+      // that we're forwarding from.
+      if (OtherAccess)
+        OtherAccess = nullptr;
+      else
+        OtherAccess = U;
+    }
+
+  if (OtherAccess)
+    R << " in favor of " << NV("OtherAccess", OtherAccess);
+
+  R << " because it is clobbered by " << NV("ClobberedBy", DepInfo.getInst());
+
+  ORE->emit(R);
+}
+
 bool GVN::AnalyzeLoadAvailability(LoadInst *LI, MemDepResult DepInfo,
                                   Value *Address, AvailableValue &Res) {
@@ -1293,6 +1305,10 @@ bool GVN::AnalyzeLoadAvailability(LoadInst *LI, MemDepResult DepInfo,
       Instruction *I = DepInfo.getInst();
       dbgs() << " is clobbered by " << *I << '\n';
     );
+
+    if (ORE->allowExtraAnalysis())
+      reportMayClobberedLoad(LI, DepInfo, DT, ORE);
+
     return false;
   }
   assert(DepInfo.isDef() && "follows from above");
@@ -1556,6 +1572,13 @@ bool GVN::PerformLoadPRE(LoadInst *LI, AvailValInBlkVect &ValuesPerBlock,
 
   // Assign value numbers to the new instructions.
   for (Instruction *I : NewInsts) {
+    // Instructions that have been inserted in predecessor(s) to materialize
+    // the load address do not retain their original debug locations. Doing
+    // so could lead to confusing (but correct) source attributions.
+    // FIXME: How do we retain source locations without causing poor debugging
+    // behavior?
+    I->setDebugLoc(DebugLoc());
+
     // FIXME: We really _ought_ to insert these value numbers into their
     // parent's availability map.  However, in doing so, we risk getting into
     // ordering issues.  If a block hasn't been processed yet, we would be
@@ -1585,8 +1608,11 @@ bool GVN::PerformLoadPRE(LoadInst *LI, AvailValInBlkVect &ValuesPerBlock,
   if (auto *RangeMD = LI->getMetadata(LLVMContext::MD_range))
     NewLoad->setMetadata(LLVMContext::MD_range, RangeMD);
 
-  // Transfer DebugLoc.
-  NewLoad->setDebugLoc(LI->getDebugLoc());
+  // We do not propagate the old load's debug location, because the new
+  // load now lives in a different BB, and we want to avoid a jumpy line
+  // table.
+  // FIXME: How do we retain source locations without causing poor debugging
+  // behavior?
 
   // Add the newly created load.
   ValuesPerBlock.push_back(AvailableValueInBlock::get(UnavailablePred,
@@ -1605,10 +1631,21 @@ bool GVN::PerformLoadPRE(LoadInst *LI, AvailValInBlkVect &ValuesPerBlock,
   if (V->getType()->getScalarType()->isPointerTy())
     MD->invalidateCachedPointerInfo(V);
   markInstructionForDeletion(LI);
+  ORE->emit(OptimizationRemark(DEBUG_TYPE, "LoadPRE", LI)
+            << "load eliminated by PRE");
   ++NumPRELoad;
   return true;
 }
 
+static void reportLoadElim(LoadInst *LI, Value *AvailableValue,
+                           OptimizationRemarkEmitter *ORE) {
+  using namespace ore;
+  ORE->emit(OptimizationRemark(DEBUG_TYPE, "LoadElim", LI)
+            << "load of type " << NV("Type", LI->getType()) << " eliminated"
+            << setExtraArgs() << " in favor of "
+            << NV("InfavorOfValue", AvailableValue));
+}
+
 /// Attempt to eliminate a load whose dependencies are
 /// non-local by performing PHI construction.
 bool GVN::processNonLocalLoad(LoadInst *LI) {
@@ -1673,12 +1710,16 @@ bool GVN::processNonLocalLoad(LoadInst *LI) {
     if (isa<PHINode>(V))
      V->takeName(LI);
     if (Instruction *I = dyn_cast<Instruction>(V))
-      if (LI->getDebugLoc())
+      // If instruction I has debug info, then we should not update it.
+      // Also, if I has a null DebugLoc, then it is still potentially incorrect
+      // to propagate LI's DebugLoc because LI may not post-dominate I.
+      if (LI->getDebugLoc() && ValuesPerBlock.size() != 1)
         I->setDebugLoc(LI->getDebugLoc());
     if (V->getType()->getScalarType()->isPointerTy())
       MD->invalidateCachedPointerInfo(V);
     markInstructionForDeletion(LI);
     ++NumGVNLoad;
+    reportLoadElim(LI, V, ORE);
     return true;
   }
@@ -1754,7 +1795,12 @@ static void patchReplacementInstruction(Instruction *I, Value *Repl) {
 
   // Patch the replacement so that it is not more restrictive than the value
   // being replaced.
-  ReplInst->andIRFlags(I);
+  // Note that if 'I' is a load being replaced by some operation,
+  // for example, by an arithmetic operation, then andIRFlags()
+  // would just erase all math flags from the original arithmetic
+  // operation, which is clearly not wanted and not needed.
+  if (!isa<LoadInst>(I))
+    ReplInst->andIRFlags(I);
 
   // FIXME: If both the original and replacement value are part of the
   // same control-flow region (meaning that the execution of one
@@ -1820,6 +1866,7 @@ bool GVN::processLoad(LoadInst *L) {
     patchAndReplaceAllUsesWith(L, AvailableValue);
     markInstructionForDeletion(L);
     ++NumGVNLoad;
+    reportLoadElim(L, AvailableValue, ORE);
     // Tell MDA to rexamine the reused pointer since we might have more
     // information after forwarding it.
     if (MD && AvailableValue->getType()->getScalarType()->isPointerTy())
@@ -2197,7 +2244,8 @@ bool GVN::processInstruction(Instruction *I) {
 /// runOnFunction - This is the main transformation entry point for a function.
 bool GVN::runImpl(Function &F, AssumptionCache &RunAC, DominatorTree &RunDT,
                   const TargetLibraryInfo &RunTLI, AAResults &RunAA,
-                  MemoryDependenceResults *RunMD) {
+                  MemoryDependenceResults *RunMD, LoopInfo *LI,
+                  OptimizationRemarkEmitter *RunORE) {
   AC = &RunAC;
   DT = &RunDT;
   VN.setDomTree(DT);
@@ -2205,6 +2253,7 @@ bool GVN::runImpl(Function &F, AssumptionCache &RunAC, DominatorTree &RunDT,
   VN.setAliasAnalysis(&RunAA);
   MD = RunMD;
   VN.setMemDep(MD);
+  ORE = RunORE;
 
   bool Changed = false;
   bool ShouldContinue = true;
@@ -2214,9 +2263,9 @@ bool GVN::runImpl(Function &F, AssumptionCache &RunAC, DominatorTree &RunDT,
   for (Function::iterator FI = F.begin(), FE = F.end(); FI != FE; ) {
     BasicBlock *BB = &*FI++;
 
-    bool removedBlock =
-        MergeBlockIntoPredecessor(BB, DT, /* LoopInfo */ nullptr, MD);
-    if (removedBlock) ++NumGVNBlocks;
+    bool removedBlock = MergeBlockIntoPredecessor(BB, DT, LI, MD);
+    if (removedBlock)
+      ++NumGVNBlocks;
 
     Changed |= removedBlock;
   }
@@ -2711,13 +2760,17 @@ public:
     if (skipFunction(F))
       return false;
 
+    auto *LIWP = getAnalysisIfAvailable<LoopInfoWrapperPass>();
+
     return Impl.runImpl(
         F, getAnalysis<AssumptionCacheTracker>().getAssumptionCache(F),
         getAnalysis<DominatorTreeWrapperPass>().getDomTree(),
        getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(),
        getAnalysis<AAResultsWrapperPass>().getAAResults(),
        NoLoads ? nullptr
-                : &getAnalysis<MemoryDependenceWrapperPass>().getMemDep());
+                : &getAnalysis<MemoryDependenceWrapperPass>().getMemDep(),
+        LIWP ? &LIWP->getLoopInfo() : nullptr,
+        &getAnalysis<OptimizationRemarkEmitterWrapperPass>().getORE());
   }
 
   void getAnalysisUsage(AnalysisUsage &AU) const override {
@@ -2730,6 +2783,7 @@ public:
 
     AU.addPreserved<DominatorTreeWrapperPass>();
     AU.addPreserved<GlobalsAAWrapperPass>();
+    AU.addRequired<OptimizationRemarkEmitterWrapperPass>();
   }
 
 private:
@@ -2751,4 +2805,5 @@ INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(AAResultsWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(GlobalsAAWrapperPass)
+INITIALIZE_PASS_DEPENDENCY(OptimizationRemarkEmitterWrapperPass)
 INITIALIZE_PASS_END(GVNLegacyPass, "gvn", "Global Value Numbering", false, false)
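With the OptimizationRemarkEmitter wired into GVN as above, the new
LoadElim/LoadPRE remarks and LoadClobbered missed-optimization remarks can be
requested from the driver; an illustrative invocation (the source file name is
hypothetical):

    $ clang -O2 -c foo.c -Rpass=gvn -Rpass-missed=gvn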