author     dim <dim@FreeBSD.org>  2014-03-21 17:53:59 +0000
committer  dim <dim@FreeBSD.org>  2014-03-21 17:53:59 +0000
commit     9cedb8bb69b89b0f0c529937247a6a80cabdbaec (patch)
tree       c978f0e9ec1ab92dc8123783f30b08a7fd1e2a39 /contrib/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
parent     03fdc2934eb61c44c049a02b02aa974cfdd8a0eb (diff)
MFC 261991:
Upgrade our copy of llvm/clang to 3.4 release. This version supports
all of the features in the current working draft of the upcoming C++
standard, provisionally named C++1y.
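As a quick illustration (not part of the original commit message), the kind of
C++1y draft features this compiler accepts under -std=c++1y includes return
type deduction, binary literals and generic lambdas; a minimal sketch:

    // Illustrative only; not from the FreeBSD tree. Build with: clang++ -std=c++1y
    #include <iostream>

    auto twice(int x) { return 2 * x; }   // return type deduction for functions

    int main() {
      int mask = 0b1010;                                 // binary literal (decimal 10)
      auto add = [](auto a, auto b) { return a + b; };   // generic lambda
      std::cout << add(twice(mask), 1) << '\n';          // prints 21
      return 0;
    }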
The code generator's performance is greatly increased, and the loop
auto-vectorizer is now enabled at -Os and -O2 in addition to -O3. The
PowerPC backend has made several major improvements to code generation
quality and compile time, and the X86, SPARC, ARM32, AArch64 and SystemZ
backends have all seen major feature work.
Release notes for llvm and clang can be found here:
<http://llvm.org/releases/3.4/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.4/tools/clang/docs/ReleaseNotes.html>
MFC 262121 (by emaste):
Update lldb for clang/llvm 3.4 import
This commit largely restores the lldb source to the upstream r196259
snapshot with the addition of threaded inferior support and a few bug
fixes.
Specific upstream lldb revisions restored include:
SVN git
181387 779e6ac
181703 7bef4e2
182099 b31044e
182650 f2dcf35
182683 0d91b80
183862 15c1774
183929 99447a6
184177 0b2934b
184948 4dc3761
184954 007e7bc
186990 eebd175
Sponsored by: DARPA, AFRL
MFC 262186 (by emaste):
Fix mismerge in r262121
A break statement was lost in the merge. The omission had no functional
impact, but restoring it reduces the diff against upstream.
MFC 262303:
Pull in r197521 from upstream clang trunk (by rdivacky):
Use the integrated assembler by default on FreeBSD/ppc and ppc64.
Requested by: jhibbits
MFC 262611:
Pull in r196874 from upstream llvm trunk:
Fix a crash that occurs when PWD is invalid.
MCJIT needs to be able to run in hostile environments, even when PWD
is invalid. There's no need to crash MCJIT in this case.
The obvious fix is to simply leave MCContext's CompilationDir empty
when PWD can't be determined. This way, MCJIT clients and other
clients that link with LLVM don't need a valid working directory.
If we do want to guarantee a valid CompilationDir, that should be done
only for clients of getCompilationDir(). This is as simple as checking
for an empty string.
The only current use of getCompilationDir is EmitGenDwarfInfo, which
won't conceivably run with an invalid working dir. However, in the
purely hypothetical and untestable case that this happens, the
AT_comp_dir will be omitted from the compilation_unit DIE.
This should help fix assertions occurring with ports-mgmt/tinderbox,
when it is using jails, and sometimes invalidates clang's current
working directory.
Reported by: decke
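A rough sketch of the empty-CompilationDir approach described in the MFC 262611
entry above (the helper functions are invented for illustration; only
MCContext::getCompilationDir()/setCompilationDir() and sys::fs::current_path()
are real LLVM APIs):

    #include "llvm/ADT/SmallString.h"
    #include "llvm/ADT/StringRef.h"
    #include "llvm/MC/MCContext.h"
    #include "llvm/Support/FileSystem.h"

    static void initCompilationDir(llvm::MCContext &Ctx) {
      llvm::SmallString<128> CWD;
      // current_path() fails when PWD is invalid; in that case CompilationDir
      // is simply left empty instead of asserting or crashing the MCJIT client.
      if (!llvm::sys::fs::current_path(CWD))
        Ctx.setCompilationDir(CWD.str());
    }

    static void emitCompDirIfKnown(llvm::MCContext &Ctx) {
      llvm::StringRef Dir = Ctx.getCompilationDir();
      if (Dir.empty())
        return;  // unknown working directory: omit AT_comp_dir
      // ... emit AT_comp_dir using Dir ...
    }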
MFC 262809:
Pull in r203007 from upstream clang trunk:
Don't produce an alias between destructors with different calling conventions.
Fixes pr19007.
(Please note that this is an LLVM PR identifier, not a FreeBSD one.)
This should fix Firefox and/or libxul crashes (due to problems with
regparm/stdcall calling conventions) on i386.
Reported by: multiple users on freebsd-current
PR: bin/187103
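For illustration, a hypothetical reproducer in the spirit of LLVM PR19007 (not
the actual upstream test case): on i386 the base class destructor is stdcall
while the derived class's implicitly-defined destructor uses the default cdecl
convention, so emitting the latter as a mere alias of the former would mismatch
calling conventions at the call site.

    // Hypothetical example only. Compile for i386; the attribute is ignored
    // (with a warning) on other targets.
    struct Base {
      __attribute__((stdcall)) ~Base() {}
    };

    struct Derived : Base {
      // Implicit ~Derived(): default (cdecl) calling convention. Before the
      // fix, clang could emit it as an alias of ~Base() despite the mismatch.
    };

    void destroy(Derived *d) {
      delete d;  // must call a real ~Derived(), not a stdcall-aliased ~Base()
    }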
MFC 263048:
Repair recognition of "CC" as an alias for the C++ compiler, since it
was silently broken by upstream for a Windows-specific use-case.
Apparently some versions of CMake still rely on this archaic feature...
Reported by: rakuco
MFC 263049:
Garbage collect the old way of adding the libstdc++ include directories
in clang's InitHeaderSearch.cpp. This has been superseded by David
Chisnall's commit in r255321.
Moreover, if libc++ is used, the libstdc++ include directories should
not be in the search path at all. These directories are now only used
if you pass -stdlib=libstdc++.
Diffstat (limited to 'contrib/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp')
-rw-r--r--  contrib/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp  105
1 file changed, 49 insertions, 56 deletions
diff --git a/contrib/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp b/contrib/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
index f0d29c8..007e9b7 100644
--- a/contrib/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
+++ b/contrib/llvm/lib/Transforms/Scalar/CodeGenPrepare.cpp
@@ -22,7 +22,6 @@
 #include "llvm/Analysis/DominatorInternals.h"
 #include "llvm/Analysis/Dominators.h"
 #include "llvm/Analysis/InstructionSimplify.h"
-#include "llvm/Analysis/ProfileInfo.h"
 #include "llvm/Assembly/Writer.h"
 #include "llvm/IR/Constants.h"
 #include "llvm/IR/DataLayout.h"
@@ -76,10 +75,10 @@ namespace {
   class CodeGenPrepare : public FunctionPass {
     /// TLI - Keep a pointer of a TargetLowering to consult for determining
     /// transformation profitability.
+    const TargetMachine *TM;
     const TargetLowering *TLI;
     const TargetLibraryInfo *TLInfo;
     DominatorTree *DT;
-    ProfileInfo *PFI;
 
     /// CurInstIterator - As we scan instructions optimizing them, this is the
     /// next instruction to optimize. Xforms that can invalidate this should
@@ -100,8 +99,8 @@ namespace {
 
   public:
     static char ID; // Pass identification, replacement for typeid
-    explicit CodeGenPrepare(const TargetLowering *tli = 0)
-      : FunctionPass(ID), TLI(tli) {
+    explicit CodeGenPrepare(const TargetMachine *TM = 0)
+      : FunctionPass(ID), TM(TM), TLI(0) {
         initializeCodeGenPreparePass(*PassRegistry::getPassRegistry());
       }
     bool runOnFunction(Function &F);
@@ -110,7 +109,6 @@ namespace {
 
     virtual void getAnalysisUsage(AnalysisUsage &AU) const {
       AU.addPreserved<DominatorTree>();
-      AU.addPreserved<ProfileInfo>();
       AU.addRequired<TargetLibraryInfo>();
     }
 
@@ -139,17 +137,17 @@ INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfo)
 INITIALIZE_PASS_END(CodeGenPrepare, "codegenprepare",
                     "Optimize for code generation", false, false)
 
-FunctionPass *llvm::createCodeGenPreparePass(const TargetLowering *TLI) {
-  return new CodeGenPrepare(TLI);
+FunctionPass *llvm::createCodeGenPreparePass(const TargetMachine *TM) {
+  return new CodeGenPrepare(TM);
 }
 
 bool CodeGenPrepare::runOnFunction(Function &F) {
   bool EverMadeChange = false;
 
   ModifiedDT = false;
+  if (TM) TLI = TM->getTargetLowering();
   TLInfo = &getAnalysis<TargetLibraryInfo>();
   DT = getAnalysisIfAvailable<DominatorTree>();
-  PFI = getAnalysisIfAvailable<ProfileInfo>();
   OptSize = F.getAttributes().hasAttribute(AttributeSet::FunctionIndex,
                                            Attribute::OptimizeForSize);
 
@@ -205,7 +203,7 @@ bool CodeGenPrepare::runOnFunction(Function &F) {
       SmallVector<BasicBlock*, 2> Successors(succ_begin(BB), succ_end(BB));
 
       DeleteDeadBlock(BB);
-      
+
       for (SmallVectorImpl<BasicBlock*>::iterator
              II = Successors.begin(), IE = Successors.end(); II != IE; ++II)
         if (pred_begin(*II) == pred_end(*II))
@@ -440,10 +438,6 @@ void CodeGenPrepare::EliminateMostlyEmptyBlock(BasicBlock *BB) {
     DT->changeImmediateDominator(DestBB, NewIDom);
     DT->eraseNode(BB);
   }
-  if (PFI) {
-    PFI->replaceAllUses(BB, DestBB);
-    PFI->removeEdge(ProfileInfo::getEdge(BB, DestBB));
-  }
   BB->eraseFromParent();
   ++NumBlocksElim;
 
@@ -830,7 +824,7 @@ struct ExtAddrMode : public TargetLowering::AddrMode {
   ExtAddrMode() : BaseReg(0), ScaledReg(0) {}
   void print(raw_ostream &OS) const;
   void dump() const;
-  
+
   bool operator==(const ExtAddrMode& O) const {
     return (BaseReg == O.BaseReg) && (ScaledReg == O.ScaledReg) &&
            (BaseGV == O.BaseGV) && (BaseOffs == O.BaseOffs) &&
@@ -838,10 +832,12 @@ struct ExtAddrMode : public TargetLowering::AddrMode {
   }
 };
 
+#ifndef NDEBUG
 static inline raw_ostream &operator<<(raw_ostream &OS, const ExtAddrMode &AM) {
   AM.print(OS);
   return OS;
 }
+#endif
 
 void ExtAddrMode::print(raw_ostream &OS) const {
   bool NeedPlus = false;
@@ -866,7 +862,6 @@ void ExtAddrMode::print(raw_ostream &OS) const {
     OS << (NeedPlus ? " + " : "")
        << Scale << "*";
     WriteAsOperand(OS, ScaledReg, /*PrintType=*/false);
-    NeedPlus = true;
   }
 
   OS << ']';
@@ -891,16 +886,16 @@ class AddressingModeMatcher {
   /// the memory instruction that we're computing this address for.
   Type *AccessTy;
   Instruction *MemoryInst;
-  
+
   /// AddrMode - This is the addressing mode that we're building up. This is
   /// part of the return value of this addressing mode matching stuff.
   ExtAddrMode &AddrMode;
-  
+
   /// IgnoreProfitability - This is set to true when we should not do
   /// profitability checks. When true, IsProfitableToFoldIntoAddressingMode
   /// always returns true.
   bool IgnoreProfitability;
-  
+
   AddressingModeMatcher(SmallVectorImpl<Instruction*> &AMI,
                         const TargetLowering &T, Type *AT,
                         Instruction *MI, ExtAddrMode &AM)
@@ -908,7 +903,7 @@ class AddressingModeMatcher {
     IgnoreProfitability = false;
   }
 public:
-  
+
   /// Match - Find the maximal addressing mode that a load/store of V can fold,
   /// give an access type of AccessTy. This returns a list of involved
   /// instructions in AddrModeInsts.
@@ -918,7 +913,7 @@ public:
                            const TargetLowering &TLI) {
     ExtAddrMode Result;
 
-    bool Success = 
+    bool Success =
       AddressingModeMatcher(AddrModeInsts, TLI, AccessTy,
                             MemoryInst, Result).MatchAddr(V, 0);
     (void)Success; assert(Success && "Couldn't select *anything*?");
@@ -943,11 +938,11 @@ bool AddressingModeMatcher::MatchScaledValue(Value *ScaleReg, int64_t Scale,
   // mode. Just process that directly.
   if (Scale == 1)
     return MatchAddr(ScaleReg, Depth);
-  
+
   // If the scale is 0, it takes nothing to add this.
   if (Scale == 0)
     return true;
-  
+
   // If we already have a scale of this value, we can add to it, otherwise, we
   // need an available scale field.
   if (AddrMode.Scale != 0 && AddrMode.ScaledReg != ScaleReg)
@@ -966,7 +961,7 @@ bool AddressingModeMatcher::MatchScaledValue(Value *ScaleReg, int64_t Scale,
 
   // It was legal, so commit it.
   AddrMode = TestAddrMode;
-  
+
   // Okay, we decided that we can add ScaleReg+Scale to AddrMode. Check now
   // to see if ScaleReg is actually X+C. If so, we can turn this into adding
   // X*Scale + C*Scale to addr mode.
@@ -975,7 +970,7 @@ bool AddressingModeMatcher::MatchScaledValue(Value *ScaleReg, int64_t Scale,
       match(ScaleReg, m_Add(m_Value(AddLHS), m_ConstantInt(CI)))) {
     TestAddrMode.ScaledReg = AddLHS;
     TestAddrMode.BaseOffs += CI->getSExtValue()*TestAddrMode.Scale;
-    
+
     // If this addressing mode is legal, commit it and remember that we folded
     // this instruction.
     if (TLI.isLegalAddressingMode(TestAddrMode, AccessTy)) {
@@ -1026,7 +1021,7 @@ bool AddressingModeMatcher::MatchOperationAddr(User *AddrInst, unsigned Opcode,
                                                unsigned Depth) {
   // Avoid exponential behavior on extremely deep expression trees.
   if (Depth >= 5) return false;
-  
+
   switch (Opcode) {
   case Instruction::PtrToInt:
     // PtrToInt is always a noop, as we know that the int type is pointer sized.
@@ -1034,7 +1029,7 @@ bool AddressingModeMatcher::MatchOperationAddr(User *AddrInst, unsigned Opcode,
   case Instruction::IntToPtr:
     // This inttoptr is a no-op if the integer type is pointer sized.
     if (TLI.getValueType(AddrInst->getOperand(0)->getType()) ==
-        TLI.getPointerTy())
+        TLI.getPointerTy(AddrInst->getType()->getPointerAddressSpace()))
       return MatchAddr(AddrInst->getOperand(0), Depth);
     return false;
   case Instruction::BitCast:
@@ -1055,16 +1050,16 @@ bool AddressingModeMatcher::MatchOperationAddr(User *AddrInst, unsigned Opcode,
     if (MatchAddr(AddrInst->getOperand(1), Depth+1) &&
         MatchAddr(AddrInst->getOperand(0), Depth+1))
       return true;
-    
+
     // Restore the old addr mode info.
     AddrMode = BackupAddrMode;
     AddrModeInsts.resize(OldSize);
-    
+
     // Otherwise this was over-aggressive. Try merging in the LHS then the RHS.
     if (MatchAddr(AddrInst->getOperand(0), Depth+1) &&
         MatchAddr(AddrInst->getOperand(1), Depth+1))
       return true;
-    
+
     // Otherwise we definitely can't merge the ADD in.
     AddrMode = BackupAddrMode;
     AddrModeInsts.resize(OldSize);
@@ -1081,7 +1076,7 @@ bool AddressingModeMatcher::MatchOperationAddr(User *AddrInst, unsigned Opcode,
     int64_t Scale = RHS->getSExtValue();
     if (Opcode == Instruction::Shl)
       Scale = 1LL << Scale;
-    
+
     return MatchScaledValue(AddrInst->getOperand(0), Scale, Depth);
   }
   case Instruction::GetElementPtr: {
@@ -1089,7 +1084,7 @@ bool AddressingModeMatcher::MatchOperationAddr(User *AddrInst, unsigned Opcode,
     // one variable offset.
     int VariableOperand = -1;
     unsigned VariableScale = 0;
-    
+
     int64_t ConstantOffset = 0;
     const DataLayout *TD = TLI.getDataLayout();
     gep_type_iterator GTI = gep_type_begin(AddrInst);
@@ -1107,14 +1102,14 @@ bool AddressingModeMatcher::MatchOperationAddr(User *AddrInst, unsigned Opcode,
         // We only allow one variable index at the moment.
         if (VariableOperand != -1)
           return false;
-        
+
         // Remember the variable index.
         VariableOperand = i;
         VariableScale = TypeSize;
       }
     }
   }
-  
+
   // A common case is for the GEP to only do a constant offset. In this case,
   // just add it to the disp field and check validity.
   if (VariableOperand == -1) {
@@ -1208,7 +1203,7 @@ bool AddressingModeMatcher::MatchAddr(Value *Addr, unsigned Depth) {
       AddrModeInsts.push_back(I);
       return true;
     }
-    
+
     // It isn't profitable to do this, roll back.
     //cerr << "NOT FOLDING: " << *I;
     AddrMode = BackupAddrMode;
@@ -1254,7 +1249,7 @@ static bool IsOperandAMemoryOperand(CallInst *CI, InlineAsm *IA, Value *OpVal,
   TargetLowering::AsmOperandInfoVector TargetConstraints =
     TLI.ParseConstraints(ImmutableCallSite(CI));
   for (unsigned i = 0, e = TargetConstraints.size(); i != e; ++i) {
     TargetLowering::AsmOperandInfo &OpInfo = TargetConstraints[i];
-    
+
     // Compute the constraint code and ConstraintType to use.
     TLI.ComputeConstraintToUse(OpInfo, SDValue());
@@ -1279,7 +1274,7 @@ static bool FindAllMemoryUses(Instruction *I,
   // If we already considered this instruction, we're done.
   if (!ConsideredInsts.insert(I))
     return false;
-  
+
   // If this is an obviously unfoldable instruction, bail out.
   if (!MightBeFoldableInst(I))
     return true;
@@ -1293,24 +1288,24 @@ static bool FindAllMemoryUses(Instruction *I,
       MemoryUses.push_back(std::make_pair(LI, UI.getOperandNo()));
       continue;
     }
-    
+
     if (StoreInst *SI = dyn_cast<StoreInst>(U)) {
       unsigned opNo = UI.getOperandNo();
       if (opNo == 0) return true; // Storing addr, not into addr.
       MemoryUses.push_back(std::make_pair(SI, opNo));
       continue;
     }
-    
+
     if (CallInst *CI = dyn_cast<CallInst>(U)) {
       InlineAsm *IA = dyn_cast<InlineAsm>(CI->getCalledValue());
       if (!IA) return true;
-      
+
       // If this is a memory operand, we're cool, otherwise bail out.
       if (!IsOperandAMemoryOperand(CI, IA, I, TLI))
         return true;
       continue;
     }
-    
+
     if (FindAllMemoryUses(cast<Instruction>(U), MemoryUses, ConsideredInsts,
                           TLI))
       return true;
@@ -1328,17 +1323,17 @@ bool AddressingModeMatcher::ValueAlreadyLiveAtInst(Value *Val,Value *KnownLive1,
   // If Val is either of the known-live values, we know it is live!
   if (Val == 0 || Val == KnownLive1 || Val == KnownLive2)
     return true;
-  
+
   // All values other than instructions and arguments (e.g. constants) are live.
   if (!isa<Instruction>(Val) && !isa<Argument>(Val))
     return true;
-  
+
   // If Val is a constant sized alloca in the entry block, it is live, this is
   // true because it is just a reference to the stack/frame pointer, which is
   // live for the whole function.
   if (AllocaInst *AI = dyn_cast<AllocaInst>(Val))
     if (AI->isStaticAlloca()) return true;
-  
+
   // Check to see if this value is already used in the memory instruction's
   // block. If so, it's already live into the block at the very least, so we
   // can reasonably fold it.
@@ -1370,7 +1365,7 @@ bool AddressingModeMatcher::
 IsProfitableToFoldIntoAddressingMode(Instruction *I, ExtAddrMode &AMBefore,
                                      ExtAddrMode &AMAfter) {
   if (IgnoreProfitability) return true;
-  
+
   // AMBefore is the addressing mode before this instruction was folded into it,
   // and AMAfter is the addressing mode after the instruction was folded. Get
   // the set of registers referenced by AMAfter and subtract out those
@@ -1381,7 +1376,7 @@ IsProfitableToFoldIntoAddressingMode(Instruction *I, ExtAddrMode &AMBefore,
   // BaseReg and ScaleReg (global addresses are always available, as are any
   // folded immediates).
   Value *BaseReg = AMAfter.BaseReg, *ScaledReg = AMAfter.ScaledReg;
-  
+
   // If the BaseReg or ScaledReg was referenced by the previous addrmode, their
   // lifetime wasn't extended by adding this instruction.
   if (ValueAlreadyLiveAtInst(BaseReg, AMBefore.BaseReg, AMBefore.ScaledReg))
@@ -1402,7 +1397,7 @@ IsProfitableToFoldIntoAddressingMode(Instruction *I, ExtAddrMode &AMBefore,
   SmallPtrSet<Instruction*, 16> ConsideredInsts;
   if (FindAllMemoryUses(I, MemoryUses, ConsideredInsts, TLI))
     return false;  // Has a non-memory, non-foldable use!
-  
+
   // Now that we know that all uses of this instruction are part of a chain of
   // computation involving only operations that could theoretically be folded
   // into a memory use, loop over each of these uses and see if they could
@@ -1411,15 +1406,14 @@ IsProfitableToFoldIntoAddressingMode(Instruction *I, ExtAddrMode &AMBefore,
   for (unsigned i = 0, e = MemoryUses.size(); i != e; ++i) {
     Instruction *User = MemoryUses[i].first;
     unsigned OpNo = MemoryUses[i].second;
-    
+
     // Get the access type of this use. If the use isn't a pointer, we don't
    // know what it accesses.
     Value *Address = User->getOperand(OpNo);
     if (!Address->getType()->isPointerTy())
       return false;
-    Type *AddressAccessTy =
-      cast<PointerType>(Address->getType())->getElementType();
-    
+    Type *AddressAccessTy = Address->getType()->getPointerElementType();
+
     // Do a match against the root of this address, ignoring profitability. This
     // will tell us if the addressing mode for the memory operation will
     // *actually* cover the shared instruction.
@@ -1434,10 +1428,10 @@ IsProfitableToFoldIntoAddressingMode(Instruction *I, ExtAddrMode &AMBefore,
     if (std::find(MatchedAddrModeInsts.begin(), MatchedAddrModeInsts.end(),
                   I) == MatchedAddrModeInsts.end())
       return false;
-    
+
     MatchedAddrModeInsts.clear();
   }
-  
+
   return true;
 }
 
@@ -1572,9 +1566,7 @@ bool CodeGenPrepare::OptimizeMemoryInst(Instruction *MemoryInst, Value *Addr,
   } else {
     DEBUG(dbgs() << "CGP: SINKING nonlocal addrmode: " << AddrMode << " for "
                  << *MemoryInst);
-    Type *IntPtrTy =
-          TLI->getDataLayout()->getIntPtrType(AccessTy->getContext());
-    
+    Type *IntPtrTy = TLI->getDataLayout()->getIntPtrType(Addr->getType());
     Value *Result = 0;
 
     // Start with the base register. Do this first so that subsequent address
@@ -1893,7 +1885,8 @@ bool CodeGenPrepare::OptimizeInst(Instruction *I) {
     // It is possible for very late stage optimizations (such as SimplifyCFG)
     // to introduce PHI nodes too late to be cleaned up. If we detect such a
     // trivial PHI, go ahead and zap it here.
-    if (Value *V = SimplifyInstruction(P)) {
+    if (Value *V = SimplifyInstruction(P, TLI ? TLI->getDataLayout() : 0,
+                                       TLInfo, DT)) {
       P->replaceAllUsesWith(V);
       P->eraseFromParent();
       ++NumPHIsElim;
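In a nutshell, the interface change visible in the diff above: the pass is now
constructed from a TargetMachine rather than a TargetLowering, and looks up the
TargetLowering itself in runOnFunction(). A minimal sketch of how a pass
pipeline would use the new signature (illustrative only; assumes the LLVM
3.4-era legacy pass manager and that createCodeGenPreparePass() is declared in
llvm/Transforms/Scalar.h):

    #include "llvm/PassManager.h"
    #include "llvm/Target/TargetMachine.h"
    #include "llvm/Transforms/Scalar.h"

    void addCodeGenPrepare(llvm::PassManagerBase &PM, llvm::TargetMachine *TM) {
      // TM may be null: runOnFunction() only sets TLI when TM is non-null, so
      // the pass still runs, just without TargetLowering-guided transformations.
      PM.add(llvm::createCodeGenPreparePass(TM));
    }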