author | dim <dim@FreeBSD.org> | 2016-12-26 20:36:37 +0000 |
---|---|---|
committer | dim <dim@FreeBSD.org> | 2016-12-26 20:36:37 +0000 |
commit | 06210ae42d418d50d8d9365d5c9419308ae9e7ee (patch) | |
tree | ab60b4cdd6e430dda1f292a46a77ddb744723f31 /contrib/llvm/lib/Target/AMDGPU/R600ControlFlowFinalizer.cpp | |
parent | 2dd166267f53df1c3748b4325d294b9b839de74b (diff) | |
MFC r309124:
Upgrade our copies of clang, llvm, lldb, compiler-rt and libc++ to 3.9.0
release, and add lld 3.9.0. Also completely revamp the build system for
clang, llvm, lldb and their related tools.
Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.
Release notes for llvm, clang and lld are available here:
<http://llvm.org/releases/3.9.0/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/clang/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/lld/docs/ReleaseNotes.html>
Thanks to Ed Maste, Bryan Drewery, Andrew Turner, Antoine Brodin and Jan
Beich for their help.
Relnotes: yes
MFC r309147:
Pull in r282174 from upstream llvm trunk (by Krzysztof Parzyszek):
[PPC] Set SP after loading data from stack frame, if no red zone is
present
Follow-up to r280705: Make sure that the SP is only restored after
all data is loaded from the stack frame, if there is no red zone.
This completes the fix for
https://llvm.org/bugs/show_bug.cgi?id=26519.
Differential Revision: https://reviews.llvm.org/D24466
Reported by: Mark Millard
PR: 214433
MFC r309149:
Pull in r283060 from upstream llvm trunk (by Hal Finkel):
[PowerPC] Refactor soft-float support, and enable PPC64 soft float
This change enables soft-float for PowerPC64, and also makes
soft-float disable all vector instruction sets for both 32-bit and
64-bit modes. This latter part is necessary because the PPC backend
canonicalizes many Altivec vector types to floating-point types, and
so soft-float breaks scalarization support for many operations. Both
for embedded targets and for operating-system kernels desiring
soft-float support, it seems reasonable that disabling hardware
floating-point also disables vector instructions (embedded targets
without hardware floating point support are unlikely to have Altivec,
etc. and operating system kernels desiring not to use floating-point
registers to lower syscall cost are unlikely to want to use vector
registers either). If someone needs this to work, we'll need to
change the fact that we promote many Altivec operations to act on
v4f32. To make it possible to disable Altivec when soft-float is
enabled, hardware floating-point support needs to be expressed as a
positive feature, like the others, and not a negative feature,
because target features cannot have dependencies on the disabling of
some other feature. So +soft-float has now become -hard-float.
Fixes PR26970.
Pull in r283061 from upstream clang trunk (by Hal Finkel):
[PowerPC] Enable soft-float for PPC64, and +soft-float -> -hard-float
Enable soft-float support on PPC64, as the backend now supports it.
Also, the backend now uses -hard-float instead of +soft-float, so set
the target features accordingly.
Fixes PR26970.
Reported by: Mark Millard
PR: 214433
MFC r309212:
Add a few missed clang 3.9.0 files to OptionalObsoleteFiles.
MFC r309262:
Fix packaging for clang, lldb and lld 3.9.0
During the upgrade of clang/llvm etc to 3.9.0 in r309124, the PACKAGE
directive in the usr.bin/clang/*.mk files got dropped accidentally.
Restore it, with a few minor changes and additions:
* Correct license in clang.ucl to NCSA
* Add PACKAGE=clang for clang and most of the "ll" tools
* Put lldb in its own package
* Put lld in its own package
Reviewed by: gjb, jmallett
Differential Revision: https://reviews.freebsd.org/D8666
MFC r309656:
During the bootstrap phase, when building the minimal llvm library on
PowerPC, add lib/Support/Atomic.cpp. This is needed because upstream
llvm revision r271821 disabled the use of std::call_once, which causes
some fallback functions from Atomic.cpp to be used instead.
Reported by: Mark Millard
PR: 214902
MFC r309835:
Tentatively apply https://reviews.llvm.org/D18730 to work around gcc PR
70528 (bogus error: constructor required before non-static data member).
This should fix buildworld with the external gcc package.
Reported by: https://jenkins.freebsd.org/job/FreeBSD_HEAD_amd64_gcc/
MFC r310194:
Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
3.9.1 release.
Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.
Release notes for llvm, clang and lld will be available here:
<http://releases.llvm.org/3.9.1/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/clang/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/lld/docs/ReleaseNotes.html>
Relnotes: yes
Diffstat (limited to 'contrib/llvm/lib/Target/AMDGPU/R600ControlFlowFinalizer.cpp')
-rw-r--r-- | contrib/llvm/lib/Target/AMDGPU/R600ControlFlowFinalizer.cpp | 158 |
1 file changed, 90 insertions, 68 deletions
```diff
diff --git a/contrib/llvm/lib/Target/AMDGPU/R600ControlFlowFinalizer.cpp b/contrib/llvm/lib/Target/AMDGPU/R600ControlFlowFinalizer.cpp
index bd80bb2..d5bda4a 100644
--- a/contrib/llvm/lib/Target/AMDGPU/R600ControlFlowFinalizer.cpp
+++ b/contrib/llvm/lib/Target/AMDGPU/R600ControlFlowFinalizer.cpp
@@ -39,16 +39,16 @@ struct CFStack {
     FIRST_NON_WQM_PUSH_W_FULL_ENTRY = 3
   };
 
-  const AMDGPUSubtarget *ST;
+  const R600Subtarget *ST;
   std::vector<StackItem> BranchStack;
   std::vector<StackItem> LoopStack;
   unsigned MaxStackSize;
   unsigned CurrentEntries;
   unsigned CurrentSubEntries;
 
-  CFStack(const AMDGPUSubtarget *st, unsigned ShaderType) : ST(st),
+  CFStack(const R600Subtarget *st, CallingConv::ID cc) : ST(st),
       // We need to reserve a stack entry for CALL_FS in vertex shaders.
-      MaxStackSize(ShaderType == ShaderType::VERTEX ? 1 : 0),
+      MaxStackSize(cc == CallingConv::AMDGPU_VS ? 1 : 0),
       CurrentEntries(0), CurrentSubEntries(0) { }
 
   unsigned getLoopDepth();
@@ -119,7 +119,7 @@ unsigned CFStack::getSubEntrySize(CFStack::StackItem Item) {
     return 0;
   case CFStack::FIRST_NON_WQM_PUSH:
     assert(!ST->hasCaymanISA());
-    if (ST->getGeneration() <= AMDGPUSubtarget::R700) {
+    if (ST->getGeneration() <= R600Subtarget::R700) {
       // +1 For the push operation.
       // +2 Extra space required.
       return 3;
@@ -132,7 +132,7 @@ unsigned CFStack::getSubEntrySize(CFStack::StackItem Item) {
       return 2;
     }
   case CFStack::FIRST_NON_WQM_PUSH_W_FULL_ENTRY:
-    assert(ST->getGeneration() >= AMDGPUSubtarget::EVERGREEN);
+    assert(ST->getGeneration() >= R600Subtarget::EVERGREEN);
     // +1 For the push operation.
     // +1 Extra space required.
     return 2;
@@ -142,8 +142,8 @@ unsigned CFStack::getSubEntrySize(CFStack::StackItem Item) {
 }
 
 void CFStack::updateMaxStackSize() {
-  unsigned CurrentStackSize = CurrentEntries +
-                              (RoundUpToAlignment(CurrentSubEntries, 4) / 4);
+  unsigned CurrentStackSize =
+      CurrentEntries + (alignTo(CurrentSubEntries, 4) / 4);
   MaxStackSize = std::max(CurrentStackSize, MaxStackSize);
 }
 
@@ -159,7 +159,7 @@ void CFStack::pushBranch(unsigned Opcode, bool isWQM) {
       // See comment in
       // CFStack::getSubEntrySize()
     else if (CurrentEntries > 0 &&
-             ST->getGeneration() > AMDGPUSubtarget::EVERGREEN &&
+             ST->getGeneration() > R600Subtarget::EVERGREEN &&
              !ST->hasCaymanISA() &&
              !branchStackContains(CFStack::FIRST_NON_WQM_PUSH_W_FULL_ENTRY))
       Item = CFStack::FIRST_NON_WQM_PUSH_W_FULL_ENTRY;
@@ -220,10 +220,10 @@ private:
   const R600InstrInfo *TII;
   const R600RegisterInfo *TRI;
   unsigned MaxFetchInst;
-  const AMDGPUSubtarget *ST;
+  const R600Subtarget *ST;
 
-  bool IsTrivialInst(MachineInstr *MI) const {
-    switch (MI->getOpcode()) {
+  bool IsTrivialInst(MachineInstr &MI) const {
+    switch (MI.getOpcode()) {
     case AMDGPU::KILL:
     case AMDGPU::RETURN:
       return true;
@@ -234,7 +234,7 @@ private:
 
   const MCInstrDesc &getHWInstrDesc(ControlFlowInstruction CFI) const {
     unsigned Opcode = 0;
-    bool isEg = (ST->getGeneration() >= AMDGPUSubtarget::EVERGREEN);
+    bool isEg = (ST->getGeneration() >= R600Subtarget::EVERGREEN);
     switch (CFI) {
     case CF_TC:
       Opcode = isEg ? AMDGPU::CF_TC_EG : AMDGPU::CF_TC_R600;
@@ -278,11 +278,12 @@ private:
     return TII->get(Opcode);
   }
 
-  bool isCompatibleWithClause(const MachineInstr *MI,
-                              std::set<unsigned> &DstRegs) const {
+  bool isCompatibleWithClause(const MachineInstr &MI,
+                              std::set<unsigned> &DstRegs) const {
     unsigned DstMI, SrcMI;
-    for (MachineInstr::const_mop_iterator I = MI->operands_begin(),
-         E = MI->operands_end(); I != E; ++I) {
+    for (MachineInstr::const_mop_iterator I = MI.operands_begin(),
+                                          E = MI.operands_end();
+         I != E; ++I) {
       const MachineOperand &MO = *I;
       if (!MO.isReg())
         continue;
@@ -318,20 +319,20 @@ private:
     MachineBasicBlock::iterator ClauseHead = I;
     std::vector<MachineInstr *> ClauseContent;
     unsigned AluInstCount = 0;
-    bool IsTex = TII->usesTextureCache(ClauseHead);
+    bool IsTex = TII->usesTextureCache(*ClauseHead);
     std::set<unsigned> DstRegs;
     for (MachineBasicBlock::iterator E = MBB.end(); I != E; ++I) {
-      if (IsTrivialInst(I))
+      if (IsTrivialInst(*I))
         continue;
       if (AluInstCount >= MaxFetchInst)
         break;
-      if ((IsTex && !TII->usesTextureCache(I)) ||
-          (!IsTex && !TII->usesVertexCache(I)))
+      if ((IsTex && !TII->usesTextureCache(*I)) ||
+          (!IsTex && !TII->usesVertexCache(*I)))
         break;
-      if (!isCompatibleWithClause(I, DstRegs))
+      if (!isCompatibleWithClause(*I, DstRegs))
         break;
       AluInstCount ++;
-      ClauseContent.push_back(I);
+      ClauseContent.push_back(&*I);
     }
     MachineInstr *MIb = BuildMI(MBB, ClauseHead, MBB.findDebugLoc(ClauseHead),
         getHWInstrDesc(IsTex?CF_TC:CF_VC))
@@ -340,28 +341,37 @@ private:
     return ClauseFile(MIb, std::move(ClauseContent));
   }
 
-  void getLiteral(MachineInstr *MI, std::vector<int64_t> &Lits) const {
+  void getLiteral(MachineInstr &MI, std::vector<MachineOperand *> &Lits) const {
     static const unsigned LiteralRegs[] = {
       AMDGPU::ALU_LITERAL_X,
       AMDGPU::ALU_LITERAL_Y,
       AMDGPU::ALU_LITERAL_Z,
       AMDGPU::ALU_LITERAL_W
     };
-    const SmallVector<std::pair<MachineOperand *, int64_t>, 3 > Srcs =
+    const SmallVector<std::pair<MachineOperand *, int64_t>, 3> Srcs =
         TII->getSrcs(MI);
-    for (unsigned i = 0, e = Srcs.size(); i < e; ++i) {
-      if (Srcs[i].first->getReg() != AMDGPU::ALU_LITERAL_X)
+    for (const auto &Src:Srcs) {
+      if (Src.first->getReg() != AMDGPU::ALU_LITERAL_X)
         continue;
-      int64_t Imm = Srcs[i].second;
-      std::vector<int64_t>::iterator It =
-          std::find(Lits.begin(), Lits.end(), Imm);
+      int64_t Imm = Src.second;
+      std::vector<MachineOperand*>::iterator It =
+          std::find_if(Lits.begin(), Lits.end(),
+                      [&](MachineOperand* val)
+                          { return val->isImm() && (val->getImm() == Imm);});
+
+      // Get corresponding Operand
+      MachineOperand &Operand = MI.getOperand(
+          TII->getOperandIdx(MI.getOpcode(), AMDGPU::OpName::literal));
+
       if (It != Lits.end()) {
+        // Reuse existing literal reg
         unsigned Index = It - Lits.begin();
-        Srcs[i].first->setReg(LiteralRegs[Index]);
+        Src.first->setReg(LiteralRegs[Index]);
       } else {
+        // Allocate new literal reg
         assert(Lits.size() < 4 && "Too many literals in Instruction Group");
-        Srcs[i].first->setReg(LiteralRegs[Lits.size()]);
-        Lits.push_back(Imm);
+        Src.first->setReg(LiteralRegs[Lits.size()]);
+        Lits.push_back(&Operand);
       }
     }
   }
@@ -384,56 +394,66 @@ private:
   ClauseFile
   MakeALUClause(MachineBasicBlock &MBB, MachineBasicBlock::iterator &I)
       const {
-    MachineBasicBlock::iterator ClauseHead = I;
+    MachineInstr &ClauseHead = *I;
     std::vector<MachineInstr *> ClauseContent;
     I++;
     for (MachineBasicBlock::instr_iterator E = MBB.instr_end(); I != E;) {
-      if (IsTrivialInst(I)) {
+      if (IsTrivialInst(*I)) {
         ++I;
         continue;
       }
       if (!I->isBundle() && !TII->isALUInstr(I->getOpcode()))
         break;
-      std::vector<int64_t> Literals;
+      std::vector<MachineOperand *>Literals;
      if (I->isBundle()) {
-        MachineInstr *DeleteMI = I;
+        MachineInstr &DeleteMI = *I;
         MachineBasicBlock::instr_iterator BI = I.getInstrIterator();
         while (++BI != E && BI->isBundledWithPred()) {
           BI->unbundleFromPred();
-          for (unsigned i = 0, e = BI->getNumOperands(); i != e; ++i) {
-            MachineOperand &MO = BI->getOperand(i);
+          for (MachineOperand &MO : BI->operands()) {
             if (MO.isReg() && MO.isInternalRead())
               MO.setIsInternalRead(false);
           }
-          getLiteral(&*BI, Literals);
+          getLiteral(*BI, Literals);
           ClauseContent.push_back(&*BI);
         }
         I = BI;
-        DeleteMI->eraseFromParent();
+        DeleteMI.eraseFromParent();
       } else {
-        getLiteral(I, Literals);
-        ClauseContent.push_back(I);
+        getLiteral(*I, Literals);
+        ClauseContent.push_back(&*I);
         I++;
       }
-      for (unsigned i = 0, e = Literals.size(); i < e; i+=2) {
-        unsigned literal0 = Literals[i];
-        unsigned literal2 = (i + 1 < e)?Literals[i + 1]:0;
-        MachineInstr *MILit = BuildMI(MBB, I, I->getDebugLoc(),
-            TII->get(AMDGPU::LITERALS))
-            .addImm(literal0)
-            .addImm(literal2);
+      for (unsigned i = 0, e = Literals.size(); i < e; i += 2) {
+        MachineInstrBuilder MILit = BuildMI(MBB, I, I->getDebugLoc(),
+                                            TII->get(AMDGPU::LITERALS));
+        if (Literals[i]->isImm()) {
+            MILit.addImm(Literals[i]->getImm());
+        } else {
+            MILit.addGlobalAddress(Literals[i]->getGlobal(),
+                                   Literals[i]->getOffset());
+        }
+        if (i + 1 < e) {
+          if (Literals[i + 1]->isImm()) {
+            MILit.addImm(Literals[i + 1]->getImm());
+          } else {
+            MILit.addGlobalAddress(Literals[i + 1]->getGlobal(),
+                                   Literals[i + 1]->getOffset());
+          }
+        } else
+          MILit.addImm(0);
         ClauseContent.push_back(MILit);
       }
     }
     assert(ClauseContent.size() < 128 && "ALU clause is too big");
-    ClauseHead->getOperand(7).setImm(ClauseContent.size() - 1);
-    return ClauseFile(ClauseHead, std::move(ClauseContent));
+    ClauseHead.getOperand(7).setImm(ClauseContent.size() - 1);
+    return ClauseFile(&ClauseHead, std::move(ClauseContent));
   }
 
   void
   EmitFetchClause(MachineBasicBlock::iterator InsertPos, ClauseFile &Clause,
       unsigned &CfCount) {
-    CounterPropagateAddr(Clause.first, CfCount);
+    CounterPropagateAddr(*Clause.first, CfCount);
     MachineBasicBlock *BB = Clause.first->getParent();
     BuildMI(BB, InsertPos->getDebugLoc(), TII->get(AMDGPU::FETCH_CLAUSE))
         .addImm(CfCount);
@@ -447,7 +467,7 @@ private:
   EmitALUClause(MachineBasicBlock::iterator InsertPos, ClauseFile &Clause,
       unsigned &CfCount) {
     Clause.first->getOperand(0).setImm(0);
-    CounterPropagateAddr(Clause.first, CfCount);
+    CounterPropagateAddr(*Clause.first, CfCount);
     MachineBasicBlock *BB = Clause.first->getParent();
     BuildMI(BB, InsertPos->getDebugLoc(), TII->get(AMDGPU::ALU_CLAUSE))
         .addImm(CfCount);
@@ -457,13 +477,13 @@ private:
     CfCount += Clause.second.size();
   }
 
-  void CounterPropagateAddr(MachineInstr *MI, unsigned Addr) const {
-    MI->getOperand(0).setImm(Addr + MI->getOperand(0).getImm());
+  void CounterPropagateAddr(MachineInstr &MI, unsigned Addr) const {
+    MI.getOperand(0).setImm(Addr + MI.getOperand(0).getImm());
   }
 
   void CounterPropagateAddr(const std::set<MachineInstr *> &MIs,
                             unsigned Addr) const {
     for (MachineInstr *MI : MIs) {
-      CounterPropagateAddr(MI, Addr);
+      CounterPropagateAddr(*MI, Addr);
     }
   }
 
@@ -472,20 +492,21 @@ public:
       : MachineFunctionPass(ID), TII(nullptr), TRI(nullptr), ST(nullptr) {}
 
   bool runOnMachineFunction(MachineFunction &MF) override {
-    ST = &MF.getSubtarget<AMDGPUSubtarget>();
+    ST = &MF.getSubtarget<R600Subtarget>();
     MaxFetchInst = ST->getTexVTXClauseSize();
-    TII = static_cast<const R600InstrInfo *>(ST->getInstrInfo());
-    TRI = static_cast<const R600RegisterInfo *>(ST->getRegisterInfo());
+    TII = ST->getInstrInfo();
+    TRI = ST->getRegisterInfo();
+
     R600MachineFunctionInfo *MFI = MF.getInfo<R600MachineFunctionInfo>();
-    CFStack CFStack(ST, MFI->getShaderType());
+
+    CFStack CFStack(ST, MF.getFunction()->getCallingConv());
     for (MachineFunction::iterator MB = MF.begin(), ME = MF.end(); MB != ME;
         ++MB) {
       MachineBasicBlock &MBB = *MB;
       unsigned CfCount = 0;
       std::vector<std::pair<unsigned, std::set<MachineInstr *> > > LoopStack;
       std::vector<MachineInstr * > IfThenElseStack;
-      if (MFI->getShaderType() == ShaderType::VERTEX) {
+      if (MF.getFunction()->getCallingConv() == CallingConv::AMDGPU_VS) {
        BuildMI(MBB, MBB.begin(), MBB.findDebugLoc(MBB.begin()),
            getHWInstrDesc(CF_CALL_FS));
        CfCount++;
@@ -493,10 +514,10 @@ public:
       std::vector<ClauseFile> FetchClauses, AluClauses;
       std::vector<MachineInstr *> LastAlu(1);
       std::vector<MachineInstr *> ToPopAfter;
-      
+
       for (MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end();
           I != E;) {
-        if (TII->usesTextureCache(I) || TII->usesVertexCache(I)) {
+        if (TII->usesTextureCache(*I) || TII->usesVertexCache(*I)) {
           DEBUG(dbgs() << CfCount << ":"; I->dump(););
           FetchClauses.push_back(MakeFetchClause(MBB, I));
           CfCount++;
@@ -508,7 +529,7 @@
         if (MI->getOpcode() != AMDGPU::ENDIF)
           LastAlu.back() = nullptr;
         if (MI->getOpcode() == AMDGPU::CF_ALU)
-          LastAlu.back() = MI;
+          LastAlu.back() = &*MI;
         I++;
         bool RequiresWorkAround =
             CFStack.requiresWorkAroundForInst(MI->getOpcode());
@@ -571,7 +592,7 @@
         case AMDGPU::ELSE: {
           MachineInstr * JumpInst = IfThenElseStack.back();
           IfThenElseStack.pop_back();
-          CounterPropagateAddr(JumpInst, CfCount);
+          CounterPropagateAddr(*JumpInst, CfCount);
           MachineInstr *MIb = BuildMI(MBB, MI, MBB.findDebugLoc(MI),
               getHWInstrDesc(CF_ELSE))
               .addImm(0)
@@ -595,10 +616,10 @@
             DEBUG(dbgs() << CfCount << ":"; MIb->dump(););
             CfCount++;
           }
-          
+
           MachineInstr *IfOrElseInst = IfThenElseStack.back();
           IfThenElseStack.pop_back();
-          CounterPropagateAddr(IfOrElseInst, CfCount);
+          CounterPropagateAddr(*IfOrElseInst, CfCount);
           IfOrElseInst->getOperand(1).setImm(1);
           LastAlu.pop_back();
           MI->eraseFromParent();
@@ -625,15 +646,16 @@
         case AMDGPU::RETURN: {
           BuildMI(MBB, MI, MBB.findDebugLoc(MI), getHWInstrDesc(CF_END));
           CfCount++;
-          MI->eraseFromParent();
           if (CfCount % 2) {
             BuildMI(MBB, I, MBB.findDebugLoc(MI), TII->get(AMDGPU::PAD));
             CfCount++;
           }
+          MI->eraseFromParent();
           for (unsigned i = 0, e = FetchClauses.size(); i < e; i++)
             EmitFetchClause(I, FetchClauses[i], CfCount);
           for (unsigned i = 0, e = AluClauses.size(); i < e; i++)
             EmitALUClause(I, AluClauses[i], CfCount);
+          break;
         }
         default:
           if (TII->isExport(MI->getOpcode())) {
```