author:    dim <dim@FreeBSD.org>                    2017-09-26 19:56:36 +0000
committer: Luiz Souza <luiz@netgate.com>            2018-02-21 15:12:19 -0300
commit:    1dcd2e8d24b295bc73e513acec2ed1514bb66be4 (patch)
tree:      4bd13a34c251e980e1a6b13584ca1f63b0dfe670 /contrib/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
parent:    f45541ca2a56a1ba1202f94c080b04e96c1fa239 (diff)
Merge clang, llvm, lld, lldb, compiler-rt and libc++ 5.0.0 release.
MFC r309126 (by emaste): Correct lld llvm-tblgen dependency file name.

MFC r309169: Get rid of separate Subversion mergeinfo properties for llvm-dwarfdump and llvm-lto. The mergeinfo confuses Subversion enormously, and these directories will just use the mergeinfo for llvm itself.

MFC r312765: Pull in r276136 from upstream llvm trunk (by Wei Mi):

  Use ValueOffsetPair to enhance value reuse during SCEV expansion.

  In D12090, the ExprValueMap was added to reuse existing values during SCEV expansion. However, const folding and sext/zext distribution can make the reuse still difficult. A simplified case is: suppose we know S1 expands to V1 in ExprValueMap, and

    S1 = S2 + C_a
    S3 = S2 + C_b

  where C_a and C_b are different SCEVConstants. Then we'd like to expand S3 as V1 - C_a + C_b instead of expanding S2 literally. It is helpful when S2 is a complex SCEV expr and S2 has no entry in ExprValueMap, which is usually caused by the fact that S3 is generated from S1 after const folding.

  In order to do that, we represent ExprValueMap as a mapping from SCEV to ValueOffsetPair. We will save both S1->{V1, 0} and S2->{V1, C_a} into the ExprValueMap when we create SCEV for V1. When S3 is expanded, it will first expand S2 to V1 - C_a because of S2->{V1, C_a} in the map, then expand S3 to V1 - C_a + C_b.

  Differential Revision: https://reviews.llvm.org/D21313

This should fix assertion failures when building OpenCV >= 3.1.

PR: 215649

MFC r312831: Revert r312765 for now, since it causes assertions when building lang/spidermonkey24.

Reported by: antoine
PR: 215649

MFC r316511 (by jhb): Add an implementation of __ffssi2() derived from __ffsdi2(). Newer versions of GCC include an __ffssi2() symbol in libgcc, and the compiler can emit calls to it in generated code. This is true for at least GCC 6.2 when compiling world for mips and mips64.

Reviewed by: jmallett, dim
Sponsored by: DARPA / AFRL
Differential Revision: https://reviews.freebsd.org/D10086

MFC r318601 (by adrian): [libcompiler-rt] add bswapdi2/bswapsi2. This is required for mips gcc 6.3 userland to build/run.

Reviewed by: emaste, dim
Approved by: emaste
Differential Revision: https://reviews.freebsd.org/D10838

MFC r318884 (by emaste): lldb: map TRAP_CAP to a trace trap.

In the absence of a more specific handler for TRAP_CAP (generated by ENOTCAPABLE or ECAPMODE while in capability mode), treat it as a trace trap. Example usage (testing the bug in PR219173):

  % proccontrol -m trapcap lldb usr.bin/hexdump/obj/hexdump -- -Cv -s 1 /bin/ls
  ...
  (lldb) run
  Process 12980 launching
  Process 12980 launched: '.../usr.bin/hexdump/obj/hexdump' (x86_64)
  Process 12980 stopped
  * thread #1, stop reason = trace
      frame #0: 0x0000004b80c65f1a libc.so.7`__sys_lseek + 10
  ...

In the future we should have LLDB control the trapcap procctl itself (as it does with ASLR), as well as report a specific stop reason. This change eliminates an assertion failure from LLDB for now.

MFC r319796: Remove a few unneeded files from libllvm, libclang and liblldb.

MFC r319885 (by emaste): lld: ELF: Fix ICF crash on absolute symbol relocations. If two sections contained relocations to absolute symbols with the same value, we would crash when trying to access their sections. Add a check that both symbols point to sections before accessing their sections, and treat absolute symbols as equal if their values are equal.

Obtained from: LLD commit r292578

MFC r319918: Revert r319796 for now, it can cause undefined references when linking in some circumstances.
Reported by: Shawn Webb <shawn.webb@hardenedbsd.org>

MFC r319957 (by emaste): lld: Add armelf emulation mode.

Obtained from: LLD r305375

MFC r321369: Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to 5.0.0 (trunk r308421). Upstream has branched for the 5.0.0 release, which should be in about a month. Please report bugs and regressions, so we can get them into the release. Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11 support to build; see UPDATING for more information.

MFC r321420: Add a few more object files to liblldb, which should solve errors when linking the lldb executable in some cases; in particular, when the -ffunction-sections -fdata-sections options are turned off, or ineffective.

Reported by: Shawn Webb, Mark Millard

MFC r321433: Clean up stale Options.inc files from the previous libllvm build for clang 4.0.0. Otherwise, these can get included before the two newly generated ones (which are different) for clang 5.0.0.

Reported by: Mark Millard

MFC r321439 (by bdrewery): Move the llvm Options.inc hack from r321433 for NO_CLEAN to lib/clang/libllvm. The files are only ever generated to .OBJDIR, not to WORLDTMP (as a sysroot), and are only ever included from a compilation. So using a beforebuild target here removes the file before the compilation tries to include it.

MFC r321664: Pull in r308891 from upstream llvm trunk (by Benjamin Kramer):

  [CodeGenPrepare] Cut off FindAllMemoryUses if there are too many uses. This avoids excessive compile time. The case I'm looking at is Function.cpp from an old version of LLVM that still had the giant memcmp string matcher in it. Before r308322 this compiled in about 2 minutes; after it, clang takes infinite* time to compile it. With this patch we're at 5 min, which is still bad, but this is a pathological case. The cut-off at 20 uses was chosen by looking at other cut-offs in LLVM for user scanning. It's probably too high, but it does the job and is very unlikely to regress anything. Fixes PR33900.

  * I'm impatient and aborted after 15 minutes; on the bug report it was killed after 2h.

Pull in r308986 from upstream llvm trunk (by Simon Pilgrim):

  [X86][CGP] Reduce memcmp() expansion to 2 load pairs (PR33914). D35067/rL308322 attempted to support up to 4 load pairs for memcmp inlining, which resulted in regressions for some optimized libc memcmp implementations (PR33914). Until we can match these more optimal cases, this patch reduces the memcmp expansion to a maximum of 2 load pairs (which matches what we do for -Os).

  This patch should be considered for the 5.0.0 release branch as well.

  Differential Revision: https://reviews.llvm.org/D35830

These fix a hang (or extremely long compile time) when building older LLVM ports.

Reported by: antoine
PR: 219139

MFC r321719: Pull in r309503 from upstream clang trunk (by Richard Smith):

  PR33902: Invalidate line number cache when adding more text to an existing buffer. This led to crashes, as the line number cache would report a bogus line number for a line of code, and we'd try to find a nonexistent column within the line when printing diagnostics.

This fixes an assertion when building the graphics/champlain port.

Reported by: antoine, kwm
PR: 219139

MFC r321723: Upgrade our copies of clang, llvm, lld and lldb to r309439 from the upstream release_50 branch. This is just after upstream's 5.0.0-rc1.

MFC r322320: Upgrade our copies of clang, llvm and libc++ to r310316 from the upstream release_50 branch.
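
As a toy illustration of the ValueOffsetPair reuse described under r312765 above (names are illustrative; this is not the SCEVExpander code itself): once S1 = S2 + C_a has been expanded to V1, the map records both S1 -> {V1, 0} and S2 -> {V1, C_a}, so a later S3 = S2 + C_b can be emitted as V1 - C_a + C_b without re-expanding S2:

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>

    // {V, Off} records that the mapped expression equals <value V> minus Off.
    struct ValueOffsetPair {
      std::string V;
      int64_t Off;
    };

    int main() {
      std::map<std::string, ValueOffsetPair> ExprValueMap;
      const int64_t Ca = 8, Cb = 24;
      ExprValueMap["S1"] = {"V1", 0};  // S1 was expanded literally to V1
      ExprValueMap["S2"] = {"V1", Ca}; // implied by S1 = S2 + Ca
      // Expanding S3 = S2 + Cb now reuses V1 instead of re-expanding S2:
      const ValueOffsetPair &P = ExprValueMap.at("S2");
      std::cout << "S3 = " << P.V << " - " << P.Off << " + " << Cb << "\n";
    }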
MFC r322326 (by emaste): lldb: Make i386-*-freebsd expression work on the JIT path.

  * Enable i386 ABI creation for freebsd.
  * Added an extra argument in ABISysV_i386::PrepareTrivialCall for the mmap syscall.
  * Unlike Linux, the last argument of mmap is actually 64-bit (off_t). This requires us to push an additional word for the higher-order bits.
  * Prior to this change, a ktrace dump would show mmap failures due to an invalid argument coming from the 6th mmap argument.

Submitted by: Karnajit Wangkhem
Differential Revision: https://reviews.llvm.org/D34776

MFC r322360 (by emaste): lldb: Report inferior signals as signals, not exceptions, on FreeBSD. This is the FreeBSD equivalent of LLVM r238549. This serves 2 purposes:

  * LLDB should handle the inferior process signals SIGSEGV/SIGILL/SIGBUS/SIGFPE the way they are supposed to be handled. Prior to this fix, these signals would neither create a coredump, nor exit from the debugger, nor work for the signal-handling scenario.
  * eInvalidCrashReason need not report "unknown crash reason" if we have a valid si_signo.

llvm.org/pr23699

Patch by Karnajit Wangkhem
Differential Revision: https://reviews.llvm.org/D35223
Submitted by: Karnajit Wangkhem
Obtained from: LLVM r310591

MFC r322474 (by emaste): lld: Add `-z muldefs` option.

Obtained from: LLVM r310757

MFC r322740: Upgrade our copies of clang, llvm, lld and libc++ to r311219 from the upstream release_50 branch.

MFC r322855: Upgrade our copies of clang, llvm, lldb and compiler-rt to r311606 from the upstream release_50 branch. As of this version, lib/msun's trig test should also work correctly again (see bug 220989 for more information).

PR: 220989

MFC r323112: Upgrade our copies of clang, llvm, lldb and compiler-rt to r312293 from the upstream release_50 branch. This corresponds to 5.0.0 rc4. As of this version, the cad/stepcode port should now compile in a more reasonable time on i386 (see bug 221836 for more information).

PR: 221836

MFC r323245: Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to the 5.0.0 release (upstream r312559). Release notes for llvm, clang and lld will be available here soon:

  <http://releases.llvm.org/5.0.0/docs/ReleaseNotes.html>
  <http://releases.llvm.org/5.0.0/tools/clang/docs/ReleaseNotes.html>
  <http://releases.llvm.org/5.0.0/tools/lld/docs/ReleaseNotes.html>

Relnotes: yes

(cherry picked from commit 12cd91cf4c6b96a24427c0de5374916f2808d263)
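
To make the memcmp change in r321664/r308986 above concrete, here is a hypothetical sketch of what a two-load-pair expansion of an equality-only 16-byte memcmp looks like (illustrative C++, not CodeGenPrepare's actual output):

    #include <cstdint>
    #include <cstring>

    // memcmp(a, b, 16) == 0 expanded as two 8-byte load pairs: each pair is
    // loaded, XORed, and the results ORed together, so a single test decides
    // equality without a libc call.
    static bool Equal16(const void *A, const void *B) {
      uint64_t A0, B0, A1, B1;
      std::memcpy(&A0, A, 8);
      std::memcpy(&B0, B, 8);
      std::memcpy(&A1, static_cast<const char *>(A) + 8, 8);
      std::memcpy(&B1, static_cast<const char *>(B) + 8, 8);
      return ((A0 ^ B0) | (A1 ^ B1)) == 0;
    }

    int main() {
      char X[16] = "fifteen chars!!"; // 15 characters plus the NUL byte
      char Y[16] = "fifteen chars!!";
      return Equal16(X, Y) ? 0 : 1;   // exits 0: the buffers are equal
    }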
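
And a hypothetical sketch of the i386 calling-convention point in r322326 above: a 64-bit off_t spans two 32-bit stack slots, so lowering the mmap call must push an extra word for the high-order bits (the vector's back models the top of a downward-growing stack; names are illustrative):

    #include <cstdint>
    #include <vector>

    // Push a 64-bit argument onto a 32-bit stack: high half first, so the
    // low half ends up at the lower address (the top of the stack).
    static void Push64(std::vector<uint32_t> &Stack, uint64_t Off) {
      Stack.push_back(static_cast<uint32_t>(Off >> 32));
      Stack.push_back(static_cast<uint32_t>(Off));
    }

    int main() {
      std::vector<uint32_t> Stack;
      Push64(Stack, UINT64_C(0x100000000)); // offsets above 4 GiB need both words
      // Pushing only the low word would hand mmap a garbage high half: the
      // "invalid argument" failure the ktrace dump showed.
    }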
Diffstat (limited to 'contrib/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp')
-rw-r--r--  contrib/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp | 899
1 file changed, 676 insertions(+), 223 deletions(-)
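
One representative fold from the diff below: translateFSub() now emits G_FNEG for "-0.0 - X" instead of a generic G_FSUB. A standalone sanity check of the IEEE-754 identity the fold relies on (illustrative only, not part of the patch):

    #include <cassert>
    #include <cmath>
    #include <limits>

    int main() {
      // For every non-NaN double X, -0.0 - X equals -X, including the sign
      // of zero results, which is what makes the fold safe.
      for (double X : {0.0, -0.0, 1.5, -2.25,
                       std::numeric_limits<double>::infinity()}) {
        assert(std::signbit(-0.0 - X) == std::signbit(-X));
        assert(-0.0 - X == -X);
      }
    }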
diff --git a/contrib/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp b/contrib/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
index 89a042f..ed1bd99 100644
--- a/contrib/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ b/contrib/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -1,4 +1,4 @@
-//===-- llvm/CodeGen/GlobalISel/IRTranslator.cpp - IRTranslator --*- C++ -*-==//
+//===- llvm/CodeGen/GlobalISel/IRTranslator.cpp - IRTranslator ---*- C++ -*-==//
//
// The LLVM Compiler Infrastructure
//
@@ -11,43 +11,93 @@
//===----------------------------------------------------------------------===//
#include "llvm/CodeGen/GlobalISel/IRTranslator.h"
-
+#include "llvm/ADT/STLExtras.h"
+#include "llvm/ADT/ScopeExit.h"
+#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallVector.h"
-#include "llvm/CodeGen/GlobalISel/CallLowering.h"
+#include "llvm/Analysis/OptimizationDiagnosticInfo.h"
#include "llvm/CodeGen/Analysis.h"
-#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/GlobalISel/CallLowering.h"
+#include "llvm/CodeGen/LowLevelType.h"
+#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineFrameInfo.h"
-#include "llvm/CodeGen/MachineModuleInfo.h"
+#include "llvm/CodeGen/MachineFunction.h"
+#include "llvm/CodeGen/MachineInstrBuilder.h"
+#include "llvm/CodeGen/MachineMemOperand.h"
+#include "llvm/CodeGen/MachineOperand.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/CodeGen/TargetPassConfig.h"
+#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Constant.h"
+#include "llvm/IR/Constants.h"
+#include "llvm/IR/DataLayout.h"
+#include "llvm/IR/DebugInfo.h"
+#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GetElementPtrTypeIterator.h"
+#include "llvm/IR/InlineAsm.h"
+#include "llvm/IR/InstrTypes.h"
+#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
+#include "llvm/IR/Intrinsics.h"
+#include "llvm/IR/LLVMContext.h"
+#include "llvm/IR/Metadata.h"
#include "llvm/IR/Type.h"
+#include "llvm/IR/User.h"
#include "llvm/IR/Value.h"
+#include "llvm/MC/MCContext.h"
+#include "llvm/Pass.h"
+#include "llvm/Support/Casting.h"
+#include "llvm/Support/CodeGen.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Support/ErrorHandling.h"
+#include "llvm/Support/LowLevelTypeImpl.h"
+#include "llvm/Support/MathExtras.h"
+#include "llvm/Support/raw_ostream.h"
+#include "llvm/Target/TargetFrameLowering.h"
#include "llvm/Target/TargetIntrinsicInfo.h"
#include "llvm/Target/TargetLowering.h"
+#include "llvm/Target/TargetMachine.h"
+#include "llvm/Target/TargetRegisterInfo.h"
+#include "llvm/Target/TargetSubtargetInfo.h"
+#include <algorithm>
+#include <cassert>
+#include <cstdint>
+#include <iterator>
+#include <string>
+#include <utility>
+#include <vector>
#define DEBUG_TYPE "irtranslator"
using namespace llvm;
char IRTranslator::ID = 0;
+
INITIALIZE_PASS_BEGIN(IRTranslator, DEBUG_TYPE, "IRTranslator LLVM IR -> MI",
false, false)
INITIALIZE_PASS_DEPENDENCY(TargetPassConfig)
INITIALIZE_PASS_END(IRTranslator, DEBUG_TYPE, "IRTranslator LLVM IR -> MI",
false, false)
-static void reportTranslationError(const Value &V, const Twine &Message) {
- std::string ErrStorage;
- raw_string_ostream Err(ErrStorage);
- Err << Message << ": " << V << '\n';
- report_fatal_error(Err.str());
+static void reportTranslationError(MachineFunction &MF,
+ const TargetPassConfig &TPC,
+ OptimizationRemarkEmitter &ORE,
+ OptimizationRemarkMissed &R) {
+ MF.getProperties().set(MachineFunctionProperties::Property::FailedISel);
+
+ // Print the function name explicitly if we don't have a debug location (which
+ // makes the diagnostic less useful) or if we're going to emit a raw error.
+ if (!R.getLocation().isValid() || TPC.isGlobalISelAbortEnabled())
+ R << (" (in function: " + MF.getName() + ")").str();
+
+ if (TPC.isGlobalISelAbortEnabled())
+ report_fatal_error(R.getMsg());
+ else
+ ORE.emit(R);
}
-IRTranslator::IRTranslator() : MachineFunctionPass(ID), MRI(nullptr) {
+IRTranslator::IRTranslator() : MachineFunctionPass(ID) {
initializeIRTranslatorPass(*PassRegistry::getPassRegistry());
}
@@ -56,31 +106,33 @@ void IRTranslator::getAnalysisUsage(AnalysisUsage &AU) const {
MachineFunctionPass::getAnalysisUsage(AU);
}
-
unsigned IRTranslator::getOrCreateVReg(const Value &Val) {
unsigned &ValReg = ValToVReg[&Val];
- // Check if this is the first time we see Val.
- if (!ValReg) {
- // Fill ValRegsSequence with the sequence of registers
- // we need to concat together to produce the value.
- assert(Val.getType()->isSized() &&
- "Don't know how to create an empty vreg");
- unsigned VReg = MRI->createGenericVirtualRegister(LLT{*Val.getType(), *DL});
- ValReg = VReg;
-
- if (auto CV = dyn_cast<Constant>(&Val)) {
- bool Success = translate(*CV, VReg);
- if (!Success) {
- if (!TPC->isGlobalISelAbortEnabled()) {
- MF->getProperties().set(
- MachineFunctionProperties::Property::FailedISel);
- return VReg;
- }
- reportTranslationError(Val, "unable to translate constant");
- }
+
+ if (ValReg)
+ return ValReg;
+
+ // Fill ValRegsSequence with the sequence of registers
+ // we need to concat together to produce the value.
+ assert(Val.getType()->isSized() &&
+ "Don't know how to create an empty vreg");
+ unsigned VReg =
+ MRI->createGenericVirtualRegister(getLLTForType(*Val.getType(), *DL));
+ ValReg = VReg;
+
+ if (auto CV = dyn_cast<Constant>(&Val)) {
+ bool Success = translate(*CV, VReg);
+ if (!Success) {
+ OptimizationRemarkMissed R("gisel-irtranslator", "GISelFailure",
+ MF->getFunction()->getSubprogram(),
+ &MF->getFunction()->getEntryBlock());
+ R << "unable to translate constant: " << ore::NV("Type", Val.getType());
+ reportTranslationError(*MF, *TPC, *ORE, R);
+ return VReg;
}
}
- return ValReg;
+
+ return VReg;
}
int IRTranslator::getOrCreateFrameIndex(const AllocaInst &AI) {
@@ -112,28 +164,27 @@ unsigned IRTranslator::getMemOpAlignment(const Instruction &I) {
} else if (const LoadInst *LI = dyn_cast<LoadInst>(&I)) {
Alignment = LI->getAlignment();
ValTy = LI->getType();
- } else if (!TPC->isGlobalISelAbortEnabled()) {
- MF->getProperties().set(
- MachineFunctionProperties::Property::FailedISel);
+ } else {
+ OptimizationRemarkMissed R("gisel-irtranslator", "", &I);
+ R << "unable to translate memop: " << ore::NV("Opcode", &I);
+ reportTranslationError(*MF, *TPC, *ORE, R);
return 1;
- } else
- llvm_unreachable("unhandled memory instruction");
+ }
return Alignment ? Alignment : DL->getABITypeAlignment(ValTy);
}
-MachineBasicBlock &IRTranslator::getOrCreateBB(const BasicBlock &BB) {
+MachineBasicBlock &IRTranslator::getMBB(const BasicBlock &BB) {
MachineBasicBlock *&MBB = BBToMBB[&BB];
- if (!MBB) {
- MBB = MF->CreateMachineBasicBlock(&BB);
- MF->push_back(MBB);
-
- if (BB.hasAddressTaken())
- MBB->setHasAddressTaken();
- }
+ assert(MBB && "BasicBlock was not encountered before");
return *MBB;
}
+void IRTranslator::addMachineCFGPred(CFGEdge Edge, MachineBasicBlock *NewPred) {
+ assert(NewPred && "new predecessor must be a real MachineBasicBlock");
+ MachinePreds[Edge].push_back(NewPred);
+}
+
bool IRTranslator::translateBinaryOp(unsigned Opcode, const User &U,
MachineIRBuilder &MIRBuilder) {
// FIXME: handle signed/unsigned wrapping flags.
@@ -149,6 +200,18 @@ bool IRTranslator::translateBinaryOp(unsigned Opcode, const User &U,
return true;
}
+bool IRTranslator::translateFSub(const User &U, MachineIRBuilder &MIRBuilder) {
+ // -0.0 - X --> G_FNEG
+ if (isa<Constant>(U.getOperand(0)) &&
+ U.getOperand(0) == ConstantFP::getZeroValueForNegation(U.getType())) {
+ MIRBuilder.buildInstr(TargetOpcode::G_FNEG)
+ .addDef(getOrCreateVReg(U))
+ .addUse(getOrCreateVReg(*U.getOperand(1)));
+ return true;
+ }
+ return translateBinaryOp(TargetOpcode::G_FSUB, U, MIRBuilder);
+}
+
bool IRTranslator::translateCompare(const User &U,
MachineIRBuilder &MIRBuilder) {
const CmpInst *CI = dyn_cast<CmpInst>(&U);
@@ -158,9 +221,14 @@ bool IRTranslator::translateCompare(const User &U,
CmpInst::Predicate Pred =
CI ? CI->getPredicate() : static_cast<CmpInst::Predicate>(
cast<ConstantExpr>(U).getPredicate());
-
if (CmpInst::isIntPredicate(Pred))
MIRBuilder.buildICmp(Pred, Res, Op0, Op1);
+ else if (Pred == CmpInst::FCMP_FALSE)
+ MIRBuilder.buildCopy(
+ Res, getOrCreateVReg(*Constant::getNullValue(CI->getType())));
+ else if (Pred == CmpInst::FCMP_TRUE)
+ MIRBuilder.buildCopy(
+ Res, getOrCreateVReg(*Constant::getAllOnesValue(CI->getType())));
else
MIRBuilder.buildFCmp(Pred, Res, Op0, Op1);
@@ -183,18 +251,21 @@ bool IRTranslator::translateBr(const User &U, MachineIRBuilder &MIRBuilder) {
// We want a G_BRCOND to the true BB followed by an unconditional branch.
unsigned Tst = getOrCreateVReg(*BrInst.getCondition());
const BasicBlock &TrueTgt = *cast<BasicBlock>(BrInst.getSuccessor(Succ++));
- MachineBasicBlock &TrueBB = getOrCreateBB(TrueTgt);
+ MachineBasicBlock &TrueBB = getMBB(TrueTgt);
MIRBuilder.buildBrCond(Tst, TrueBB);
}
const BasicBlock &BrTgt = *cast<BasicBlock>(BrInst.getSuccessor(Succ));
- MachineBasicBlock &TgtBB = getOrCreateBB(BrTgt);
- MIRBuilder.buildBr(TgtBB);
+ MachineBasicBlock &TgtBB = getMBB(BrTgt);
+ MachineBasicBlock &CurBB = MIRBuilder.getMBB();
+
+ // If the unconditional target is the layout successor, fallthrough.
+ if (!CurBB.isLayoutSuccessor(&TgtBB))
+ MIRBuilder.buildBr(TgtBB);
// Link successors.
- MachineBasicBlock &CurBB = MIRBuilder.getMBB();
for (const BasicBlock *Succ : BrInst.successors())
- CurBB.addSuccessor(&getOrCreateBB(*Succ));
+ CurBB.addSuccessor(&getMBB(*Succ));
return true;
}
@@ -209,30 +280,52 @@ bool IRTranslator::translateSwitch(const User &U,
const SwitchInst &SwInst = cast<SwitchInst>(U);
const unsigned SwCondValue = getOrCreateVReg(*SwInst.getCondition());
+ const BasicBlock *OrigBB = SwInst.getParent();
- LLT LLTi1 = LLT(*Type::getInt1Ty(U.getContext()), *DL);
+ LLT LLTi1 = getLLTForType(*Type::getInt1Ty(U.getContext()), *DL);
for (auto &CaseIt : SwInst.cases()) {
const unsigned CaseValueReg = getOrCreateVReg(*CaseIt.getCaseValue());
const unsigned Tst = MRI->createGenericVirtualRegister(LLTi1);
MIRBuilder.buildICmp(CmpInst::ICMP_EQ, Tst, CaseValueReg, SwCondValue);
- MachineBasicBlock &CurBB = MIRBuilder.getMBB();
- MachineBasicBlock &TrueBB = getOrCreateBB(*CaseIt.getCaseSuccessor());
+ MachineBasicBlock &CurMBB = MIRBuilder.getMBB();
+ const BasicBlock *TrueBB = CaseIt.getCaseSuccessor();
+ MachineBasicBlock &TrueMBB = getMBB(*TrueBB);
- MIRBuilder.buildBrCond(Tst, TrueBB);
- CurBB.addSuccessor(&TrueBB);
+ MIRBuilder.buildBrCond(Tst, TrueMBB);
+ CurMBB.addSuccessor(&TrueMBB);
+ addMachineCFGPred({OrigBB, TrueBB}, &CurMBB);
- MachineBasicBlock *FalseBB =
+ MachineBasicBlock *FalseMBB =
MF->CreateMachineBasicBlock(SwInst.getParent());
- MF->push_back(FalseBB);
- MIRBuilder.buildBr(*FalseBB);
- CurBB.addSuccessor(FalseBB);
+ // Insert the comparison blocks one after the other.
+ MF->insert(std::next(CurMBB.getIterator()), FalseMBB);
+ MIRBuilder.buildBr(*FalseMBB);
+ CurMBB.addSuccessor(FalseMBB);
- MIRBuilder.setMBB(*FalseBB);
+ MIRBuilder.setMBB(*FalseMBB);
}
// handle default case
- MachineBasicBlock &DefaultBB = getOrCreateBB(*SwInst.getDefaultDest());
- MIRBuilder.buildBr(DefaultBB);
- MIRBuilder.getMBB().addSuccessor(&DefaultBB);
+ const BasicBlock *DefaultBB = SwInst.getDefaultDest();
+ MachineBasicBlock &DefaultMBB = getMBB(*DefaultBB);
+ MIRBuilder.buildBr(DefaultMBB);
+ MachineBasicBlock &CurMBB = MIRBuilder.getMBB();
+ CurMBB.addSuccessor(&DefaultMBB);
+ addMachineCFGPred({OrigBB, DefaultBB}, &CurMBB);
+
+ return true;
+}
+
+bool IRTranslator::translateIndirectBr(const User &U,
+ MachineIRBuilder &MIRBuilder) {
+ const IndirectBrInst &BrInst = cast<IndirectBrInst>(U);
+
+ const unsigned Tgt = getOrCreateVReg(*BrInst.getAddress());
+ MIRBuilder.buildBrIndirect(Tgt);
+
+ // Link successors.
+ MachineBasicBlock &CurBB = MIRBuilder.getMBB();
+ for (const BasicBlock *Succ : BrInst.successors())
+ CurBB.addSuccessor(&getMBB(*Succ));
return true;
}
@@ -240,47 +333,38 @@ bool IRTranslator::translateSwitch(const User &U,
bool IRTranslator::translateLoad(const User &U, MachineIRBuilder &MIRBuilder) {
const LoadInst &LI = cast<LoadInst>(U);
- if (!TPC->isGlobalISelAbortEnabled() && LI.isAtomic())
- return false;
-
- assert(!LI.isAtomic() && "only non-atomic loads are supported at the moment");
auto Flags = LI.isVolatile() ? MachineMemOperand::MOVolatile
: MachineMemOperand::MONone;
Flags |= MachineMemOperand::MOLoad;
unsigned Res = getOrCreateVReg(LI);
unsigned Addr = getOrCreateVReg(*LI.getPointerOperand());
- LLT VTy{*LI.getType(), *DL}, PTy{*LI.getPointerOperand()->getType(), *DL};
+
MIRBuilder.buildLoad(
Res, Addr,
*MF->getMachineMemOperand(MachinePointerInfo(LI.getPointerOperand()),
Flags, DL->getTypeStoreSize(LI.getType()),
- getMemOpAlignment(LI)));
+ getMemOpAlignment(LI), AAMDNodes(), nullptr,
+ LI.getSyncScopeID(), LI.getOrdering()));
return true;
}
bool IRTranslator::translateStore(const User &U, MachineIRBuilder &MIRBuilder) {
const StoreInst &SI = cast<StoreInst>(U);
-
- if (!TPC->isGlobalISelAbortEnabled() && SI.isAtomic())
- return false;
-
- assert(!SI.isAtomic() && "only non-atomic stores supported at the moment");
auto Flags = SI.isVolatile() ? MachineMemOperand::MOVolatile
: MachineMemOperand::MONone;
Flags |= MachineMemOperand::MOStore;
unsigned Val = getOrCreateVReg(*SI.getValueOperand());
unsigned Addr = getOrCreateVReg(*SI.getPointerOperand());
- LLT VTy{*SI.getValueOperand()->getType(), *DL},
- PTy{*SI.getPointerOperand()->getType(), *DL};
MIRBuilder.buildStore(
Val, Addr,
*MF->getMachineMemOperand(
MachinePointerInfo(SI.getPointerOperand()), Flags,
DL->getTypeStoreSize(SI.getValueOperand()->getType()),
- getMemOpAlignment(SI)));
+ getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSyncScopeID(),
+ SI.getOrdering()));
return true;
}
@@ -290,6 +374,15 @@ bool IRTranslator::translateExtractValue(const User &U,
Type *Int32Ty = Type::getInt32Ty(U.getContext());
SmallVector<Value *, 1> Indices;
+ // If Src is a single element ConstantStruct, translate extractvalue
+ // to that element to avoid inserting a cast instruction.
+ if (auto CS = dyn_cast<ConstantStruct>(Src))
+ if (CS->getNumOperands() == 1) {
+ unsigned Res = getOrCreateVReg(*CS->getOperand(0));
+ ValToVReg[&U] = Res;
+ return true;
+ }
+
// getIndexedOffsetInType is designed for GEPs, so the first index is the
// usual array element rather than looking into the actual aggregate.
Indices.push_back(ConstantInt::get(Int32Ty, 0));
@@ -305,7 +398,7 @@ bool IRTranslator::translateExtractValue(const User &U,
uint64_t Offset = 8 * DL->getIndexedOffsetInType(Src->getType(), Indices);
unsigned Res = getOrCreateVReg(U);
- MIRBuilder.buildExtract(Res, Offset, getOrCreateVReg(*Src));
+ MIRBuilder.buildExtract(Res, getOrCreateVReg(*Src), Offset);
return true;
}
@@ -331,29 +424,36 @@ bool IRTranslator::translateInsertValue(const User &U,
uint64_t Offset = 8 * DL->getIndexedOffsetInType(Src->getType(), Indices);
unsigned Res = getOrCreateVReg(U);
- const Value &Inserted = *U.getOperand(1);
- MIRBuilder.buildInsert(Res, getOrCreateVReg(*Src), getOrCreateVReg(Inserted),
- Offset);
+ unsigned Inserted = getOrCreateVReg(*U.getOperand(1));
+ MIRBuilder.buildInsert(Res, getOrCreateVReg(*Src), Inserted, Offset);
return true;
}
bool IRTranslator::translateSelect(const User &U,
MachineIRBuilder &MIRBuilder) {
- MIRBuilder.buildSelect(getOrCreateVReg(U), getOrCreateVReg(*U.getOperand(0)),
- getOrCreateVReg(*U.getOperand(1)),
- getOrCreateVReg(*U.getOperand(2)));
+ unsigned Res = getOrCreateVReg(U);
+ unsigned Tst = getOrCreateVReg(*U.getOperand(0));
+ unsigned Op0 = getOrCreateVReg(*U.getOperand(1));
+ unsigned Op1 = getOrCreateVReg(*U.getOperand(2));
+ MIRBuilder.buildSelect(Res, Tst, Op0, Op1);
return true;
}
bool IRTranslator::translateBitCast(const User &U,
MachineIRBuilder &MIRBuilder) {
- if (LLT{*U.getOperand(0)->getType(), *DL} == LLT{*U.getType(), *DL}) {
+ // If we're bitcasting to the source type, we can reuse the source vreg.
+ if (getLLTForType(*U.getOperand(0)->getType(), *DL) ==
+ getLLTForType(*U.getType(), *DL)) {
+ // Get the source vreg now, to avoid invalidating ValToVReg.
+ unsigned SrcReg = getOrCreateVReg(*U.getOperand(0));
unsigned &Reg = ValToVReg[&U];
+ // If we already assigned a vreg for this bitcast, we can't change that.
+ // Emit a copy to satisfy the users we already emitted.
if (Reg)
- MIRBuilder.buildCopy(Reg, getOrCreateVReg(*U.getOperand(0)));
+ MIRBuilder.buildCopy(Reg, SrcReg);
else
- Reg = getOrCreateVReg(*U.getOperand(0));
+ Reg = SrcReg;
return true;
}
return translateCast(TargetOpcode::G_BITCAST, U, MIRBuilder);
@@ -375,9 +475,10 @@ bool IRTranslator::translateGetElementPtr(const User &U,
Value &Op0 = *U.getOperand(0);
unsigned BaseReg = getOrCreateVReg(Op0);
- LLT PtrTy{*Op0.getType(), *DL};
- unsigned PtrSize = DL->getPointerSizeInBits(PtrTy.getAddressSpace());
- LLT OffsetTy = LLT::scalar(PtrSize);
+ Type *PtrIRTy = Op0.getType();
+ LLT PtrTy = getLLTForType(*PtrIRTy, *DL);
+ Type *OffsetIRTy = DL->getIntPtrType(PtrIRTy);
+ LLT OffsetTy = getLLTForType(*OffsetIRTy, *DL);
int64_t Offset = 0;
for (gep_type_iterator GTI = gep_type_begin(&U), E = gep_type_end(&U);
@@ -399,8 +500,8 @@ bool IRTranslator::translateGetElementPtr(const User &U,
if (Offset != 0) {
unsigned NewBaseReg = MRI->createGenericVirtualRegister(PtrTy);
- unsigned OffsetReg = MRI->createGenericVirtualRegister(OffsetTy);
- MIRBuilder.buildConstant(OffsetReg, Offset);
+ unsigned OffsetReg =
+ getOrCreateVReg(*ConstantInt::get(OffsetIRTy, Offset));
MIRBuilder.buildGEP(NewBaseReg, BaseReg, OffsetReg);
BaseReg = NewBaseReg;
@@ -408,8 +509,8 @@ bool IRTranslator::translateGetElementPtr(const User &U,
}
// N = N + Idx * ElementSize;
- unsigned ElementSizeReg = MRI->createGenericVirtualRegister(OffsetTy);
- MIRBuilder.buildConstant(ElementSizeReg, ElementSize);
+ unsigned ElementSizeReg =
+ getOrCreateVReg(*ConstantInt::get(OffsetIRTy, ElementSize));
unsigned IdxReg = getOrCreateVReg(*Idx);
if (MRI->getType(IdxReg) != OffsetTy) {
@@ -428,8 +529,7 @@ bool IRTranslator::translateGetElementPtr(const User &U,
}
if (Offset != 0) {
- unsigned OffsetReg = MRI->createGenericVirtualRegister(OffsetTy);
- MIRBuilder.buildConstant(OffsetReg, Offset);
+ unsigned OffsetReg = getOrCreateVReg(*ConstantInt::get(OffsetIRTy, Offset));
MIRBuilder.buildGEP(getOrCreateVReg(U), BaseReg, OffsetReg);
return true;
}
@@ -438,13 +538,12 @@ bool IRTranslator::translateGetElementPtr(const User &U,
return true;
}
-bool IRTranslator::translateMemcpy(const CallInst &CI,
- MachineIRBuilder &MIRBuilder) {
- LLT SizeTy{*CI.getArgOperand(2)->getType(), *DL};
- if (cast<PointerType>(CI.getArgOperand(0)->getType())->getAddressSpace() !=
- 0 ||
- cast<PointerType>(CI.getArgOperand(1)->getType())->getAddressSpace() !=
- 0 ||
+bool IRTranslator::translateMemfunc(const CallInst &CI,
+ MachineIRBuilder &MIRBuilder,
+ unsigned ID) {
+ LLT SizeTy = getLLTForType(*CI.getArgOperand(2)->getType(), *DL);
+ Type *DstTy = CI.getArgOperand(0)->getType();
+ if (cast<PointerType>(DstTy)->getAddressSpace() != 0 ||
SizeTy.getSizeInBits() != DL->getPointerSizeInBits(0))
return false;
@@ -454,14 +553,32 @@ bool IRTranslator::translateMemcpy(const CallInst &CI,
Args.emplace_back(getOrCreateVReg(*Arg), Arg->getType());
}
- MachineOperand Callee = MachineOperand::CreateES("memcpy");
+ const char *Callee;
+ switch (ID) {
+ case Intrinsic::memmove:
+ case Intrinsic::memcpy: {
+ Type *SrcTy = CI.getArgOperand(1)->getType();
+ if(cast<PointerType>(SrcTy)->getAddressSpace() != 0)
+ return false;
+ Callee = ID == Intrinsic::memcpy ? "memcpy" : "memmove";
+ break;
+ }
+ case Intrinsic::memset:
+ Callee = "memset";
+ break;
+ default:
+ return false;
+ }
- return CLI->lowerCall(MIRBuilder, Callee,
+ return CLI->lowerCall(MIRBuilder, CI.getCallingConv(),
+ MachineOperand::CreateES(Callee),
CallLowering::ArgInfo(0, CI.getType()), Args);
}
void IRTranslator::getStackGuard(unsigned DstReg,
MachineIRBuilder &MIRBuilder) {
+ const TargetRegisterInfo *TRI = MF->getSubtarget().getRegisterInfo();
+ MRI->setRegClass(DstReg, TRI->getPointerRegClass(*MF));
auto MIB = MIRBuilder.buildInstr(TargetOpcode::LOAD_STACK_GUARD);
MIB.addDef(DstReg);
@@ -482,7 +599,7 @@ void IRTranslator::getStackGuard(unsigned DstReg,
bool IRTranslator::translateOverflowIntrinsic(const CallInst &CI, unsigned Op,
MachineIRBuilder &MIRBuilder) {
- LLT Ty{*CI.getOperand(0)->getType(), *DL};
+ LLT Ty = getLLTForType(*CI.getOperand(0)->getType(), *DL);
LLT s1 = LLT::scalar(1);
unsigned Width = Ty.getSizeInBits();
unsigned Res = MRI->createGenericVirtualRegister(Ty);
@@ -494,12 +611,12 @@ bool IRTranslator::translateOverflowIntrinsic(const CallInst &CI, unsigned Op,
.addUse(getOrCreateVReg(*CI.getOperand(1)));
if (Op == TargetOpcode::G_UADDE || Op == TargetOpcode::G_USUBE) {
- unsigned Zero = MRI->createGenericVirtualRegister(s1);
- EntryBuilder.buildConstant(Zero, 0);
+ unsigned Zero = getOrCreateVReg(
+ *Constant::getNullValue(Type::getInt1Ty(CI.getContext())));
MIB.addUse(Zero);
}
- MIRBuilder.buildSequence(getOrCreateVReg(CI), Res, 0, Overflow, Width);
+ MIRBuilder.buildSequence(getOrCreateVReg(CI), {Res, Overflow}, {0, Width});
return true;
}
@@ -508,12 +625,83 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
switch (ID) {
default:
break;
- case Intrinsic::dbg_declare:
- case Intrinsic::dbg_value:
- // FIXME: these obviously need to be supported properly.
- MF->getProperties().set(
- MachineFunctionProperties::Property::FailedISel);
+ case Intrinsic::lifetime_start:
+ case Intrinsic::lifetime_end:
+ // Stack coloring is not enabled in O0 (which we care about now) so we can
+ // drop these. Make sure someone notices when we start compiling at higher
+ // opts though.
+ if (MF->getTarget().getOptLevel() != CodeGenOpt::None)
+ return false;
+ return true;
+ case Intrinsic::dbg_declare: {
+ const DbgDeclareInst &DI = cast<DbgDeclareInst>(CI);
+ assert(DI.getVariable() && "Missing variable");
+
+ const Value *Address = DI.getAddress();
+ if (!Address || isa<UndefValue>(Address)) {
+ DEBUG(dbgs() << "Dropping debug info for " << DI << "\n");
+ return true;
+ }
+
+ assert(DI.getVariable()->isValidLocationForIntrinsic(
+ MIRBuilder.getDebugLoc()) &&
+ "Expected inlined-at fields to agree");
+ auto AI = dyn_cast<AllocaInst>(Address);
+ if (AI && AI->isStaticAlloca()) {
+ // Static allocas are tracked at the MF level, no need for DBG_VALUE
+ // instructions (in fact, they get ignored if they *do* exist).
+ MF->setVariableDbgInfo(DI.getVariable(), DI.getExpression(),
+ getOrCreateFrameIndex(*AI), DI.getDebugLoc());
+ } else
+ MIRBuilder.buildDirectDbgValue(getOrCreateVReg(*Address),
+ DI.getVariable(), DI.getExpression());
+ return true;
+ }
+ case Intrinsic::vaend:
+ // No target I know of cares about va_end. Certainly no in-tree target
+ // does. Simplest intrinsic ever!
+ return true;
+ case Intrinsic::vastart: {
+ auto &TLI = *MF->getSubtarget().getTargetLowering();
+ Value *Ptr = CI.getArgOperand(0);
+ unsigned ListSize = TLI.getVaListSizeInBits(*DL) / 8;
+
+ MIRBuilder.buildInstr(TargetOpcode::G_VASTART)
+ .addUse(getOrCreateVReg(*Ptr))
+ .addMemOperand(MF->getMachineMemOperand(
+ MachinePointerInfo(Ptr), MachineMemOperand::MOStore, ListSize, 0));
+ return true;
+ }
+ case Intrinsic::dbg_value: {
+ // This form of DBG_VALUE is target-independent.
+ const DbgValueInst &DI = cast<DbgValueInst>(CI);
+ const Value *V = DI.getValue();
+ assert(DI.getVariable()->isValidLocationForIntrinsic(
+ MIRBuilder.getDebugLoc()) &&
+ "Expected inlined-at fields to agree");
+ if (!V) {
+ // Currently the optimizer can produce this; insert an undef to
+ // help debugging. Probably the optimizer should not do this.
+ MIRBuilder.buildIndirectDbgValue(0, DI.getOffset(), DI.getVariable(),
+ DI.getExpression());
+ } else if (const auto *CI = dyn_cast<Constant>(V)) {
+ MIRBuilder.buildConstDbgValue(*CI, DI.getOffset(), DI.getVariable(),
+ DI.getExpression());
+ } else {
+ unsigned Reg = getOrCreateVReg(*V);
+ // FIXME: This does not handle register-indirect values at offset 0. The
+ // direct/indirect thing shouldn't really be handled by something as
+ // implicit as reg+noreg vs reg+imm in the first place, but it seems
+ // pretty baked in right now.
+ if (DI.getOffset() != 0)
+ MIRBuilder.buildIndirectDbgValue(Reg, DI.getOffset(), DI.getVariable(),
+ DI.getExpression());
+ else
+ MIRBuilder.buildDirectDbgValue(Reg, DI.getVariable(),
+ DI.getExpression());
+ }
return true;
+ }
case Intrinsic::uadd_with_overflow:
return translateOverflowIntrinsic(CI, TargetOpcode::G_UADDE, MIRBuilder);
case Intrinsic::sadd_with_overflow:
@@ -526,8 +714,43 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
return translateOverflowIntrinsic(CI, TargetOpcode::G_UMULO, MIRBuilder);
case Intrinsic::smul_with_overflow:
return translateOverflowIntrinsic(CI, TargetOpcode::G_SMULO, MIRBuilder);
+ case Intrinsic::pow:
+ MIRBuilder.buildInstr(TargetOpcode::G_FPOW)
+ .addDef(getOrCreateVReg(CI))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(0)))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(1)));
+ return true;
+ case Intrinsic::exp:
+ MIRBuilder.buildInstr(TargetOpcode::G_FEXP)
+ .addDef(getOrCreateVReg(CI))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(0)));
+ return true;
+ case Intrinsic::exp2:
+ MIRBuilder.buildInstr(TargetOpcode::G_FEXP2)
+ .addDef(getOrCreateVReg(CI))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(0)));
+ return true;
+ case Intrinsic::log:
+ MIRBuilder.buildInstr(TargetOpcode::G_FLOG)
+ .addDef(getOrCreateVReg(CI))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(0)));
+ return true;
+ case Intrinsic::log2:
+ MIRBuilder.buildInstr(TargetOpcode::G_FLOG2)
+ .addDef(getOrCreateVReg(CI))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(0)));
+ return true;
+ case Intrinsic::fma:
+ MIRBuilder.buildInstr(TargetOpcode::G_FMA)
+ .addDef(getOrCreateVReg(CI))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(0)))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(1)))
+ .addUse(getOrCreateVReg(*CI.getArgOperand(2)));
+ return true;
case Intrinsic::memcpy:
- return translateMemcpy(CI, MIRBuilder);
+ case Intrinsic::memmove:
+ case Intrinsic::memset:
+ return translateMemfunc(CI, MIRBuilder, ID);
case Intrinsic::eh_typeid_for: {
GlobalValue *GV = ExtractTypeInfo(CI.getArgOperand(0));
unsigned Reg = getOrCreateVReg(CI);
@@ -546,7 +769,7 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
getStackGuard(getOrCreateVReg(CI), MIRBuilder);
return true;
case Intrinsic::stackprotector: {
- LLT PtrTy{*CI.getArgOperand(0)->getType(), *DL};
+ LLT PtrTy = getLLTForType(*CI.getArgOperand(0)->getType(), *DL);
unsigned GuardVal = MRI->createGenericVirtualRegister(PtrTy);
getStackGuard(GuardVal, MIRBuilder);
@@ -564,18 +787,41 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
return false;
}
+bool IRTranslator::translateInlineAsm(const CallInst &CI,
+ MachineIRBuilder &MIRBuilder) {
+ const InlineAsm &IA = cast<InlineAsm>(*CI.getCalledValue());
+ if (!IA.getConstraintString().empty())
+ return false;
+
+ unsigned ExtraInfo = 0;
+ if (IA.hasSideEffects())
+ ExtraInfo |= InlineAsm::Extra_HasSideEffects;
+ if (IA.getDialect() == InlineAsm::AD_Intel)
+ ExtraInfo |= InlineAsm::Extra_AsmDialect;
+
+ MIRBuilder.buildInstr(TargetOpcode::INLINEASM)
+ .addExternalSymbol(IA.getAsmString().c_str())
+ .addImm(ExtraInfo);
+
+ return true;
+}
+
bool IRTranslator::translateCall(const User &U, MachineIRBuilder &MIRBuilder) {
const CallInst &CI = cast<CallInst>(U);
auto TII = MF->getTarget().getIntrinsicInfo();
const Function *F = CI.getCalledFunction();
+ if (CI.isInlineAsm())
+ return translateInlineAsm(CI, MIRBuilder);
+
if (!F || !F->isIntrinsic()) {
unsigned Res = CI.getType()->isVoidTy() ? 0 : getOrCreateVReg(CI);
SmallVector<unsigned, 8> Args;
for (auto &Arg: CI.arg_operands())
Args.push_back(getOrCreateVReg(*Arg));
- return CLI->lowerCall(MIRBuilder, CI, Res, Args, [&]() {
+ MF->getFrameInfo().setHasCalls(true);
+ return CLI->lowerCall(MIRBuilder, &CI, Res, Args, [&]() {
return getOrCreateVReg(*CI.getCalledValue());
});
}
@@ -594,11 +840,26 @@ bool IRTranslator::translateCall(const User &U, MachineIRBuilder &MIRBuilder) {
MIRBuilder.buildIntrinsic(ID, Res, !CI.doesNotAccessMemory());
for (auto &Arg : CI.arg_operands()) {
- if (ConstantInt *CI = dyn_cast<ConstantInt>(Arg))
- MIB.addImm(CI->getSExtValue());
- else
- MIB.addUse(getOrCreateVReg(*Arg));
+ // Some intrinsics take metadata parameters. Reject them.
+ if (isa<MetadataAsValue>(Arg))
+ return false;
+ MIB.addUse(getOrCreateVReg(*Arg));
+ }
+
+ // Add a MachineMemOperand if it is a target mem intrinsic.
+ const TargetLowering &TLI = *MF->getSubtarget().getTargetLowering();
+ TargetLowering::IntrinsicInfo Info;
+ // TODO: Add a GlobalISel version of getTgtMemIntrinsic.
+ if (TLI.getTgtMemIntrinsic(Info, CI, ID)) {
+ MachineMemOperand::Flags Flags =
+ Info.vol ? MachineMemOperand::MOVolatile : MachineMemOperand::MONone;
+ Flags |=
+ Info.readMem ? MachineMemOperand::MOLoad : MachineMemOperand::MOStore;
+ uint64_t Size = Info.memVT.getSizeInBits() >> 3;
+ MIB.addMemOperand(MF->getMachineMemOperand(MachinePointerInfo(Info.ptrVal),
+ Flags, Size, Info.align));
}
+
return true;
}
@@ -610,7 +871,7 @@ bool IRTranslator::translateInvoke(const User &U,
const BasicBlock *ReturnBB = I.getSuccessor(0);
const BasicBlock *EHPadBB = I.getSuccessor(1);
- const Value *Callee(I.getCalledValue());
+ const Value *Callee = I.getCalledValue();
const Function *Fn = dyn_cast<Function>(Callee);
if (isa<InlineAsm>(Callee))
return false;
@@ -627,30 +888,30 @@ bool IRTranslator::translateInvoke(const User &U,
if (!isa<LandingPadInst>(EHPadBB->front()))
return false;
-
// Emit the actual call, bracketed by EH_LABELs so that the MF knows about
// the region covered by the try.
MCSymbol *BeginSymbol = Context.createTempSymbol();
MIRBuilder.buildInstr(TargetOpcode::EH_LABEL).addSym(BeginSymbol);
unsigned Res = I.getType()->isVoidTy() ? 0 : getOrCreateVReg(I);
- SmallVector<CallLowering::ArgInfo, 8> Args;
+ SmallVector<unsigned, 8> Args;
for (auto &Arg: I.arg_operands())
- Args.emplace_back(getOrCreateVReg(*Arg), Arg->getType());
+ Args.push_back(getOrCreateVReg(*Arg));
- if (!CLI->lowerCall(MIRBuilder, MachineOperand::CreateGA(Fn, 0),
- CallLowering::ArgInfo(Res, I.getType()), Args))
+ if (!CLI->lowerCall(MIRBuilder, &I, Res, Args,
+ [&]() { return getOrCreateVReg(*I.getCalledValue()); }))
return false;
MCSymbol *EndSymbol = Context.createTempSymbol();
MIRBuilder.buildInstr(TargetOpcode::EH_LABEL).addSym(EndSymbol);
// FIXME: track probabilities.
- MachineBasicBlock &EHPadMBB = getOrCreateBB(*EHPadBB),
- &ReturnMBB = getOrCreateBB(*ReturnBB);
+ MachineBasicBlock &EHPadMBB = getMBB(*EHPadBB),
+ &ReturnMBB = getMBB(*ReturnBB);
MF->addInvoke(&EHPadMBB, BeginSymbol, EndSymbol);
MIRBuilder.getMBB().addSuccessor(&ReturnMBB);
MIRBuilder.getMBB().addSuccessor(&EHPadMBB);
+ MIRBuilder.buildBr(ReturnMBB);
return true;
}
@@ -684,37 +945,161 @@ bool IRTranslator::translateLandingPad(const User &U,
MIRBuilder.buildInstr(TargetOpcode::EH_LABEL)
.addSym(MF->addLandingPad(&MBB));
+ LLT Ty = getLLTForType(*LP.getType(), *DL);
+ unsigned Undef = MRI->createGenericVirtualRegister(Ty);
+ MIRBuilder.buildUndef(Undef);
+
+ SmallVector<LLT, 2> Tys;
+ for (Type *Ty : cast<StructType>(LP.getType())->elements())
+ Tys.push_back(getLLTForType(*Ty, *DL));
+ assert(Tys.size() == 2 && "Only two-valued landingpads are supported");
+
// Mark exception register as live in.
- SmallVector<unsigned, 2> Regs;
- SmallVector<uint64_t, 2> Offsets;
- LLT p0 = LLT::pointer(0, DL->getPointerSizeInBits());
- if (unsigned Reg = TLI.getExceptionPointerRegister(PersonalityFn)) {
- unsigned VReg = MRI->createGenericVirtualRegister(p0);
- MIRBuilder.buildCopy(VReg, Reg);
- Regs.push_back(VReg);
- Offsets.push_back(0);
+ unsigned ExceptionReg = TLI.getExceptionPointerRegister(PersonalityFn);
+ if (!ExceptionReg)
+ return false;
+
+ MBB.addLiveIn(ExceptionReg);
+ unsigned VReg = MRI->createGenericVirtualRegister(Tys[0]),
+ Tmp = MRI->createGenericVirtualRegister(Ty);
+ MIRBuilder.buildCopy(VReg, ExceptionReg);
+ MIRBuilder.buildInsert(Tmp, Undef, VReg, 0);
+
+ unsigned SelectorReg = TLI.getExceptionSelectorRegister(PersonalityFn);
+ if (!SelectorReg)
+ return false;
+
+ MBB.addLiveIn(SelectorReg);
+
+ // N.b. the exception selector register always has pointer type and may not
+ // match the actual IR-level type in the landingpad so an extra cast is
+ // needed.
+ unsigned PtrVReg = MRI->createGenericVirtualRegister(Tys[0]);
+ MIRBuilder.buildCopy(PtrVReg, SelectorReg);
+
+ VReg = MRI->createGenericVirtualRegister(Tys[1]);
+ MIRBuilder.buildInstr(TargetOpcode::G_PTRTOINT).addDef(VReg).addUse(PtrVReg);
+ MIRBuilder.buildInsert(getOrCreateVReg(LP), Tmp, VReg,
+ Tys[0].getSizeInBits());
+ return true;
+}
+
+bool IRTranslator::translateAlloca(const User &U,
+ MachineIRBuilder &MIRBuilder) {
+ auto &AI = cast<AllocaInst>(U);
+
+ if (AI.isStaticAlloca()) {
+ unsigned Res = getOrCreateVReg(AI);
+ int FI = getOrCreateFrameIndex(AI);
+ MIRBuilder.buildFrameIndex(Res, FI);
+ return true;
}
- if (unsigned Reg = TLI.getExceptionSelectorRegister(PersonalityFn)) {
- unsigned VReg = MRI->createGenericVirtualRegister(p0);
- MIRBuilder.buildCopy(VReg, Reg);
- Regs.push_back(VReg);
- Offsets.push_back(p0.getSizeInBits());
+ // Now we're in the harder dynamic case.
+ Type *Ty = AI.getAllocatedType();
+ unsigned Align =
+ std::max((unsigned)DL->getPrefTypeAlignment(Ty), AI.getAlignment());
+
+ unsigned NumElts = getOrCreateVReg(*AI.getArraySize());
+
+ Type *IntPtrIRTy = DL->getIntPtrType(AI.getType());
+ LLT IntPtrTy = getLLTForType(*IntPtrIRTy, *DL);
+ if (MRI->getType(NumElts) != IntPtrTy) {
+ unsigned ExtElts = MRI->createGenericVirtualRegister(IntPtrTy);
+ MIRBuilder.buildZExtOrTrunc(ExtElts, NumElts);
+ NumElts = ExtElts;
+ }
+
+ unsigned AllocSize = MRI->createGenericVirtualRegister(IntPtrTy);
+ unsigned TySize =
+ getOrCreateVReg(*ConstantInt::get(IntPtrIRTy, -DL->getTypeAllocSize(Ty)));
+ MIRBuilder.buildMul(AllocSize, NumElts, TySize);
+
+ LLT PtrTy = getLLTForType(*AI.getType(), *DL);
+ auto &TLI = *MF->getSubtarget().getTargetLowering();
+ unsigned SPReg = TLI.getStackPointerRegisterToSaveRestore();
+
+ unsigned SPTmp = MRI->createGenericVirtualRegister(PtrTy);
+ MIRBuilder.buildCopy(SPTmp, SPReg);
+
+ unsigned AllocTmp = MRI->createGenericVirtualRegister(PtrTy);
+ MIRBuilder.buildGEP(AllocTmp, SPTmp, AllocSize);
+
+ // Handle alignment. We have to realign if the allocation granule was smaller
+ // than stack alignment, or the specific alloca requires more than stack
+ // alignment.
+ unsigned StackAlign =
+ MF->getSubtarget().getFrameLowering()->getStackAlignment();
+ Align = std::max(Align, StackAlign);
+ if (Align > StackAlign || DL->getTypeAllocSize(Ty) % StackAlign != 0) {
+ // Round the size of the allocation up to the stack alignment size
+ // by adding SA-1 to the size. This doesn't overflow because we're computing
+ // an address inside an alloca.
+ unsigned AlignedAlloc = MRI->createGenericVirtualRegister(PtrTy);
+ MIRBuilder.buildPtrMask(AlignedAlloc, AllocTmp, Log2_32(Align));
+ AllocTmp = AlignedAlloc;
}
- MIRBuilder.buildSequence(getOrCreateVReg(LP), Regs, Offsets);
+ MIRBuilder.buildCopy(SPReg, AllocTmp);
+ MIRBuilder.buildCopy(getOrCreateVReg(AI), AllocTmp);
+
+ MF->getFrameInfo().CreateVariableSizedObject(Align ? Align : 1, &AI);
+ assert(MF->getFrameInfo().hasVarSizedObjects());
return true;
}
-bool IRTranslator::translateStaticAlloca(const AllocaInst &AI,
- MachineIRBuilder &MIRBuilder) {
- if (!TPC->isGlobalISelAbortEnabled() && !AI.isStaticAlloca())
- return false;
+bool IRTranslator::translateVAArg(const User &U, MachineIRBuilder &MIRBuilder) {
+ // FIXME: We may need more info about the type. Because of how LLT works,
+ // we're completely discarding the i64/double distinction here (amongst
+ // others). Fortunately the ABIs I know of where that matters don't use va_arg
+ // anyway but that's not guaranteed.
+ MIRBuilder.buildInstr(TargetOpcode::G_VAARG)
+ .addDef(getOrCreateVReg(U))
+ .addUse(getOrCreateVReg(*U.getOperand(0)))
+ .addImm(DL->getABITypeAlignment(U.getType()));
+ return true;
+}
- assert(AI.isStaticAlloca() && "only handle static allocas now");
- unsigned Res = getOrCreateVReg(AI);
- int FI = getOrCreateFrameIndex(AI);
- MIRBuilder.buildFrameIndex(Res, FI);
+bool IRTranslator::translateInsertElement(const User &U,
+ MachineIRBuilder &MIRBuilder) {
+ // If it is a <1 x Ty> vector, use the scalar as it is
+ // not a legal vector type in LLT.
+ if (U.getType()->getVectorNumElements() == 1) {
+ unsigned Elt = getOrCreateVReg(*U.getOperand(1));
+ ValToVReg[&U] = Elt;
+ return true;
+ }
+ unsigned Res = getOrCreateVReg(U);
+ unsigned Val = getOrCreateVReg(*U.getOperand(0));
+ unsigned Elt = getOrCreateVReg(*U.getOperand(1));
+ unsigned Idx = getOrCreateVReg(*U.getOperand(2));
+ MIRBuilder.buildInsertVectorElement(Res, Val, Elt, Idx);
+ return true;
+}
+
+bool IRTranslator::translateExtractElement(const User &U,
+ MachineIRBuilder &MIRBuilder) {
+ // If it is a <1 x Ty> vector, use the scalar as it is
+ // not a legal vector type in LLT.
+ if (U.getOperand(0)->getType()->getVectorNumElements() == 1) {
+ unsigned Elt = getOrCreateVReg(*U.getOperand(0));
+ ValToVReg[&U] = Elt;
+ return true;
+ }
+ unsigned Res = getOrCreateVReg(U);
+ unsigned Val = getOrCreateVReg(*U.getOperand(0));
+ unsigned Idx = getOrCreateVReg(*U.getOperand(1));
+ MIRBuilder.buildExtractVectorElement(Res, Val, Idx);
+ return true;
+}
+
+bool IRTranslator::translateShuffleVector(const User &U,
+ MachineIRBuilder &MIRBuilder) {
+ MIRBuilder.buildInstr(TargetOpcode::G_SHUFFLE_VECTOR)
+ .addDef(getOrCreateVReg(U))
+ .addUse(getOrCreateVReg(*U.getOperand(0)))
+ .addUse(getOrCreateVReg(*U.getOperand(1)))
+ .addUse(getOrCreateVReg(*U.getOperand(2)));
return true;
}
@@ -736,11 +1121,21 @@ void IRTranslator::finishPendingPhis() {
// won't create extra control flow here, otherwise we need to find the
// dominating predecessor here (or perhaps force the weirder IRTranslators
// to provide a simple boundary).
+ SmallSet<const BasicBlock *, 4> HandledPreds;
+
for (unsigned i = 0; i < PI->getNumIncomingValues(); ++i) {
- assert(BBToMBB[PI->getIncomingBlock(i)]->isSuccessor(MIB->getParent()) &&
- "I appear to have misunderstood Machine PHIs");
- MIB.addUse(getOrCreateVReg(*PI->getIncomingValue(i)));
- MIB.addMBB(BBToMBB[PI->getIncomingBlock(i)]);
+ auto IRPred = PI->getIncomingBlock(i);
+ if (HandledPreds.count(IRPred))
+ continue;
+
+ HandledPreds.insert(IRPred);
+ unsigned ValReg = getOrCreateVReg(*PI->getIncomingValue(i));
+ for (auto Pred : getMachinePredBBs({IRPred, PI->getParent()})) {
+ assert(Pred->isSuccessor(MIB->getParent()) &&
+ "incorrect CFG at MachineBasicBlock level");
+ MIB.addUse(ValReg);
+ MIB.addMBB(Pred);
+ }
}
}
}
@@ -752,9 +1147,7 @@ bool IRTranslator::translate(const Instruction &Inst) {
case Instruction::OPCODE: return translate##OPCODE(Inst, CurBuilder);
#include "llvm/IR/Instruction.def"
default:
- if (!TPC->isGlobalISelAbortEnabled())
- return false;
- llvm_unreachable("unknown opcode");
+ return false;
}
}
@@ -764,25 +1157,68 @@ bool IRTranslator::translate(const Constant &C, unsigned Reg) {
else if (auto CF = dyn_cast<ConstantFP>(&C))
EntryBuilder.buildFConstant(Reg, *CF);
else if (isa<UndefValue>(C))
- EntryBuilder.buildInstr(TargetOpcode::IMPLICIT_DEF).addDef(Reg);
+ EntryBuilder.buildUndef(Reg);
else if (isa<ConstantPointerNull>(C))
EntryBuilder.buildConstant(Reg, 0);
else if (auto GV = dyn_cast<GlobalValue>(&C))
EntryBuilder.buildGlobalValue(Reg, GV);
- else if (auto CE = dyn_cast<ConstantExpr>(&C)) {
+ else if (auto CAZ = dyn_cast<ConstantAggregateZero>(&C)) {
+ if (!CAZ->getType()->isVectorTy())
+ return false;
+ // Return the scalar if it is a <1 x Ty> vector.
+ if (CAZ->getNumElements() == 1)
+ return translate(*CAZ->getElementValue(0u), Reg);
+ std::vector<unsigned> Ops;
+ for (unsigned i = 0; i < CAZ->getNumElements(); ++i) {
+ Constant &Elt = *CAZ->getElementValue(i);
+ Ops.push_back(getOrCreateVReg(Elt));
+ }
+ EntryBuilder.buildMerge(Reg, Ops);
+ } else if (auto CV = dyn_cast<ConstantDataVector>(&C)) {
+ // Return the scalar if it is a <1 x Ty> vector.
+ if (CV->getNumElements() == 1)
+ return translate(*CV->getElementAsConstant(0), Reg);
+ std::vector<unsigned> Ops;
+ for (unsigned i = 0; i < CV->getNumElements(); ++i) {
+ Constant &Elt = *CV->getElementAsConstant(i);
+ Ops.push_back(getOrCreateVReg(Elt));
+ }
+ EntryBuilder.buildMerge(Reg, Ops);
+ } else if (auto CE = dyn_cast<ConstantExpr>(&C)) {
switch(CE->getOpcode()) {
#define HANDLE_INST(NUM, OPCODE, CLASS) \
case Instruction::OPCODE: return translate##OPCODE(*CE, EntryBuilder);
#include "llvm/IR/Instruction.def"
default:
- if (!TPC->isGlobalISelAbortEnabled())
- return false;
- llvm_unreachable("unknown opcode");
+ return false;
+ }
+ } else if (auto CS = dyn_cast<ConstantStruct>(&C)) {
+ // Return the element if it is a single element ConstantStruct.
+ if (CS->getNumOperands() == 1) {
+ unsigned EltReg = getOrCreateVReg(*CS->getOperand(0));
+ EntryBuilder.buildCast(Reg, EltReg);
+ return true;
+ }
+ SmallVector<unsigned, 4> Ops;
+ SmallVector<uint64_t, 4> Indices;
+ uint64_t Offset = 0;
+ for (unsigned i = 0; i < CS->getNumOperands(); ++i) {
+ unsigned OpReg = getOrCreateVReg(*CS->getOperand(i));
+ Ops.push_back(OpReg);
+ Indices.push_back(Offset);
+ Offset += MRI->getType(OpReg).getSizeInBits();
+ }
+ EntryBuilder.buildSequence(Reg, Ops, Indices);
+ } else if (auto CV = dyn_cast<ConstantVector>(&C)) {
+ if (CV->getNumOperands() == 1)
+ return translate(*CV->getOperand(0), Reg);
+ SmallVector<unsigned, 4> Ops;
+ for (unsigned i = 0; i < CV->getNumOperands(); ++i) {
+ Ops.push_back(getOrCreateVReg(*CV->getOperand(i)));
}
- } else if (!TPC->isGlobalISelAbortEnabled())
+ EntryBuilder.buildMerge(Reg, Ops);
+ } else
return false;
- else
- llvm_unreachable("unhandled constant kind");
return true;
}
@@ -793,7 +1229,12 @@ void IRTranslator::finalizeFunction() {
PendingPHIs.clear();
ValToVReg.clear();
FrameIndices.clear();
- Constants.clear();
+ MachinePreds.clear();
+ // MachineIRBuilder::DebugLoc can outlive the DILocation it holds. Clear it
+ // to avoid accessing free'd memory (in runOnMachineFunction) and to avoid
+ // destroying it twice (in ~IRTranslator() and ~LLVMContext())
+ EntryBuilder = MachineIRBuilder();
+ CurBuilder = MachineIRBuilder();
}
bool IRTranslator::runOnMachineFunction(MachineFunction &CurMF) {
@@ -807,85 +1248,97 @@ bool IRTranslator::runOnMachineFunction(MachineFunction &CurMF) {
MRI = &MF->getRegInfo();
DL = &F.getParent()->getDataLayout();
TPC = &getAnalysis<TargetPassConfig>();
+ ORE = llvm::make_unique<OptimizationRemarkEmitter>(&F);
assert(PendingPHIs.empty() && "stale PHIs");
- // Setup a separate basic-block for the arguments and constants, falling
- // through to the IR-level Function's entry block.
+ // Release the per-function state when we return, whether we succeeded or not.
+ auto FinalizeOnReturn = make_scope_exit([this]() { finalizeFunction(); });
+
+ // Setup a separate basic-block for the arguments and constants
MachineBasicBlock *EntryBB = MF->CreateMachineBasicBlock();
MF->push_back(EntryBB);
- EntryBB->addSuccessor(&getOrCreateBB(F.front()));
EntryBuilder.setMBB(*EntryBB);
+ // Create all blocks, in IR order, to preserve the layout.
+ for (const BasicBlock &BB: F) {
+ auto *&MBB = BBToMBB[&BB];
+
+ MBB = MF->CreateMachineBasicBlock(&BB);
+ MF->push_back(MBB);
+
+ if (BB.hasAddressTaken())
+ MBB->setHasAddressTaken();
+ }
+
+ // Make our arguments/constants entry block fallthrough to the IR entry block.
+ EntryBB->addSuccessor(&getMBB(F.front()));
+
// Lower the actual args into this basic block.
SmallVector<unsigned, 8> VRegArgs;
for (const Argument &Arg: F.args())
VRegArgs.push_back(getOrCreateVReg(Arg));
- bool Succeeded = CLI->lowerFormalArguments(EntryBuilder, F, VRegArgs);
- if (!Succeeded) {
- if (!TPC->isGlobalISelAbortEnabled()) {
- MF->getProperties().set(
- MachineFunctionProperties::Property::FailedISel);
- finalizeFunction();
- return false;
- }
- report_fatal_error("Unable to lower arguments");
+ if (!CLI->lowerFormalArguments(EntryBuilder, F, VRegArgs)) {
+ OptimizationRemarkMissed R("gisel-irtranslator", "GISelFailure",
+ MF->getFunction()->getSubprogram(),
+ &MF->getFunction()->getEntryBlock());
+ R << "unable to lower arguments: " << ore::NV("Prototype", F.getType());
+ reportTranslationError(*MF, *TPC, *ORE, R);
+ return false;
}
// And translate the function!
for (const BasicBlock &BB: F) {
- MachineBasicBlock &MBB = getOrCreateBB(BB);
+ MachineBasicBlock &MBB = getMBB(BB);
// Set the insertion point of all the following translations to
// the end of this basic block.
CurBuilder.setMBB(MBB);
for (const Instruction &Inst: BB) {
- Succeeded &= translate(Inst);
- if (!Succeeded) {
- if (TPC->isGlobalISelAbortEnabled())
- reportTranslationError(Inst, "unable to translate instruction");
- MF->getProperties().set(
- MachineFunctionProperties::Property::FailedISel);
- break;
- }
- }
- }
-
- if (Succeeded) {
- finishPendingPhis();
-
- // Now that the MachineFrameInfo has been configured, no further changes to
- // the reserved registers are possible.
- MRI->freezeReservedRegs(*MF);
-
- // Merge the argument lowering and constants block with its single
- // successor, the LLVM-IR entry block. We want the basic block to
- // be maximal.
- assert(EntryBB->succ_size() == 1 &&
- "Custom BB used for lowering should have only one successor");
- // Get the successor of the current entry block.
- MachineBasicBlock &NewEntryBB = **EntryBB->succ_begin();
- assert(NewEntryBB.pred_size() == 1 &&
- "LLVM-IR entry block has a predecessor!?");
- // Move all the instruction from the current entry block to the
- // new entry block.
- NewEntryBB.splice(NewEntryBB.begin(), EntryBB, EntryBB->begin(),
- EntryBB->end());
-
- // Update the live-in information for the new entry block.
- for (const MachineBasicBlock::RegisterMaskPair &LiveIn : EntryBB->liveins())
- NewEntryBB.addLiveIn(LiveIn);
- NewEntryBB.sortUniqueLiveIns();
+ if (translate(Inst))
+ continue;
- // Get rid of the now empty basic block.
- EntryBB->removeSuccessor(&NewEntryBB);
- MF->remove(EntryBB);
+ std::string InstStrStorage;
+ raw_string_ostream InstStr(InstStrStorage);
+ InstStr << Inst;
- assert(&MF->front() == &NewEntryBB &&
- "New entry wasn't next in the list of basic block!");
+ OptimizationRemarkMissed R("gisel-irtranslator", "GISelFailure",
+ Inst.getDebugLoc(), &BB);
+ R << "unable to translate instruction: " << ore::NV("Opcode", &Inst)
+ << ": '" << InstStr.str() << "'";
+ reportTranslationError(*MF, *TPC, *ORE, R);
+ return false;
+ }
}
- finalizeFunction();
+ finishPendingPhis();
+
+ // Merge the argument lowering and constants block with its single
+ // successor, the LLVM-IR entry block. We want the basic block to
+ // be maximal.
+ assert(EntryBB->succ_size() == 1 &&
+ "Custom BB used for lowering should have only one successor");
+ // Get the successor of the current entry block.
+ MachineBasicBlock &NewEntryBB = **EntryBB->succ_begin();
+ assert(NewEntryBB.pred_size() == 1 &&
+ "LLVM-IR entry block has a predecessor!?");
+ // Move all the instruction from the current entry block to the
+ // new entry block.
+ NewEntryBB.splice(NewEntryBB.begin(), EntryBB, EntryBB->begin(),
+ EntryBB->end());
+
+ // Update the live-in information for the new entry block.
+ for (const MachineBasicBlock::RegisterMaskPair &LiveIn : EntryBB->liveins())
+ NewEntryBB.addLiveIn(LiveIn);
+ NewEntryBB.sortUniqueLiveIns();
+
+ // Get rid of the now empty basic block.
+ EntryBB->removeSuccessor(&NewEntryBB);
+ MF->remove(EntryBB);
+ MF->DeleteMachineBasicBlock(EntryBB);
+
+ assert(&MF->front() == &NewEntryBB &&
+ "New entry wasn't next in the list of basic block!");
return false;
}