path: root/contrib/llvm/tools/clang/lib/CodeGen/CGStmt.cpp
author    dim <dim@FreeBSD.org>  2016-12-26 20:36:37 +0000
committer dim <dim@FreeBSD.org>  2016-12-26 20:36:37 +0000
commit    06210ae42d418d50d8d9365d5c9419308ae9e7ee (patch)
tree      ab60b4cdd6e430dda1f292a46a77ddb744723f31 /contrib/llvm/tools/clang/lib/CodeGen/CGStmt.cpp
parent    2dd166267f53df1c3748b4325d294b9b839de74b (diff)
download  FreeBSD-src-06210ae42d418d50d8d9365d5c9419308ae9e7ee.zip
          FreeBSD-src-06210ae42d418d50d8d9365d5c9419308ae9e7ee.tar.gz
MFC r309124:
Upgrade our copies of clang, llvm, lldb, compiler-rt and libc++ to 3.9.0
release, and add lld 3.9.0.  Also completely revamp the build system for
clang, llvm, lldb and their related tools.

Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.

Release notes for llvm, clang and lld are available here:
<http://llvm.org/releases/3.9.0/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/clang/docs/ReleaseNotes.html>
<http://llvm.org/releases/3.9.0/tools/lld/docs/ReleaseNotes.html>

Thanks to Ed Maste, Bryan Drewery, Andrew Turner, Antoine Brodin and
Jan Beich for their help.

Relnotes: yes

MFC r309147:

Pull in r282174 from upstream llvm trunk (by Krzysztof Parzyszek):

  [PPC] Set SP after loading data from stack frame, if no red zone is
  present

  Follow-up to r280705: Make sure that the SP is only restored after
  all data is loaded from the stack frame, if there is no red zone.

  This completes the fix for https://llvm.org/bugs/show_bug.cgi?id=26519.

  Differential Revision: https://reviews.llvm.org/D24466

Reported by: Mark Millard
PR: 214433

MFC r309149:

Pull in r283060 from upstream llvm trunk (by Hal Finkel):

  [PowerPC] Refactor soft-float support, and enable PPC64 soft float

  This change enables soft-float for PowerPC64, and also makes
  soft-float disable all vector instruction sets for both 32-bit and
  64-bit modes.  This latter part is necessary because the PPC backend
  canonicalizes many Altivec vector types to floating-point types, and
  so soft-float breaks scalarization support for many operations.  Both
  for embedded targets and for operating-system kernels desiring
  soft-float support, it seems reasonable that disabling hardware
  floating-point also disables vector instructions (embedded targets
  without hardware floating point support are unlikely to have Altivec,
  etc. and operating system kernels desiring not to use floating-point
  registers to lower syscall cost are unlikely to want to use vector
  registers either).  If someone needs this to work, we'll need to
  change the fact that we promote many Altivec operations to act on
  v4f32.

  To make it possible to disable Altivec when soft-float is enabled,
  hardware floating-point support needs to be expressed as a positive
  feature, like the others, and not a negative feature, because target
  features cannot have dependencies on the disabling of some other
  feature.  So +soft-float has now become -hard-float.

  Fixes PR26970.

Pull in r283061 from upstream clang trunk (by Hal Finkel):

  [PowerPC] Enable soft-float for PPC64, and +soft-float -> -hard-float

  Enable soft-float support on PPC64, as the backend now supports it.
  Also, the backend now uses -hard-float instead of +soft-float, so set
  the target features accordingly.

  Fixes PR26970.

Reported by: Mark Millard
PR: 214433

MFC r309212:

Add a few missed clang 3.9.0 files to OptionalObsoleteFiles.

MFC r309262:

Fix packaging for clang, lldb and lld 3.9.0

During the upgrade of clang/llvm etc to 3.9.0 in r309124, the PACKAGE
directive in the usr.bin/clang/*.mk files got dropped accidentally.

Restore it, with a few minor changes and additions:
* Correct license in clang.ucl to NCSA
* Add PACKAGE=clang for clang and most of the "ll" tools
* Put lldb in its own package
* Put lld in its own package

Reviewed by: gjb, jmallett
Differential Revision: https://reviews.freebsd.org/D8666

MFC r309656:

During the bootstrap phase, when building the minimal llvm library on
PowerPC, add lib/Support/Atomic.cpp.  This is needed because upstream
llvm revision r271821 disabled the use of std::call_once, which causes
some fallback functions from Atomic.cpp to be used instead.

Reported by: Mark Millard
PR: 214902

MFC r309835:

Tentatively apply https://reviews.llvm.org/D18730 to work around gcc PR
70528 (bogus error: constructor required before non-static data member).
This should fix buildworld with the external gcc package.

Reported by: https://jenkins.freebsd.org/job/FreeBSD_HEAD_amd64_gcc/

MFC r310194:

Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to
3.9.1 release.

Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
support to build; see UPDATING for more information.

Release notes for llvm, clang and lld will be available here:
<http://releases.llvm.org/3.9.1/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/clang/docs/ReleaseNotes.html>
<http://releases.llvm.org/3.9.1/tools/lld/docs/ReleaseNotes.html>

Relnotes: yes
Diffstat (limited to 'contrib/llvm/tools/clang/lib/CodeGen/CGStmt.cpp')
-rw-r--r--  contrib/llvm/tools/clang/lib/CodeGen/CGStmt.cpp  139
1 file changed, 114 insertions, 25 deletions
diff --git a/contrib/llvm/tools/clang/lib/CodeGen/CGStmt.cpp b/contrib/llvm/tools/clang/lib/CodeGen/CGStmt.cpp
index cc4fa2e..d815863 100644
--- a/contrib/llvm/tools/clang/lib/CodeGen/CGStmt.cpp
+++ b/contrib/llvm/tools/clang/lib/CodeGen/CGStmt.cpp
@@ -256,15 +256,45 @@ void CodeGenFunction::EmitStmt(const Stmt *S) {
case Stmt::OMPTargetDataDirectiveClass:
EmitOMPTargetDataDirective(cast<OMPTargetDataDirective>(*S));
break;
+ case Stmt::OMPTargetEnterDataDirectiveClass:
+ EmitOMPTargetEnterDataDirective(cast<OMPTargetEnterDataDirective>(*S));
+ break;
+ case Stmt::OMPTargetExitDataDirectiveClass:
+ EmitOMPTargetExitDataDirective(cast<OMPTargetExitDataDirective>(*S));
+ break;
+ case Stmt::OMPTargetParallelDirectiveClass:
+ EmitOMPTargetParallelDirective(cast<OMPTargetParallelDirective>(*S));
+ break;
+ case Stmt::OMPTargetParallelForDirectiveClass:
+ EmitOMPTargetParallelForDirective(cast<OMPTargetParallelForDirective>(*S));
+ break;
case Stmt::OMPTaskLoopDirectiveClass:
EmitOMPTaskLoopDirective(cast<OMPTaskLoopDirective>(*S));
break;
case Stmt::OMPTaskLoopSimdDirectiveClass:
EmitOMPTaskLoopSimdDirective(cast<OMPTaskLoopSimdDirective>(*S));
break;
-case Stmt::OMPDistributeDirectiveClass:
+ case Stmt::OMPDistributeDirectiveClass:
EmitOMPDistributeDirective(cast<OMPDistributeDirective>(*S));
- break;
+ break;
+ case Stmt::OMPTargetUpdateDirectiveClass:
+ EmitOMPTargetUpdateDirective(cast<OMPTargetUpdateDirective>(*S));
+ break;
+ case Stmt::OMPDistributeParallelForDirectiveClass:
+ EmitOMPDistributeParallelForDirective(
+ cast<OMPDistributeParallelForDirective>(*S));
+ break;
+ case Stmt::OMPDistributeParallelForSimdDirectiveClass:
+ EmitOMPDistributeParallelForSimdDirective(
+ cast<OMPDistributeParallelForSimdDirective>(*S));
+ break;
+ case Stmt::OMPDistributeSimdDirectiveClass:
+ EmitOMPDistributeSimdDirective(cast<OMPDistributeSimdDirective>(*S));
+ break;
+ case Stmt::OMPTargetParallelForSimdDirectiveClass:
+ EmitOMPTargetParallelForSimdDirective(
+ cast<OMPTargetParallelForSimdDirective>(*S));
+ break;
}
}
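The statement classes added to the dispatch above correspond to OpenMP 4.5 combined target and distribute constructs. The sketch below is illustrative only and not part of the patch; it shows C++ source (compiled with -fopenmp) of the kind that reaches several of the new cases, with placeholder names a, b and n.

    // Illustrative only: OpenMP 4.5 constructs dispatched by the new cases.
    void scale(float *a, const float *b, int n) {
      // OMPTargetEnterDataDirectiveClass
      #pragma omp target enter data map(to: b[0:n]) map(alloc: a[0:n])

      // OMPTargetParallelForDirectiveClass
      #pragma omp target parallel for
      for (int i = 0; i < n; ++i)
        a[i] = 2.0f * b[i];

      // OMPTargetUpdateDirectiveClass
      #pragma omp target update from(a[0:n])

      // OMPTargetExitDataDirectiveClass
      #pragma omp target exit data map(from: a[0:n]) map(release: b[0:n])
    }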
@@ -542,13 +572,17 @@ void CodeGenFunction::EmitIfStmt(const IfStmt &S) {
// unequal to 0. The condition must be a scalar type.
LexicalScope ConditionScope(*this, S.getCond()->getSourceRange());
+ if (S.getInit())
+ EmitStmt(S.getInit());
+
if (S.getConditionVariable())
EmitAutoVarDecl(*S.getConditionVariable());
// If the condition constant folds and can be elided, try to avoid emitting
// the condition and the dead arm of the if/else.
bool CondConstant;
- if (ConstantFoldsToSimpleInteger(S.getCond(), CondConstant)) {
+ if (ConstantFoldsToSimpleInteger(S.getCond(), CondConstant,
+ S.isConstexpr())) {
// Figure out which block (then or else) is executed.
const Stmt *Executed = S.getThen();
const Stmt *Skipped = S.getElse();
@@ -557,7 +591,7 @@ void CodeGenFunction::EmitIfStmt(const IfStmt &S) {
// If the skipped block has no labels in it, just emit the executed block.
// This avoids emitting dead code and simplifies the CFG substantially.
- if (!ContainsLabel(Skipped)) {
+ if (S.isConstexpr() || !ContainsLabel(Skipped)) {
if (CondConstant)
incrementProfileCounter(&S);
if (Executed) {
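The two hunks above add handling for C++1z if-statements with an init-statement and for 'if constexpr', whose discarded arm is dropped even when it contains labels. A minimal sketch of source exercising both, assuming -std=c++1z; it is illustrative only and not taken from the patch.

    // Illustrative only: C++1z 'if' forms handled by the changes above.
    #include <map>
    #include <string>

    int lookup(const std::map<std::string, int> &m) {
      // Init-statement: emitted via S.getInit() before the condition.
      if (auto it = m.find("answer"); it != m.end())
        return it->second;

      // 'if constexpr': the dead arm is skipped even if it has labels.
      if constexpr (sizeof(void *) == 8)
        return 64;
      else
        return 32;
    }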
@@ -617,7 +651,8 @@ void CodeGenFunction::EmitWhileStmt(const WhileStmt &S,
JumpDest LoopHeader = getJumpDestInCurrentScope("while.cond");
EmitBlock(LoopHeader.getBlock());
- LoopStack.push(LoopHeader.getBlock(), CGM.getContext(), WhileAttrs);
+ LoopStack.push(LoopHeader.getBlock(), CGM.getContext(), WhileAttrs,
+ Builder.getCurrentDebugLocation());
// Create an exit block for when the condition fails, which will
// also become the break target.
@@ -708,7 +743,8 @@ void CodeGenFunction::EmitDoStmt(const DoStmt &S,
// Emit the body of the loop.
llvm::BasicBlock *LoopBody = createBasicBlock("do.body");
- LoopStack.push(LoopBody, CGM.getContext(), DoAttrs);
+ LoopStack.push(LoopBody, CGM.getContext(), DoAttrs,
+ Builder.getCurrentDebugLocation());
EmitBlockWithFallThrough(LoopBody, &S);
{
@@ -760,6 +796,8 @@ void CodeGenFunction::EmitForStmt(const ForStmt &S,
LexicalScope ForScope(*this, S.getSourceRange());
+ llvm::DebugLoc DL = Builder.getCurrentDebugLocation();
+
// Evaluate the first part before the loop.
if (S.getInit())
EmitStmt(S.getInit());
@@ -771,7 +809,7 @@ void CodeGenFunction::EmitForStmt(const ForStmt &S,
llvm::BasicBlock *CondBlock = Continue.getBlock();
EmitBlock(CondBlock);
- LoopStack.push(CondBlock, CGM.getContext(), ForAttrs);
+ LoopStack.push(CondBlock, CGM.getContext(), ForAttrs, DL);
// If the for loop doesn't have an increment we can just use the
// condition as the continue block. Otherwise we'll need to create
@@ -856,9 +894,12 @@ CodeGenFunction::EmitCXXForRangeStmt(const CXXForRangeStmt &S,
LexicalScope ForScope(*this, S.getSourceRange());
+ llvm::DebugLoc DL = Builder.getCurrentDebugLocation();
+
// Evaluate the first pieces before the loop.
EmitStmt(S.getRangeStmt());
- EmitStmt(S.getBeginEndStmt());
+ EmitStmt(S.getBeginStmt());
+ EmitStmt(S.getEndStmt());
// Start the loop with a block that tests the condition.
// If there's an increment, the continue scope will be overwritten
@@ -866,7 +907,7 @@ CodeGenFunction::EmitCXXForRangeStmt(const CXXForRangeStmt &S,
llvm::BasicBlock *CondBlock = createBasicBlock("for.cond");
EmitBlock(CondBlock);
- LoopStack.push(CondBlock, CGM.getContext(), ForAttrs);
+ LoopStack.push(CondBlock, CGM.getContext(), ForAttrs, DL);
// If there are any cleanups between here and the loop-exit scope,
// create a block to stage a loop exit along.
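The range-for hunk above splits the old combined begin/end declaration into two separate statements, presumably so that the begin and end expressions may have different types (the generalized C++1z range-based for). The sketch below is illustrative only; the type and member names are placeholders, not part of the patch.

    // Illustrative only: a range whose begin() and end() return different
    // types, which requires the begin/end declarations to be emitted as
    // two separate statements.
    struct Sentinel {};
    struct Counter {
      int i;
      int operator*() const { return i; }
      Counter &operator++() { ++i; return *this; }
      bool operator!=(Sentinel) const { return i < 10; }
    };
    struct UpToTen {
      Counter begin() const { return {0}; }
      Sentinel end() const { return {}; }
    };

    int sum() {
      int total = 0;
      for (int v : UpToTen{})   // C++1z generalized range-based for
        total += v;
      return total;
    }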
@@ -1147,7 +1188,7 @@ void CodeGenFunction::EmitCaseStmt(const CaseStmt &S) {
// If the body of the case is just a 'break', try to not emit an empty block.
// If we're profiling or we're not optimizing, leave the block in for better
// debug and coverage analysis.
- if (!CGM.getCodeGenOpts().ProfileInstrGenerate &&
+ if (!CGM.getCodeGenOpts().hasProfileClangInstr() &&
CGM.getCodeGenOpts().OptimizationLevel > 0 &&
isa<BreakStmt>(S.getSubStmt())) {
JumpDest Block = BreakContinueStack.back().BreakBlock;
@@ -1194,7 +1235,7 @@ void CodeGenFunction::EmitCaseStmt(const CaseStmt &S) {
if (SwitchWeights)
SwitchWeights->push_back(getProfileCount(NextCase));
- if (CGM.getCodeGenOpts().ProfileInstrGenerate) {
+ if (CGM.getCodeGenOpts().hasProfileClangInstr()) {
CaseDest = createBasicBlock("sw.bb");
EmitBlockWithFallThrough(CaseDest, &S);
}
@@ -1208,6 +1249,14 @@ void CodeGenFunction::EmitCaseStmt(const CaseStmt &S) {
}
void CodeGenFunction::EmitDefaultStmt(const DefaultStmt &S) {
+ // If there is no enclosing switch instance that we're aware of, then this
+ // default statement can be elided. This situation only happens when we've
+ // constant-folded the switch.
+ if (!SwitchInsn) {
+ EmitStmt(S.getSubStmt());
+ return;
+ }
+
llvm::BasicBlock *DefaultBlock = SwitchInsn->getDefaultDest();
assert(DefaultBlock->empty() &&
"EmitDefaultStmt: Default block already defined?");
@@ -1274,6 +1323,10 @@ static CSFC_Result CollectStatementsForCase(const Stmt *S,
// Handle this as two cases: we might be looking for the SwitchCase (if so
// the skipped statements must be skippable) or we might already have it.
CompoundStmt::const_body_iterator I = CS->body_begin(), E = CS->body_end();
+ bool StartedInLiveCode = FoundCase;
+ unsigned StartSize = ResultStmts.size();
+
+ // If we've not found the case yet, scan through looking for it.
if (Case) {
// Keep track of whether we see a skipped declaration. The code could be
// using the declaration even if it is skipped, so we can't optimize out
@@ -1283,7 +1336,7 @@ static CSFC_Result CollectStatementsForCase(const Stmt *S,
// If we're looking for the case, just see if we can skip each of the
// substatements.
for (; Case && I != E; ++I) {
- HadSkippedDecl |= isa<DeclStmt>(*I);
+ HadSkippedDecl |= CodeGenFunction::mightAddDeclToScope(*I);
switch (CollectStatementsForCase(*I, Case, FoundCase, ResultStmts)) {
case CSFC_Failure: return CSFC_Failure;
@@ -1319,11 +1372,19 @@ static CSFC_Result CollectStatementsForCase(const Stmt *S,
break;
}
}
+
+ if (!FoundCase)
+ return CSFC_Success;
+
+ assert(!HadSkippedDecl && "fallthrough after skipping decl");
}
// If we have statements in our range, then we know that the statements are
// live and need to be added to the set of statements we're tracking.
+ bool AnyDecls = false;
for (; I != E; ++I) {
+ AnyDecls |= CodeGenFunction::mightAddDeclToScope(*I);
+
switch (CollectStatementsForCase(*I, nullptr, FoundCase, ResultStmts)) {
case CSFC_Failure: return CSFC_Failure;
case CSFC_FallThrough:
@@ -1341,7 +1402,24 @@ static CSFC_Result CollectStatementsForCase(const Stmt *S,
}
}
- return Case ? CSFC_Success : CSFC_FallThrough;
+ // If we're about to fall out of a scope without hitting a 'break;', we
+ // can't perform the optimization if there were any decls in that scope
+ // (we'd lose their end-of-lifetime).
+ if (AnyDecls) {
+ // If the entire compound statement was live, there's one more thing we
+ // can try before giving up: emit the whole thing as a single statement.
+ // We can do that unless the statement contains a 'break;'.
+ // FIXME: Such a break must be at the end of a construct within this one.
+ // We could emit this by just ignoring the BreakStmts entirely.
+ if (StartedInLiveCode && !CodeGenFunction::containsBreak(S)) {
+ ResultStmts.resize(StartSize);
+ ResultStmts.push_back(S);
+ } else {
+ return CSFC_Failure;
+ }
+ }
+
+ return CSFC_FallThrough;
}
// Okay, this is some other statement that we don't handle explicitly, like a
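The comment in the hunk above describes falling out of a case scope that contains declarations without hitting a 'break'. The sketch below only shows the shape of such input; whether the fold is kept (by re-emitting the whole body as one statement) or abandoned is decided by the heuristics above, and all names are placeholders.

    // Illustrative only: a constant-foldable switch whose live case
    // declares a variable in the switch-body scope and reaches the end of
    // that body without a 'break'.
    int demo() {
      const int mode = 1;
      int out = 0;
      switch (mode) {
      case 1:
        int scaled;          // declaration directly in the switch body
        scaled = mode * 10;
        out = scaled;        // no 'break': control falls off the end
      }
      return out;
    }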
@@ -1438,6 +1516,9 @@ void CodeGenFunction::EmitSwitchStmt(const SwitchStmt &S) {
incrementProfileCounter(Case);
RunCleanupsScope ExecutedScope(*this);
+ if (S.getInit())
+ EmitStmt(S.getInit());
+
// Emit the condition variable if needed inside the entire cleanup scope
// used by this special case for constant folded switches.
if (S.getConditionVariable())
@@ -1465,6 +1546,10 @@ void CodeGenFunction::EmitSwitchStmt(const SwitchStmt &S) {
JumpDest SwitchExit = getJumpDestInCurrentScope("sw.epilog");
RunCleanupsScope ConditionScope(*this);
+
+ if (S.getInit())
+ EmitStmt(S.getInit());
+
if (S.getConditionVariable())
EmitAutoVarDecl(*S.getConditionVariable());
llvm::Value *CondV = EmitScalarExpr(S.getCond());
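Both switch-emission paths above now emit an optional init-statement, matching the C++1z 'switch (init; cond)' form. A small illustrative sketch, assuming -std=c++1z; the helper validate() is hypothetical and not part of the patch.

    // Illustrative only: a C++1z init-statement in a switch.
    #include <cstdio>

    int validate(int fd);   // hypothetical helper, for illustration only

    void report(int fd) {
      switch (int err = validate(fd); err) {
      case 0:
        std::puts("ok");
        break;
      default:
        std::printf("error %d\n", err);
        break;
      }
    }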
@@ -1537,16 +1622,13 @@ void CodeGenFunction::EmitSwitchStmt(const SwitchStmt &S) {
// If the switch has a condition wrapped by __builtin_unpredictable,
// create metadata that specifies that the switch is unpredictable.
// Don't bother if not optimizing because that metadata would not be used.
- if (CGM.getCodeGenOpts().OptimizationLevel != 0) {
- if (const CallExpr *Call = dyn_cast<CallExpr>(S.getCond())) {
- const Decl *TargetDecl = Call->getCalleeDecl();
- if (const FunctionDecl *FD = dyn_cast_or_null<FunctionDecl>(TargetDecl)) {
- if (FD->getBuiltinID() == Builtin::BI__builtin_unpredictable) {
- llvm::MDBuilder MDHelper(getLLVMContext());
- SwitchInsn->setMetadata(llvm::LLVMContext::MD_unpredictable,
- MDHelper.createUnpredictable());
- }
- }
+ auto *Call = dyn_cast<CallExpr>(S.getCond());
+ if (Call && CGM.getCodeGenOpts().OptimizationLevel != 0) {
+ auto *FD = dyn_cast_or_null<FunctionDecl>(Call->getCalleeDecl());
+ if (FD && FD->getBuiltinID() == Builtin::BI__builtin_unpredictable) {
+ llvm::MDBuilder MDHelper(getLLVMContext());
+ SwitchInsn->setMetadata(llvm::LLVMContext::MD_unpredictable,
+ MDHelper.createUnpredictable());
}
}
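The restructured block above attaches 'unpredictable' metadata when the switch condition is wrapped in __builtin_unpredictable and optimization is enabled. An illustrative use of the builtin, not taken from the patch:

    // Illustrative only: the builtin that produces the metadata handled
    // above; it only has an effect at -O1 and higher.
    int dispatch(int op) {
      switch (__builtin_unpredictable(op)) {
      case 0:  return 10;
      case 1:  return 20;
      default: return 0;
      }
    }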
@@ -2035,6 +2117,14 @@ void CodeGenFunction::EmitAsmStmt(const AsmStmt &S) {
llvm::ConstantAsMetadata::get(Loc)));
}
+ if (getLangOpts().CUDA && getLangOpts().CUDAIsDevice) {
+ // Conservatively, mark all inline asm blocks in CUDA as convergent
+ // (meaning, they may call an intrinsically convergent op, such as bar.sync,
+ // and so can't have certain optimizations applied around them).
+ Result->addAttribute(llvm::AttributeSet::FunctionIndex,
+ llvm::Attribute::Convergent);
+ }
+
// Extract all of the register value results from the asm.
std::vector<llvm::Value*> RegResults;
if (ResultRegTypes.size() == 1) {
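The CUDA hunk above conservatively marks device-side inline asm as convergent because the asm text may expand to a convergent operation. A minimal illustration in CUDA device code, not taken from the patch:

    // Illustrative only: device-side inline asm expanding to a convergent
    // PTX operation (bar.sync), which motivates the 'convergent' attribute.
    __global__ void block_barrier() {
      asm volatile("bar.sync 0;");
    }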
@@ -2147,8 +2237,7 @@ CodeGenFunction::GenerateCapturedStmtFunction(const CapturedStmt &S) {
// Create the function declaration.
FunctionType::ExtInfo ExtInfo;
const CGFunctionInfo &FuncInfo =
- CGM.getTypes().arrangeFreeFunctionDeclaration(Ctx.VoidTy, Args, ExtInfo,
- /*IsVariadic=*/false);
+ CGM.getTypes().arrangeBuiltinFunctionDeclaration(Ctx.VoidTy, Args);
llvm::FunctionType *FuncLLVMTy = CGM.getTypes().GetFunctionType(FuncInfo);
llvm::Function *F =