path: root/contrib/llvm/tools/clang/lib/AST/ExprConstant.cpp
author    dim <dim@FreeBSD.org>  2017-04-02 17:24:58 +0000
committer dim <dim@FreeBSD.org>  2017-04-02 17:24:58 +0000
commit    60b571e49a90d38697b3aca23020d9da42fc7d7f
tree      99351324c24d6cb146b6285b6caffa4d26fce188  /contrib/llvm/tools/clang/lib/AST/ExprConstant.cpp
parent    bea1b22c7a9bce1dfdd73e6e5b65bc4752215180
Update clang, llvm, lld, lldb, compiler-rt and libc++ to 4.0.0 release:

MFC r309142 (by emaste): Add WITH_LLD_AS_LD build knob

  If set, it installs LLD as /usr/bin/ld. LLD (as of version 3.9) is not
  capable of linking the world and kernel, but can self-host and link many
  substantial applications. GNU ld continues to be used for the world and
  kernel build, regardless of how this knob is set.

  It is on by default for arm64, and off for all other CPU architectures.

  Sponsored by:   The FreeBSD Foundation

MFC r310840: Reapply r310775, now it also builds correctly if lldb is
disabled: Move llvm-objdump from CLANG_EXTRAS to installed by default

  We currently install three tools from binutils 2.17.50: as, ld, and
  objdump. Work is underway to migrate to a permissively-licensed
  toolchain, with one goal being the retirement of binutils 2.17.50.

  LLVM's llvm-objdump is intended to be compatible with GNU objdump,
  although it is currently missing some options and may have formatting
  differences. Enable it by default for testing and further investigation.
  It may later be changed to install as /usr/bin/objdump, if it becomes a
  fully viable replacement.

  Reviewed by:    emaste
  Differential Revision:  https://reviews.freebsd.org/D8879

MFC r312855 (by emaste): Rename LLD_AS_LD to LLD_IS_LD, for consistency
with CLANG_IS_CC

  Reported by:    Dan McGregor <dan.mcgregor usask.ca>

MFC r313559 | glebius | 2017-02-10 18:34:48 +0100 (Fri, 10 Feb 2017) | 5 lines

  Don't check struct rtentry on FreeBSD; it is an internal kernel
  structure. On other systems it may be an API structure for
  SIOCADDRT/SIOCDELRT.

  Reviewed by:    emaste, dim

MFC r314152 (by jkim): Remove an assembler flag, which is redundant since
r309124. The upstream took care of it by introducing a macro
NO_EXEC_STACK_DIRECTIVE.

  http://llvm.org/viewvc/llvm-project?rev=273500&view=rev

  Reviewed by:    dim

MFC r314564: Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and
libc++ to 4.0.0 (branches/release_40 296509). The release will follow soon.

  Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11
  support to build; see UPDATING for more information.

  Also note that as of 4.0.0, lld should be able to link the base system
  on amd64 and aarch64. See the WITH_LLD_IS_LD setting in src.conf(5).
  Please be aware that this is work in progress.

  Release notes for llvm, clang and lld will be available here:
  <http://releases.llvm.org/4.0.0/docs/ReleaseNotes.html>
  <http://releases.llvm.org/4.0.0/tools/clang/docs/ReleaseNotes.html>
  <http://releases.llvm.org/4.0.0/tools/lld/docs/ReleaseNotes.html>

  Thanks to Ed Maste, Jan Beich, Antoine Brodin and Eric Fiselier for
  their help.

  Relnotes:       yes
  Exp-run:        antoine
  PR:             215969, 216008

MFC r314708: For now, revert r287232 from upstream llvm trunk (by Daniil
Fukalov):

  [SCEV] limit recursion depth of CompareSCEVComplexity

  Summary: CompareSCEVComplexity goes too deep (50+ on quite a big
  unrolled loop) and runs for an almost infinite time. Added a cache of
  "equal" SCEV pairs to cut off further estimation earlier. A recursion
  depth limit was also introduced as a parameter.

  Reviewers: sanjoy
  Subscribers: mzolotukhin, tstellarAMD, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26389

  This commit is the cause of excessive compile times on skein_block.c
  (and possibly other files) during kernel builds on amd64. We never saw
  the problematic behavior described in this upstream commit, so for now
  it is better to revert it. An upstream bug has been filed here:
  https://bugs.llvm.org/show_bug.cgi?id=32142

  Reported by:    mjg

MFC r314795: Reapply r287232 from upstream llvm trunk (by Daniil Fukalov):

  [SCEV] limit recursion depth of CompareSCEVComplexity

  Summary: CompareSCEVComplexity goes too deep (50+ on quite a big
  unrolled loop) and runs for an almost infinite time. Added a cache of
  "equal" SCEV pairs to cut off further estimation earlier. A recursion
  depth limit was also introduced as a parameter.

  Reviewers: sanjoy
  Subscribers: mzolotukhin, tstellarAMD, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26389

  Pull in r296992 from upstream llvm trunk (by Sanjoy Das):

  [SCEV] Decrease the recursion threshold for CompareValueComplexity

  Fixes PR32142. r287232 accidentally increased the recursion threshold
  for CompareValueComplexity from 2 to 32. This change reverses that by
  introducing a separate flag for CompareValueComplexity's threshold.

  The latter revision fixes the excessive compile times for skein_block.c.

MFC r314907 | mmel | 2017-03-08 12:40:27 +0100 (Wed, 08 Mar 2017) | 7 lines

  Unbreak the ARMv6 world. The new compiler_rt library imported with
  clang 4.0.0 has several fatal issues (a non-functional __udivsi3, for
  example) with ARM-specific intrinsic functions. As a temporary
  workaround, until upstream solves these problems, disable all
  thumb[1][2]-related features.

MFC r315016: Update clang, llvm, lld, lldb, compiler-rt and libc++ to the
4.0.0 release. We were already very close to the last release candidate,
so this is a pretty minor update.

  Relnotes:       yes

MFC r316005: Revert r314907, and pull in r298713 from upstream compiler-rt
trunk (by Weiming Zhao):

  builtins: Select correct code fragments when compiling for
  Thumb1/Thumb2/ARM ISA.

  Summary: The value of __ARM_ARCH_ISA_THUMB isn't based on the actual
  compilation mode (-mthumb, -marm); it reflects the capability of the
  given CPU. Due to this:
  - use __thumb__ and __thumb2__ instead of __ARM_ARCH_ISA_THUMB
  - use the '.thumb' directive consistently in all affected files
  - decorate all thumb functions using DEFINE_COMPILERRT_THUMB_FUNCTION()

  Note: This patch doesn't fix the broken Thumb1 variant of __udivsi3!

  Reviewers: weimingz, rengolin, compnerd
  Subscribers: aemerson, dim
  Differential Revision: https://reviews.llvm.org/D30938

  Discussed with: mmel
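The r316005 entry turns on the distinction between __ARM_ARCH_ISA_THUMB, which describes what the target CPU is capable of, and the mode macros __thumb__ / __thumb2__, which describe how the current translation unit is actually being compiled. A minimal sketch of the selection pattern it describes (illustrative only, not the actual compiler-rt source):

/* Illustrative only -- not the actual compiler-rt code. */
#if defined(__thumb2__)
#  define ISA_VARIANT "thumb2"   /* built with -mthumb on a Thumb2-capable core */
#elif defined(__thumb__)
#  define ISA_VARIANT "thumb1"   /* built with -mthumb on a Thumb1-only core */
#else
#  define ISA_VARIANT "arm"      /* built with -marm */
#endif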
Diffstat (limited to 'contrib/llvm/tools/clang/lib/AST/ExprConstant.cpp')
-rw-r--r--  contrib/llvm/tools/clang/lib/AST/ExprConstant.cpp | 1231
1 file changed, 955 insertions(+), 276 deletions(-)
diff --git a/contrib/llvm/tools/clang/lib/AST/ExprConstant.cpp b/contrib/llvm/tools/clang/lib/AST/ExprConstant.cpp
index df944e8..2c0fce9 100644
--- a/contrib/llvm/tools/clang/lib/AST/ExprConstant.cpp
+++ b/contrib/llvm/tools/clang/lib/AST/ExprConstant.cpp
@@ -36,6 +36,7 @@
#include "clang/AST/APValue.h"
#include "clang/AST/ASTContext.h"
#include "clang/AST/ASTDiagnostic.h"
+#include "clang/AST/ASTLambda.h"
#include "clang/AST/CharUnits.h"
#include "clang/AST/Expr.h"
#include "clang/AST/RecordLayout.h"
@@ -43,7 +44,6 @@
#include "clang/AST/TypeLoc.h"
#include "clang/Basic/Builtins.h"
#include "clang/Basic/TargetInfo.h"
-#include "llvm/ADT/SmallString.h"
#include "llvm/Support/raw_ostream.h"
#include <cstring>
#include <functional>
@@ -76,8 +76,8 @@ namespace {
const Expr *Inner = Temp->skipRValueSubobjectAdjustments(CommaLHSs,
Adjustments);
// Keep any cv-qualifiers from the reference if we generated a temporary
- // for it.
- if (Inner != Temp)
+ // for it directly. Otherwise use the type after adjustment.
+ if (!Adjustments.empty())
return Inner->getType();
}
@@ -109,19 +109,57 @@ namespace {
return getAsBaseOrMember(E).getInt();
}
+ /// Given a CallExpr, try to get the alloc_size attribute. May return null.
+ static const AllocSizeAttr *getAllocSizeAttr(const CallExpr *CE) {
+ const FunctionDecl *Callee = CE->getDirectCallee();
+ return Callee ? Callee->getAttr<AllocSizeAttr>() : nullptr;
+ }
+
+ /// Attempts to unwrap a CallExpr (with an alloc_size attribute) from an Expr.
+ /// This will look through a single cast.
+ ///
+ /// Returns null if we couldn't unwrap a function with alloc_size.
+ static const CallExpr *tryUnwrapAllocSizeCall(const Expr *E) {
+ if (!E->getType()->isPointerType())
+ return nullptr;
+
+ E = E->IgnoreParens();
+ // If we're doing a variable assignment from e.g. malloc(N), there will
+ // probably be a cast of some kind. Ignore it.
+ if (const auto *Cast = dyn_cast<CastExpr>(E))
+ E = Cast->getSubExpr()->IgnoreParens();
+
+ if (const auto *CE = dyn_cast<CallExpr>(E))
+ return getAllocSizeAttr(CE) ? CE : nullptr;
+ return nullptr;
+ }
+
+ /// Determines whether or not the given Base contains a call to a function
+ /// with the alloc_size attribute.
+ static bool isBaseAnAllocSizeCall(APValue::LValueBase Base) {
+ const auto *E = Base.dyn_cast<const Expr *>();
+ return E && E->getType()->isPointerType() && tryUnwrapAllocSizeCall(E);
+ }
+
+ /// Determines if an LValue with the given LValueBase will have an unsized
+ /// array in its designator.
/// Find the path length and type of the most-derived subobject in the given
/// path, and find the size of the containing array, if any.
- static
- unsigned findMostDerivedSubobject(ASTContext &Ctx, QualType Base,
- ArrayRef<APValue::LValuePathEntry> Path,
- uint64_t &ArraySize, QualType &Type,
- bool &IsArray) {
+ static unsigned
+ findMostDerivedSubobject(ASTContext &Ctx, APValue::LValueBase Base,
+ ArrayRef<APValue::LValuePathEntry> Path,
+ uint64_t &ArraySize, QualType &Type, bool &IsArray) {
+ // This only accepts LValueBases from APValues, and APValues don't support
+ // arrays that lack size info.
+ assert(!isBaseAnAllocSizeCall(Base) &&
+ "Unsized arrays shouldn't appear here");
unsigned MostDerivedLength = 0;
- Type = Base;
+ Type = getType(Base);
+
for (unsigned I = 0, N = Path.size(); I != N; ++I) {
if (Type->isArrayType()) {
const ConstantArrayType *CAT =
- cast<ConstantArrayType>(Ctx.getAsArrayType(Type));
+ cast<ConstantArrayType>(Ctx.getAsArrayType(Type));
Type = CAT->getElementType();
ArraySize = CAT->getSize().getZExtValue();
MostDerivedLength = I + 1;
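For context, the helpers just added match calls to functions declared with the alloc_size attribute, possibly hidden behind a single cast introduced by the assignment. A minimal sketch of the shape tryUnwrapAllocSizeCall looks for (my_alloc is an illustrative name, not part of this patch):

// Hypothetical allocation function: argument 1 is the byte count.
void *my_alloc(unsigned long nbytes) __attribute__((alloc_size(1)));

void example(void) {
  // A CastExpr wrapping a CallExpr -- exactly what the helper unwraps.
  const int *p = (const int *)my_alloc(8 * sizeof(int));
  (void)p;
}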
@@ -162,17 +200,23 @@ namespace {
/// Is this a pointer one past the end of an object?
unsigned IsOnePastTheEnd : 1;
+ /// Indicator of whether the first entry is an unsized array.
+ unsigned FirstEntryIsAnUnsizedArray : 1;
+
/// Indicator of whether the most-derived object is an array element.
unsigned MostDerivedIsArrayElement : 1;
/// The length of the path to the most-derived object of which this is a
/// subobject.
- unsigned MostDerivedPathLength : 29;
+ unsigned MostDerivedPathLength : 28;
/// The size of the array of which the most-derived object is an element.
/// This will always be 0 if the most-derived object is not an array
/// element. 0 is not an indicator of whether or not the most-derived object
/// is an array, however, because 0-length arrays are allowed.
+ ///
+ /// If the current array is an unsized array, the value of this is
+ /// undefined.
uint64_t MostDerivedArraySize;
/// The type of the most derived object referred to by this address.
@@ -187,23 +231,24 @@ namespace {
explicit SubobjectDesignator(QualType T)
: Invalid(false), IsOnePastTheEnd(false),
- MostDerivedIsArrayElement(false), MostDerivedPathLength(0),
- MostDerivedArraySize(0), MostDerivedType(T) {}
+ FirstEntryIsAnUnsizedArray(false), MostDerivedIsArrayElement(false),
+ MostDerivedPathLength(0), MostDerivedArraySize(0),
+ MostDerivedType(T) {}
SubobjectDesignator(ASTContext &Ctx, const APValue &V)
: Invalid(!V.isLValue() || !V.hasLValuePath()), IsOnePastTheEnd(false),
- MostDerivedIsArrayElement(false), MostDerivedPathLength(0),
- MostDerivedArraySize(0) {
+ FirstEntryIsAnUnsizedArray(false), MostDerivedIsArrayElement(false),
+ MostDerivedPathLength(0), MostDerivedArraySize(0) {
+ assert(V.isLValue() && "Non-LValue used to make an LValue designator?");
if (!Invalid) {
IsOnePastTheEnd = V.isLValueOnePastTheEnd();
ArrayRef<PathEntry> VEntries = V.getLValuePath();
Entries.insert(Entries.end(), VEntries.begin(), VEntries.end());
if (V.getLValueBase()) {
bool IsArray = false;
- MostDerivedPathLength =
- findMostDerivedSubobject(Ctx, getType(V.getLValueBase()),
- V.getLValuePath(), MostDerivedArraySize,
- MostDerivedType, IsArray);
+ MostDerivedPathLength = findMostDerivedSubobject(
+ Ctx, V.getLValueBase(), V.getLValuePath(), MostDerivedArraySize,
+ MostDerivedType, IsArray);
MostDerivedIsArrayElement = IsArray;
}
}
@@ -214,12 +259,26 @@ namespace {
Entries.clear();
}
+ /// Determine whether the most derived subobject is an array without a
+ /// known bound.
+ bool isMostDerivedAnUnsizedArray() const {
+ assert(!Invalid && "Calling this makes no sense on invalid designators");
+ return Entries.size() == 1 && FirstEntryIsAnUnsizedArray;
+ }
+
+ /// Determine what the most derived array's size is. Results in an assertion
+ /// failure if the most derived array lacks a size.
+ uint64_t getMostDerivedArraySize() const {
+ assert(!isMostDerivedAnUnsizedArray() && "Unsized array has no size");
+ return MostDerivedArraySize;
+ }
+
/// Determine whether this is a one-past-the-end pointer.
bool isOnePastTheEnd() const {
assert(!Invalid);
if (IsOnePastTheEnd)
return true;
- if (MostDerivedIsArrayElement &&
+ if (!isMostDerivedAnUnsizedArray() && MostDerivedIsArrayElement &&
Entries[MostDerivedPathLength - 1].ArrayIndex == MostDerivedArraySize)
return true;
return false;
@@ -247,6 +306,21 @@ namespace {
MostDerivedArraySize = CAT->getSize().getZExtValue();
MostDerivedPathLength = Entries.size();
}
+ /// Update this designator to refer to the first element within the array of
+ /// elements of type T. This is an array of unknown size.
+ void addUnsizedArrayUnchecked(QualType ElemTy) {
+ PathEntry Entry;
+ Entry.ArrayIndex = 0;
+ Entries.push_back(Entry);
+
+ MostDerivedType = ElemTy;
+ MostDerivedIsArrayElement = true;
+ // The value in MostDerivedArraySize is undefined in this case. So, set it
+ // to an arbitrary value that's likely to loudly break things if it's
+ // used.
+ MostDerivedArraySize = std::numeric_limits<uint64_t>::max() / 2;
+ MostDerivedPathLength = Entries.size();
+ }
/// Update this designator to refer to the given base or member of this
/// object.
void addDeclUnchecked(const Decl *D, bool Virtual = false) {
@@ -280,10 +354,16 @@ namespace {
/// Add N to the address of this subobject.
void adjustIndex(EvalInfo &Info, const Expr *E, uint64_t N) {
if (Invalid) return;
+ if (isMostDerivedAnUnsizedArray()) {
+ // Can't verify -- trust that the user is doing the right thing (or if
+ // not, trust that the caller will catch the bad behavior).
+ Entries.back().ArrayIndex += N;
+ return;
+ }
if (MostDerivedPathLength == Entries.size() &&
MostDerivedIsArrayElement) {
Entries.back().ArrayIndex += N;
- if (Entries.back().ArrayIndex > MostDerivedArraySize) {
+ if (Entries.back().ArrayIndex > getMostDerivedArraySize()) {
diagnosePointerArithmetic(Info, E, Entries.back().ArrayIndex);
setInvalid();
}
@@ -310,15 +390,9 @@ namespace {
/// Parent - The caller of this stack frame.
CallStackFrame *Caller;
- /// CallLoc - The location of the call expression for this call.
- SourceLocation CallLoc;
-
/// Callee - The function which was called.
const FunctionDecl *Callee;
- /// Index - The call index of this call.
- unsigned Index;
-
/// This - The binding for the this pointer in this call, if any.
const LValue *This;
@@ -333,6 +407,12 @@ namespace {
/// Temporaries - Temporary lvalues materialized within this stack frame.
MapTy Temporaries;
+ /// CallLoc - The location of the call expression for this call.
+ SourceLocation CallLoc;
+
+ /// Index - The call index of this call.
+ unsigned Index;
+
CallStackFrame(EvalInfo &Info, SourceLocation CallLoc,
const FunctionDecl *Callee, const LValue *This,
APValue *Arguments);
@@ -433,7 +513,7 @@ namespace {
/// rules. For example, the RHS of (0 && foo()) is not evaluated. We can
/// evaluate the expression regardless of what the RHS is, but C only allows
/// certain things in certain situations.
- struct EvalInfo {
+ struct LLVM_ALIGNAS(/*alignof(uint64_t)*/ 8) EvalInfo {
ASTContext &Ctx;
/// EvalStatus - Contains information about the evaluation.
@@ -469,6 +549,10 @@ namespace {
/// declaration whose initializer is being evaluated, if any.
APValue *EvaluatingDeclValue;
+ /// The current array initialization index, if we're performing array
+ /// initialization.
+ uint64_t ArrayInitIndex = -1;
+
/// HasActiveDiagnostic - Was the previous diagnostic stored? If so, further
/// notes attached to it will also be stored, otherwise they will not be.
bool HasActiveDiagnostic;
@@ -520,9 +604,17 @@ namespace {
/// gets a chance to look at it.
EM_PotentialConstantExpressionUnevaluated,
- /// Evaluate as a constant expression. Continue evaluating if we find a
- /// MemberExpr with a base that can't be evaluated.
- EM_DesignatorFold,
+ /// Evaluate as a constant expression. In certain scenarios, if:
+ /// - we find a MemberExpr with a base that can't be evaluated, or
+ /// - we find a variable initialized with a call to a function that has
+ /// the alloc_size attribute on it
+ /// then we may consider evaluation to have succeeded.
+ ///
+ /// In either case, the LValue returned shall have an invalid base; in the
+ /// former, the base will be the invalid MemberExpr, in the latter, the
+ /// base will be either the alloc_size CallExpr or a CastExpr wrapping
+ /// said CallExpr.
+ EM_OffsetFold,
} EvalMode;
/// Are we checking whether the expression is a potential constant
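The first case the EM_OffsetFold comment lists (a MemberExpr whose base cannot be evaluated) is what lets __builtin_object_size reason about a member designator even when the containing object is unknown. A small sketch of that situation (names are illustrative):

struct S { char buf[16]; };

unsigned long member_size(struct S *s) {
  // 's' itself cannot be evaluated, but the designator 's->buf' is enough:
  // with type 1 (closest enclosing subobject) this folds to 16.
  return __builtin_object_size(s->buf, 1);
}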
@@ -624,7 +716,7 @@ namespace {
case EM_PotentialConstantExpression:
case EM_ConstantExpressionUnevaluated:
case EM_PotentialConstantExpressionUnevaluated:
- case EM_DesignatorFold:
+ case EM_OffsetFold:
HasActiveDiagnostic = false;
return OptionalDiagnostic();
}
@@ -716,7 +808,7 @@ namespace {
case EM_ConstantExpression:
case EM_ConstantExpressionUnevaluated:
case EM_ConstantFold:
- case EM_DesignatorFold:
+ case EM_OffsetFold:
return false;
}
llvm_unreachable("Missed EvalMode case");
@@ -735,7 +827,7 @@ namespace {
case EM_EvaluateForOverflow:
case EM_IgnoreSideEffects:
case EM_ConstantFold:
- case EM_DesignatorFold:
+ case EM_OffsetFold:
return true;
case EM_PotentialConstantExpression:
@@ -771,7 +863,7 @@ namespace {
case EM_ConstantExpressionUnevaluated:
case EM_ConstantFold:
case EM_IgnoreSideEffects:
- case EM_DesignatorFold:
+ case EM_OffsetFold:
return false;
}
llvm_unreachable("Missed EvalMode case");
@@ -787,7 +879,7 @@ namespace {
/// (Foo(), 1) // use noteSideEffect
/// (Foo() || true) // use noteSideEffect
/// Foo() + 1 // use noteFailure
- LLVM_ATTRIBUTE_UNUSED_RESULT bool noteFailure() {
+ LLVM_NODISCARD bool noteFailure() {
// Failure when evaluating some expression often means there is some
// subexpression whose evaluation was skipped. Therefore, (because we
// don't track whether we skipped an expression when unwinding after an
@@ -800,9 +892,19 @@ namespace {
return KeepGoing;
}
- bool allowInvalidBaseExpr() const {
- return EvalMode == EM_DesignatorFold;
- }
+ class ArrayInitLoopIndex {
+ EvalInfo &Info;
+ uint64_t OuterIndex;
+
+ public:
+ ArrayInitLoopIndex(EvalInfo &Info)
+ : Info(Info), OuterIndex(Info.ArrayInitIndex) {
+ Info.ArrayInitIndex = 0;
+ }
+ ~ArrayInitLoopIndex() { Info.ArrayInitIndex = OuterIndex; }
+
+ operator uint64_t&() { return Info.ArrayInitIndex; }
+ };
};
/// Object used to treat all foldable expressions as constant expressions.
@@ -838,11 +940,10 @@ namespace {
struct FoldOffsetRAII {
EvalInfo &Info;
EvalInfo::EvaluationMode OldMode;
- explicit FoldOffsetRAII(EvalInfo &Info, bool Subobject)
+ explicit FoldOffsetRAII(EvalInfo &Info)
: Info(Info), OldMode(Info.EvalMode) {
if (!Info.checkingPotentialConstantExpression())
- Info.EvalMode = Subobject ? EvalInfo::EM_DesignatorFold
- : EvalInfo::EM_ConstantFold;
+ Info.EvalMode = EvalInfo::EM_OffsetFold;
}
~FoldOffsetRAII() { Info.EvalMode = OldMode; }
@@ -948,10 +1049,12 @@ bool SubobjectDesignator::checkSubobject(EvalInfo &Info, const Expr *E,
void SubobjectDesignator::diagnosePointerArithmetic(EvalInfo &Info,
const Expr *E, uint64_t N) {
+ // If we're complaining, we must be able to statically determine the size of
+ // the most derived array.
if (MostDerivedPathLength == Entries.size() && MostDerivedIsArrayElement)
Info.CCEDiag(E, diag::note_constexpr_array_index)
<< static_cast<int>(N) << /*array*/ 0
- << static_cast<unsigned>(MostDerivedArraySize);
+ << static_cast<unsigned>(getMostDerivedArraySize());
else
Info.CCEDiag(E, diag::note_constexpr_array_index)
<< static_cast<int>(N) << /*non-array*/ 1;
@@ -961,8 +1064,8 @@ void SubobjectDesignator::diagnosePointerArithmetic(EvalInfo &Info,
CallStackFrame::CallStackFrame(EvalInfo &Info, SourceLocation CallLoc,
const FunctionDecl *Callee, const LValue *This,
APValue *Arguments)
- : Info(Info), Caller(Info.CurrentCall), CallLoc(CallLoc), Callee(Callee),
- Index(Info.NextCallIndex++), This(This), Arguments(Arguments) {
+ : Info(Info), Caller(Info.CurrentCall), Callee(Callee), This(This),
+ Arguments(Arguments), CallLoc(CallLoc), Index(Info.NextCallIndex++) {
Info.CurrentCall = this;
++Info.CallStackDepth;
}
@@ -1032,7 +1135,7 @@ namespace {
APSInt IntReal, IntImag;
APFloat FloatReal, FloatImag;
- ComplexValue() : FloatReal(APFloat::Bogus), FloatImag(APFloat::Bogus) {}
+ ComplexValue() : FloatReal(APFloat::Bogus()), FloatImag(APFloat::Bogus()) {}
void makeComplexFloat() { IsInt = false; }
bool isComplexFloat() const { return !IsInt; }
@@ -1070,6 +1173,7 @@ namespace {
unsigned InvalidBase : 1;
unsigned CallIndex : 31;
SubobjectDesignator Designator;
+ bool IsNullPtr;
const APValue::LValueBase getLValueBase() const { return Base; }
CharUnits &getLValueOffset() { return Offset; }
@@ -1077,29 +1181,47 @@ namespace {
unsigned getLValueCallIndex() const { return CallIndex; }
SubobjectDesignator &getLValueDesignator() { return Designator; }
const SubobjectDesignator &getLValueDesignator() const { return Designator;}
+ bool isNullPointer() const { return IsNullPtr;}
void moveInto(APValue &V) const {
if (Designator.Invalid)
- V = APValue(Base, Offset, APValue::NoLValuePath(), CallIndex);
- else
+ V = APValue(Base, Offset, APValue::NoLValuePath(), CallIndex,
+ IsNullPtr);
+ else {
+ assert(!InvalidBase && "APValues can't handle invalid LValue bases");
+ assert(!Designator.FirstEntryIsAnUnsizedArray &&
+ "Unsized array with a valid base?");
V = APValue(Base, Offset, Designator.Entries,
- Designator.IsOnePastTheEnd, CallIndex);
+ Designator.IsOnePastTheEnd, CallIndex, IsNullPtr);
+ }
}
void setFrom(ASTContext &Ctx, const APValue &V) {
- assert(V.isLValue());
+ assert(V.isLValue() && "Setting LValue from a non-LValue?");
Base = V.getLValueBase();
Offset = V.getLValueOffset();
InvalidBase = false;
CallIndex = V.getLValueCallIndex();
Designator = SubobjectDesignator(Ctx, V);
+ IsNullPtr = V.isNullPointer();
}
- void set(APValue::LValueBase B, unsigned I = 0, bool BInvalid = false) {
+ void set(APValue::LValueBase B, unsigned I = 0, bool BInvalid = false,
+ bool IsNullPtr_ = false, uint64_t Offset_ = 0) {
+#ifndef NDEBUG
+ // We only allow a few types of invalid bases. Enforce that here.
+ if (BInvalid) {
+ const auto *E = B.get<const Expr *>();
+ assert((isa<MemberExpr>(E) || tryUnwrapAllocSizeCall(E)) &&
+ "Unexpected type of invalid base");
+ }
+#endif
+
Base = B;
- Offset = CharUnits::Zero();
+ Offset = CharUnits::fromQuantity(Offset_);
InvalidBase = BInvalid;
CallIndex = I;
Designator = SubobjectDesignator(getType(B));
+ IsNullPtr = IsNullPtr_;
}
void setInvalid(APValue::LValueBase B, unsigned I = 0) {
@@ -1112,7 +1234,7 @@ namespace {
CheckSubobjectKind CSK) {
if (Designator.Invalid)
return false;
- if (!Base) {
+ if (IsNullPtr) {
Info.CCEDiag(E, diag::note_constexpr_null_subobject)
<< CSK;
Designator.setInvalid();
@@ -1133,6 +1255,13 @@ namespace {
if (checkSubobject(Info, E, isa<FieldDecl>(D) ? CSK_Field : CSK_Base))
Designator.addDeclUnchecked(D, Virtual);
}
+ void addUnsizedArray(EvalInfo &Info, QualType ElemTy) {
+ assert(Designator.Entries.empty() && getType(Base)->isPointerType());
+ assert(isBaseAnAllocSizeCall(Base) &&
+ "Only alloc_size bases can have unsized arrays");
+ Designator.FirstEntryIsAnUnsizedArray = true;
+ Designator.addUnsizedArrayUnchecked(ElemTy);
+ }
void addArray(EvalInfo &Info, const Expr *E, const ConstantArrayType *CAT) {
if (checkSubobject(Info, E, CSK_ArrayToPointer))
Designator.addArrayUnchecked(CAT);
@@ -1141,9 +1270,22 @@ namespace {
if (checkSubobject(Info, E, Imag ? CSK_Imag : CSK_Real))
Designator.addComplexUnchecked(EltTy, Imag);
}
- void adjustIndex(EvalInfo &Info, const Expr *E, uint64_t N) {
- if (N && checkNullPointer(Info, E, CSK_ArrayIndex))
- Designator.adjustIndex(Info, E, N);
+ void clearIsNullPointer() {
+ IsNullPtr = false;
+ }
+ void adjustOffsetAndIndex(EvalInfo &Info, const Expr *E, uint64_t Index,
+ CharUnits ElementSize) {
+ // Compute the new offset in the appropriate width.
+ Offset += Index * ElementSize;
+ if (Index && checkNullPointer(Info, E, CSK_ArrayIndex))
+ Designator.adjustIndex(Info, E, Index);
+ if (Index)
+ clearIsNullPointer();
+ }
+ void adjustOffset(CharUnits N) {
+ Offset += N;
+ if (N.getQuantity())
+ clearIsNullPointer();
}
};
@@ -1250,8 +1392,10 @@ static bool Evaluate(APValue &Result, EvalInfo &Info, const Expr *E);
static bool EvaluateInPlace(APValue &Result, EvalInfo &Info,
const LValue &This, const Expr *E,
bool AllowNonLiteralTypes = false);
-static bool EvaluateLValue(const Expr *E, LValue &Result, EvalInfo &Info);
-static bool EvaluatePointer(const Expr *E, LValue &Result, EvalInfo &Info);
+static bool EvaluateLValue(const Expr *E, LValue &Result, EvalInfo &Info,
+ bool InvalidBaseOK = false);
+static bool EvaluatePointer(const Expr *E, LValue &Result, EvalInfo &Info,
+ bool InvalidBaseOK = false);
static bool EvaluateMemberPointer(const Expr *E, MemberPtr &Result,
EvalInfo &Info);
static bool EvaluateTemporary(const Expr *E, LValue &Result, EvalInfo &Info);
@@ -1483,8 +1627,17 @@ static bool CheckLiteralType(EvalInfo &Info, const Expr *E,
// C++1y: A constant initializer for an object o [...] may also invoke
// constexpr constructors for o and its subobjects even if those objects
// are of non-literal class types.
- if (Info.getLangOpts().CPlusPlus14 && This &&
- Info.EvaluatingDecl == This->getLValueBase())
+ //
+ // C++11 missed this detail for aggregates, so classes like this:
+ // struct foo_t { union { int i; volatile int j; } u; };
+ // are not (obviously) initializable like so:
+ // __attribute__((__require_constant_initialization__))
+ // static const foo_t x = {{0}};
+ // because "i" is a subobject with non-literal initialization (due to the
+ // volatile member of the union). See:
+ // http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1677
+ // Therefore, we use the C++1y behavior.
+ if (This && Info.EvaluatingDecl == This->getLValueBase())
return true;
// Prvalue constant expressions must be of literal types.
@@ -2018,7 +2171,7 @@ static bool HandleLValueMember(EvalInfo &Info, const Expr *E, LValue &LVal,
}
unsigned I = FD->getFieldIndex();
- LVal.Offset += Info.Ctx.toCharUnitsFromBits(RL->getFieldOffset(I));
+ LVal.adjustOffset(Info.Ctx.toCharUnitsFromBits(RL->getFieldOffset(I)));
LVal.addDecl(Info, E, FD);
return true;
}
@@ -2072,9 +2225,7 @@ static bool HandleLValueArrayAdjustment(EvalInfo &Info, const Expr *E,
if (!HandleSizeof(Info, E->getExprLoc(), EltTy, SizeOfPointee))
return false;
- // Compute the new offset in the appropriate width.
- LVal.Offset += Adjustment * SizeOfPointee;
- LVal.adjustIndex(Info, E, Adjustment);
+ LVal.adjustOffsetAndIndex(Info, E, Adjustment, SizeOfPointee);
return true;
}
@@ -2125,7 +2276,22 @@ static bool evaluateVarDeclInit(EvalInfo &Info, const Expr *E,
// If this is a local variable, dig out its value.
if (Frame) {
Result = Frame->getTemporary(VD);
- assert(Result && "missing value for local variable");
+ if (!Result) {
+ // Assume variables referenced within a lambda's call operator that were
+ // not declared within the call operator are captures and during checking
+ // of a potential constant expression, assume they are unknown constant
+ // expressions.
+ assert(isLambdaCallOperator(Frame->Callee) &&
+ (VD->getDeclContext() != Frame->Callee || VD->isInitCapture()) &&
+ "missing value for local variable");
+ if (Info.checkingPotentialConstantExpression())
+ return false;
+ // FIXME: implement capture evaluation during constant expr evaluation.
+ Info.FFDiag(E->getLocStart(),
+ diag::note_unimplemented_constexpr_lambda_feature_ast)
+ << "captures not currently allowed";
+ return false;
+ }
return true;
}
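The fallback just added covers DeclRefExprs inside a lambda's call operator that really name captures; constant evaluation of captures is not implemented yet, so such uses are diagnosed instead of tripping the old assertion. A hedged sketch of code that runs into the lambda-capture limitation:

constexpr int twice(int x) {
  auto doubler = [x] { return 2 * x; };  // 'x' in the body refers to a capture
  return doubler();
}
// Using twice() in a constant expression (e.g. static_assert(twice(2) == 4, ""))
// is rejected with a note that constexpr lambda captures are not implemented
// yet, rather than an assertion failure.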
@@ -2196,7 +2362,14 @@ static unsigned getBaseIndex(const CXXRecordDecl *Derived,
/// Extract the value of a character from a string literal.
static APSInt extractStringLiteralCharacter(EvalInfo &Info, const Expr *Lit,
uint64_t Index) {
- // FIXME: Support ObjCEncodeExpr, MakeStringConstant
+ // FIXME: Support MakeStringConstant
+ if (const auto *ObjCEnc = dyn_cast<ObjCEncodeExpr>(Lit)) {
+ std::string Str;
+ Info.Ctx.getObjCEncodingForType(ObjCEnc->getEncodedType(), Str);
+ assert(Index <= Str.size() && "Index too large");
+ return APSInt::getUnsigned(Str.c_str()[Index]);
+ }
+
if (auto PE = dyn_cast<PredefinedExpr>(Lit))
Lit = PE->getFunctionName();
const StringLiteral *S = cast<StringLiteral>(Lit);
@@ -2771,6 +2944,9 @@ static CompleteObject findCompleteObject(EvalInfo &Info, const Expr *E,
} else {
Info.CCEDiag(E);
}
+ } else if (BaseType.isConstQualified() && VD->hasDefinition(Info.Ctx)) {
+ Info.CCEDiag(E, diag::note_constexpr_ltor_non_constexpr) << VD;
+ // Keep evaluating to see what we can do.
} else {
// FIXME: Allow folding of values of any literal type in all languages.
if (Info.checkingPotentialConstantExpression() &&
@@ -2892,7 +3068,6 @@ static bool handleLValueToRValueConversion(EvalInfo &Info, const Expr *Conv,
// In C99, a CompoundLiteralExpr is an lvalue, and we defer evaluating the
// initializer until now for such expressions. Such an expression can't be
// an ICE in C, so this only matters for fold.
- assert(!Info.getLangOpts().CPlusPlus && "lvalue compound literal in c++?");
if (Type.isVolatileQualified()) {
Info.FFDiag(Conv);
return false;
@@ -3385,38 +3560,51 @@ enum EvalStmtResult {
};
}
-static bool EvaluateDecl(EvalInfo &Info, const Decl *D) {
- if (const VarDecl *VD = dyn_cast<VarDecl>(D)) {
- // We don't need to evaluate the initializer for a static local.
- if (!VD->hasLocalStorage())
- return true;
+static bool EvaluateVarDecl(EvalInfo &Info, const VarDecl *VD) {
+ // We don't need to evaluate the initializer for a static local.
+ if (!VD->hasLocalStorage())
+ return true;
- LValue Result;
- Result.set(VD, Info.CurrentCall->Index);
- APValue &Val = Info.CurrentCall->createTemporary(VD, true);
+ LValue Result;
+ Result.set(VD, Info.CurrentCall->Index);
+ APValue &Val = Info.CurrentCall->createTemporary(VD, true);
- const Expr *InitE = VD->getInit();
- if (!InitE) {
- Info.FFDiag(D->getLocStart(), diag::note_constexpr_uninitialized)
- << false << VD->getType();
- Val = APValue();
- return false;
- }
+ const Expr *InitE = VD->getInit();
+ if (!InitE) {
+ Info.FFDiag(VD->getLocStart(), diag::note_constexpr_uninitialized)
+ << false << VD->getType();
+ Val = APValue();
+ return false;
+ }
- if (InitE->isValueDependent())
- return false;
+ if (InitE->isValueDependent())
+ return false;
- if (!EvaluateInPlace(Val, Info, Result, InitE)) {
- // Wipe out any partially-computed value, to allow tracking that this
- // evaluation failed.
- Val = APValue();
- return false;
- }
+ if (!EvaluateInPlace(Val, Info, Result, InitE)) {
+ // Wipe out any partially-computed value, to allow tracking that this
+ // evaluation failed.
+ Val = APValue();
+ return false;
}
return true;
}
+static bool EvaluateDecl(EvalInfo &Info, const Decl *D) {
+ bool OK = true;
+
+ if (const VarDecl *VD = dyn_cast<VarDecl>(D))
+ OK &= EvaluateVarDecl(Info, VD);
+
+ if (const DecompositionDecl *DD = dyn_cast<DecompositionDecl>(D))
+ for (auto *BD : DD->bindings())
+ if (auto *VD = BD->getHoldingVar())
+ OK &= EvaluateDecl(Info, VD);
+
+ return OK;
+}
+
+
/// Evaluate a condition (either a variable declaration or an expression).
static bool EvaluateCond(EvalInfo &Info, const VarDecl *CondDecl,
const Expr *Cond, bool &Result) {
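EvaluateDecl now also walks the bindings of a DecompositionDecl, so structured bindings declared inside a constexpr function participate in constant evaluation. A minimal sketch (C++1z mode assumed):

struct Pair { int a, b; };

constexpr int sum() {
  auto [x, y] = Pair{1, 2};  // a DecompositionDecl with two BindingDecls
  return x + y;
}
static_assert(sum() == 3, "");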
@@ -4371,6 +4559,12 @@ public:
Call.getLValueBase().dyn_cast<const ValueDecl*>());
if (!FD)
return Error(Callee);
+ // Don't call function pointers which have been cast to some other type.
+ // Per DR (no number yet), the caller and callee can differ in noexcept.
+ if (!Info.Ctx.hasSameFunctionTypeIgnoringExceptionSpec(
+ CalleeType->getPointeeType(), FD->getType())) {
+ return Error(E);
+ }
// Overloaded operator calls to member functions are represented as normal
// calls with '*this' as the first argument.
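The check relocated above now compares function types while ignoring the exception specification, matching the DR mentioned in the comment: a noexcept function may be called through a pointer to a non-noexcept function type. A small sketch:

constexpr int one() noexcept { return 1; }
constexpr int (*fp)() = one;   // the pointer's type drops 'noexcept'
static_assert(fp() == 1, "");  // callee and pointer type differ only in noexcept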
@@ -4386,11 +4580,42 @@ public:
return false;
This = &ThisVal;
Args = Args.slice(1);
+ } else if (MD && MD->isLambdaStaticInvoker()) {
+ // Map the static invoker for the lambda back to the call operator.
+ // Conveniently, we don't have to slice out the 'this' argument (as is
+ // being done for the non-static case), since a static member function
+ // doesn't have an implicit argument passed in.
+ const CXXRecordDecl *ClosureClass = MD->getParent();
+ assert(
+ ClosureClass->captures_begin() == ClosureClass->captures_end() &&
+ "Number of captures must be zero for conversion to function-ptr");
+
+ const CXXMethodDecl *LambdaCallOp =
+ ClosureClass->getLambdaCallOperator();
+
+ // Set 'FD', the function that will be called below, to the call
+ // operator. If the closure object represents a generic lambda, find
+ // the corresponding specialization of the call operator.
+
+ if (ClosureClass->isGenericLambda()) {
+ assert(MD->isFunctionTemplateSpecialization() &&
+ "A generic lambda's static-invoker function must be a "
+ "template specialization");
+ const TemplateArgumentList *TAL = MD->getTemplateSpecializationArgs();
+ FunctionTemplateDecl *CallOpTemplate =
+ LambdaCallOp->getDescribedFunctionTemplate();
+ void *InsertPos = nullptr;
+ FunctionDecl *CorrespondingCallOpSpecialization =
+ CallOpTemplate->findSpecialization(TAL->asArray(), InsertPos);
+ assert(CorrespondingCallOpSpecialization &&
+ "We must always have a function call operator specialization "
+ "that corresponds to our static invoker specialization");
+ FD = cast<CXXMethodDecl>(CorrespondingCallOpSpecialization);
+ } else
+ FD = LambdaCallOp;
}
- // Don't call function pointers which have been cast to some other type.
- if (!Info.Ctx.hasSameType(CalleeType->getPointeeType(), FD->getType()))
- return Error(E);
+
} else
return Error(E);
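The block above maps a captureless lambda's static invoker back to its call operator so that calls made through the lambda-to-function-pointer conversion can be constant evaluated. A hedged sketch of the intended use, assuming a compiler with complete C++1z constexpr-lambda support (the feature was still being completed in the 4.0 timeframe):

constexpr int (*succ)(int) = [](int x) { return x + 1; };  // captureless lambda
static_assert(succ(41) == 42, "");  // the static invoker forwards to operator()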
@@ -4578,6 +4803,7 @@ class LValueExprEvaluatorBase
: public ExprEvaluatorBase<Derived> {
protected:
LValue &Result;
+ bool InvalidBaseOK;
typedef LValueExprEvaluatorBase LValueExprEvaluatorBaseTy;
typedef ExprEvaluatorBase<Derived> ExprEvaluatorBaseTy;
@@ -4586,9 +4812,14 @@ protected:
return true;
}
+ bool evaluatePointer(const Expr *E, LValue &Result) {
+ return EvaluatePointer(E, Result, this->Info, InvalidBaseOK);
+ }
+
public:
- LValueExprEvaluatorBase(EvalInfo &Info, LValue &Result) :
- ExprEvaluatorBaseTy(Info), Result(Result) {}
+ LValueExprEvaluatorBase(EvalInfo &Info, LValue &Result, bool InvalidBaseOK)
+ : ExprEvaluatorBaseTy(Info), Result(Result),
+ InvalidBaseOK(InvalidBaseOK) {}
bool Success(const APValue &V, const Expr *E) {
Result.setFrom(this->Info.Ctx, V);
@@ -4600,7 +4831,7 @@ public:
QualType BaseTy;
bool EvalOK;
if (E->isArrow()) {
- EvalOK = EvaluatePointer(E->getBase(), Result, this->Info);
+ EvalOK = evaluatePointer(E->getBase(), Result);
BaseTy = E->getBase()->getType()->castAs<PointerType>()->getPointeeType();
} else if (E->getBase()->isRValue()) {
assert(E->getBase()->getType()->isRecordType());
@@ -4611,7 +4842,7 @@ public:
BaseTy = E->getBase()->getType();
}
if (!EvalOK) {
- if (!this->Info.allowInvalidBaseExpr())
+ if (!InvalidBaseOK)
return false;
Result.setInvalid(E);
return true;
@@ -4683,7 +4914,7 @@ public:
// * VarDecl
// * FunctionDecl
// - Literals
-// * CompoundLiteralExpr in C
+// * CompoundLiteralExpr in C (and in global scope in C++)
// * StringLiteral
// * CXXTypeidExpr
// * PredefinedExpr
@@ -4705,8 +4936,8 @@ namespace {
class LValueExprEvaluator
: public LValueExprEvaluatorBase<LValueExprEvaluator> {
public:
- LValueExprEvaluator(EvalInfo &Info, LValue &Result) :
- LValueExprEvaluatorBaseTy(Info, Result) {}
+ LValueExprEvaluator(EvalInfo &Info, LValue &Result, bool InvalidBaseOK) :
+ LValueExprEvaluatorBaseTy(Info, Result, InvalidBaseOK) {}
bool VisitVarDecl(const Expr *E, const VarDecl *VD);
bool VisitUnaryPreIncDec(const UnaryOperator *UO);
@@ -4759,10 +4990,11 @@ public:
/// * function designators in C, and
/// * "extern void" objects
/// * @selector() expressions in Objective-C
-static bool EvaluateLValue(const Expr *E, LValue &Result, EvalInfo &Info) {
+static bool EvaluateLValue(const Expr *E, LValue &Result, EvalInfo &Info,
+ bool InvalidBaseOK) {
assert(E->isGLValue() || E->getType()->isFunctionType() ||
E->getType()->isVoidType() || isa<ObjCSelectorExpr>(E));
- return LValueExprEvaluator(Info, Result).Visit(E);
+ return LValueExprEvaluator(Info, Result, InvalidBaseOK).Visit(E);
}
bool LValueExprEvaluator::VisitDeclRefExpr(const DeclRefExpr *E) {
@@ -4770,13 +5002,26 @@ bool LValueExprEvaluator::VisitDeclRefExpr(const DeclRefExpr *E) {
return Success(FD);
if (const VarDecl *VD = dyn_cast<VarDecl>(E->getDecl()))
return VisitVarDecl(E, VD);
+ if (const BindingDecl *BD = dyn_cast<BindingDecl>(E->getDecl()))
+ return Visit(BD->getBinding());
return Error(E);
}
+
bool LValueExprEvaluator::VisitVarDecl(const Expr *E, const VarDecl *VD) {
CallStackFrame *Frame = nullptr;
- if (VD->hasLocalStorage() && Info.CurrentCall->Index > 1)
- Frame = Info.CurrentCall;
+ if (VD->hasLocalStorage() && Info.CurrentCall->Index > 1) {
+ // Only if a local variable was declared in the function currently being
+ // evaluated, do we expect to be able to find its value in the current
+ // frame. (Otherwise it was likely declared in an enclosing context and
+ // could either have a valid evaluatable value (for e.g. a constexpr
+ // variable) or be ill-formed (and trigger an appropriate evaluation
+ // diagnostic)).
+ if (Info.CurrentCall->Callee &&
+ Info.CurrentCall->Callee->Equals(VD->getDeclContext())) {
+ Frame = Info.CurrentCall;
+ }
+ }
if (!VD->getType()->isReferenceType()) {
if (Frame) {
@@ -4865,7 +5110,8 @@ bool LValueExprEvaluator::VisitMaterializeTemporaryExpr(
bool
LValueExprEvaluator::VisitCompoundLiteralExpr(const CompoundLiteralExpr *E) {
- assert(!Info.getLangOpts().CPlusPlus && "lvalue compound literal in c++?");
+ assert((!Info.getLangOpts().CPlusPlus || E->isFileScope()) &&
+ "lvalue compound literal in c++?");
// Defer visiting the literal until the lvalue-to-rvalue conversion. We can
// only see this when folding in C, so there's no standard to follow here.
return Success(E);
@@ -4909,7 +5155,7 @@ bool LValueExprEvaluator::VisitArraySubscriptExpr(const ArraySubscriptExpr *E) {
if (E->getBase()->getType()->isVectorType())
return Error(E);
- if (!EvaluatePointer(E->getBase(), Result, Info))
+ if (!evaluatePointer(E->getBase(), Result))
return false;
APSInt Index;
@@ -4921,7 +5167,7 @@ bool LValueExprEvaluator::VisitArraySubscriptExpr(const ArraySubscriptExpr *E) {
}
bool LValueExprEvaluator::VisitUnaryDeref(const UnaryOperator *E) {
- return EvaluatePointer(E->getSubExpr(), Result, Info);
+ return evaluatePointer(E->getSubExpr(), Result);
}
bool LValueExprEvaluator::VisitUnaryReal(const UnaryOperator *E) {
@@ -5000,26 +5246,139 @@ bool LValueExprEvaluator::VisitBinAssign(const BinaryOperator *E) {
// Pointer Evaluation
//===----------------------------------------------------------------------===//
+/// \brief Attempts to compute the number of bytes available at the pointer
+/// returned by a function with the alloc_size attribute. Returns true if we
+/// were successful. Places an unsigned number into `Result`.
+///
+/// This expects the given CallExpr to be a call to a function with an
+/// alloc_size attribute.
+static bool getBytesReturnedByAllocSizeCall(const ASTContext &Ctx,
+ const CallExpr *Call,
+ llvm::APInt &Result) {
+ const AllocSizeAttr *AllocSize = getAllocSizeAttr(Call);
+
+ // alloc_size args are 1-indexed, 0 means not present.
+ assert(AllocSize && AllocSize->getElemSizeParam() != 0);
+ unsigned SizeArgNo = AllocSize->getElemSizeParam() - 1;
+ unsigned BitsInSizeT = Ctx.getTypeSize(Ctx.getSizeType());
+ if (Call->getNumArgs() <= SizeArgNo)
+ return false;
+
+ auto EvaluateAsSizeT = [&](const Expr *E, APSInt &Into) {
+ if (!E->EvaluateAsInt(Into, Ctx, Expr::SE_AllowSideEffects))
+ return false;
+ if (Into.isNegative() || !Into.isIntN(BitsInSizeT))
+ return false;
+ Into = Into.zextOrSelf(BitsInSizeT);
+ return true;
+ };
+
+ APSInt SizeOfElem;
+ if (!EvaluateAsSizeT(Call->getArg(SizeArgNo), SizeOfElem))
+ return false;
+
+ if (!AllocSize->getNumElemsParam()) {
+ Result = std::move(SizeOfElem);
+ return true;
+ }
+
+ APSInt NumberOfElems;
+ // Argument numbers start at 1
+ unsigned NumArgNo = AllocSize->getNumElemsParam() - 1;
+ if (!EvaluateAsSizeT(Call->getArg(NumArgNo), NumberOfElems))
+ return false;
+
+ bool Overflow;
+ llvm::APInt BytesAvailable = SizeOfElem.umul_ov(NumberOfElems, Overflow);
+ if (Overflow)
+ return false;
+
+ Result = std::move(BytesAvailable);
+ return true;
+}
+
+/// \brief Convenience function. LVal's base must be a call to an alloc_size
+/// function.
+static bool getBytesReturnedByAllocSizeCall(const ASTContext &Ctx,
+ const LValue &LVal,
+ llvm::APInt &Result) {
+ assert(isBaseAnAllocSizeCall(LVal.getLValueBase()) &&
+ "Can't get the size of a non alloc_size function");
+ const auto *Base = LVal.getLValueBase().get<const Expr *>();
+ const CallExpr *CE = tryUnwrapAllocSizeCall(Base);
+ return getBytesReturnedByAllocSizeCall(Ctx, CE, Result);
+}
+
+/// \brief Attempts to evaluate the given LValueBase as the result of a call to
+/// a function with the alloc_size attribute. If it was possible to do so, this
+/// function will return true, make Result's Base point to said function call,
+/// and mark Result's Base as invalid.
+static bool evaluateLValueAsAllocSize(EvalInfo &Info, APValue::LValueBase Base,
+ LValue &Result) {
+ if (Base.isNull())
+ return false;
+
+ // Because we do no form of static analysis, we only support const variables.
+ //
+ // Additionally, we can't support parameters, nor can we support static
+ // variables (in the latter case, use-before-assign isn't UB; in the former,
+ // we have no clue what they'll be assigned to).
+ const auto *VD =
+ dyn_cast_or_null<VarDecl>(Base.dyn_cast<const ValueDecl *>());
+ if (!VD || !VD->isLocalVarDecl() || !VD->getType().isConstQualified())
+ return false;
+
+ const Expr *Init = VD->getAnyInitializer();
+ if (!Init)
+ return false;
+
+ const Expr *E = Init->IgnoreParens();
+ if (!tryUnwrapAllocSizeCall(E))
+ return false;
+
+ // Store E instead of E unwrapped so that the type of the LValue's base is
+ // what the user wanted.
+ Result.setInvalid(E);
+
+ QualType Pointee = E->getType()->castAs<PointerType>()->getPointeeType();
+ Result.addUnsizedArray(Info, Pointee);
+ return true;
+}
+
namespace {
class PointerExprEvaluator
: public ExprEvaluatorBase<PointerExprEvaluator> {
LValue &Result;
+ bool InvalidBaseOK;
bool Success(const Expr *E) {
Result.set(E);
return true;
}
+
+ bool evaluateLValue(const Expr *E, LValue &Result) {
+ return EvaluateLValue(E, Result, Info, InvalidBaseOK);
+ }
+
+ bool evaluatePointer(const Expr *E, LValue &Result) {
+ return EvaluatePointer(E, Result, Info, InvalidBaseOK);
+ }
+
+ bool visitNonBuiltinCallExpr(const CallExpr *E);
public:
- PointerExprEvaluator(EvalInfo &info, LValue &Result)
- : ExprEvaluatorBaseTy(info), Result(Result) {}
+ PointerExprEvaluator(EvalInfo &info, LValue &Result, bool InvalidBaseOK)
+ : ExprEvaluatorBaseTy(info), Result(Result),
+ InvalidBaseOK(InvalidBaseOK) {}
bool Success(const APValue &V, const Expr *E) {
Result.setFrom(Info.Ctx, V);
return true;
}
bool ZeroInitialization(const Expr *E) {
- return Success((Expr*)nullptr);
+ auto Offset = Info.Ctx.getTargetNullPointerValue(E->getType());
+ Result.set((Expr*)nullptr, 0, false, true, Offset);
+ return true;
}
bool VisitBinaryOperator(const BinaryOperator *E);
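getBytesReturnedByAllocSizeCall and evaluateLValueAsAllocSize above let the evaluator compute how many bytes an alloc_size call returns, which is what feeds __builtin_object_size for pointers initialized from such calls. A hedged sketch (my_calloc is an illustrative declaration, not from this patch; note the variable must be a const-qualified local, as evaluateLValueAsAllocSize requires):

// Hypothetical: total size is the product of arguments 1 and 2.
void *my_calloc(unsigned long nelems, unsigned long elemsize)
    __attribute__((alloc_size(1, 2)));

unsigned long how_big(void) {
  void *const p = my_calloc(4, 8);
  // With this change the front end can fold this to 32.
  return __builtin_object_size(p, 0);
}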
@@ -5032,6 +5391,7 @@ public:
bool VisitAddrLabelExpr(const AddrLabelExpr *E)
{ return Success(E); }
bool VisitCallExpr(const CallExpr *E);
+ bool VisitBuiltinCallExpr(const CallExpr *E, unsigned BuiltinOp);
bool VisitBlockExpr(const BlockExpr *E) {
if (!E->getBlockDecl()->hasCaptures())
return Success(E);
@@ -5056,9 +5416,10 @@ public:
};
} // end anonymous namespace
-static bool EvaluatePointer(const Expr* E, LValue& Result, EvalInfo &Info) {
+static bool EvaluatePointer(const Expr* E, LValue& Result, EvalInfo &Info,
+ bool InvalidBaseOK) {
assert(E->isRValue() && E->getType()->hasPointerRepresentation());
- return PointerExprEvaluator(Info, Result).Visit(E);
+ return PointerExprEvaluator(Info, Result, InvalidBaseOK).Visit(E);
}
bool PointerExprEvaluator::VisitBinaryOperator(const BinaryOperator *E) {
@@ -5071,7 +5432,7 @@ bool PointerExprEvaluator::VisitBinaryOperator(const BinaryOperator *E) {
if (IExp->getType()->isPointerType())
std::swap(PExp, IExp);
- bool EvalPtrOK = EvaluatePointer(PExp, Result, Info);
+ bool EvalPtrOK = evaluatePointer(PExp, Result);
if (!EvalPtrOK && !Info.noteFailure())
return false;
@@ -5089,7 +5450,7 @@ bool PointerExprEvaluator::VisitBinaryOperator(const BinaryOperator *E) {
}
bool PointerExprEvaluator::VisitUnaryAddrOf(const UnaryOperator *E) {
- return EvaluateLValue(E->getSubExpr(), Result, Info);
+ return evaluateLValue(E->getSubExpr(), Result);
}
bool PointerExprEvaluator::VisitCastExpr(const CastExpr* E) {
@@ -5117,11 +5478,13 @@ bool PointerExprEvaluator::VisitCastExpr(const CastExpr* E) {
else
CCEDiag(E, diag::note_constexpr_invalid_cast) << 2;
}
+ if (E->getCastKind() == CK_AddressSpaceConversion && Result.IsNullPtr)
+ ZeroInitialization(E);
return true;
case CK_DerivedToBase:
case CK_UncheckedDerivedToBase:
- if (!EvaluatePointer(E->getSubExpr(), Result, Info))
+ if (!evaluatePointer(E->getSubExpr(), Result))
return false;
if (!Result.Base && Result.Offset.isZero())
return true;
@@ -5158,6 +5521,7 @@ bool PointerExprEvaluator::VisitCastExpr(const CastExpr* E) {
Result.Offset = CharUnits::fromQuantity(N);
Result.CallIndex = 0;
Result.Designator.setInvalid();
+ Result.IsNullPtr = false;
return true;
} else {
// Cast is of an lvalue, no need to change value.
@@ -5167,7 +5531,7 @@ bool PointerExprEvaluator::VisitCastExpr(const CastExpr* E) {
}
case CK_ArrayToPointerDecay:
if (SubExpr->isGLValue()) {
- if (!EvaluateLValue(SubExpr, Result, Info))
+ if (!evaluateLValue(SubExpr, Result))
return false;
} else {
Result.set(SubExpr, Info.CurrentCall->Index);
@@ -5184,7 +5548,21 @@ bool PointerExprEvaluator::VisitCastExpr(const CastExpr* E) {
return true;
case CK_FunctionToPointerDecay:
- return EvaluateLValue(SubExpr, Result, Info);
+ return evaluateLValue(SubExpr, Result);
+
+ case CK_LValueToRValue: {
+ LValue LVal;
+ if (!evaluateLValue(E->getSubExpr(), LVal))
+ return false;
+
+ APValue RVal;
+ // Note, we use the subexpression's type in order to retain cv-qualifiers.
+ if (!handleLValueToRValueConversion(Info, E, E->getSubExpr()->getType(),
+ LVal, RVal))
+ return InvalidBaseOK &&
+ evaluateLValueAsAllocSize(Info, LVal.Base, Result);
+ return Success(RVal, E);
+ }
}
return ExprEvaluatorBaseTy::VisitCastExpr(E);
@@ -5222,18 +5600,40 @@ static CharUnits GetAlignOfExpr(EvalInfo &Info, const Expr *E) {
return GetAlignOfType(Info, E->getType());
}
+// To be clear: this happily visits unsupported builtins. Better name welcomed.
+bool PointerExprEvaluator::visitNonBuiltinCallExpr(const CallExpr *E) {
+ if (ExprEvaluatorBaseTy::VisitCallExpr(E))
+ return true;
+
+ if (!(InvalidBaseOK && getAllocSizeAttr(E)))
+ return false;
+
+ Result.setInvalid(E);
+ QualType PointeeTy = E->getType()->castAs<PointerType>()->getPointeeType();
+ Result.addUnsizedArray(Info, PointeeTy);
+ return true;
+}
+
bool PointerExprEvaluator::VisitCallExpr(const CallExpr *E) {
if (IsStringLiteralCall(E))
return Success(E);
- switch (E->getBuiltinCallee()) {
+ if (unsigned BuiltinOp = E->getBuiltinCallee())
+ return VisitBuiltinCallExpr(E, BuiltinOp);
+
+ return visitNonBuiltinCallExpr(E);
+}
+
+bool PointerExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
+ unsigned BuiltinOp) {
+ switch (BuiltinOp) {
case Builtin::BI__builtin_addressof:
- return EvaluateLValue(E->getArg(0), Result, Info);
+ return evaluateLValue(E->getArg(0), Result);
case Builtin::BI__builtin_assume_aligned: {
// We need to be very careful here because: if the pointer does not have the
// asserted alignment, then the behavior is undefined, and undefined
// behavior is non-constant.
- if (!EvaluatePointer(E->getArg(0), Result, Info))
+ if (!evaluatePointer(E->getArg(0), Result))
return false;
LValue OffsetResult(Result);
@@ -5264,8 +5664,8 @@ bool PointerExprEvaluator::VisitCallExpr(const CallExpr *E) {
if (BaseAlignment < Align) {
Result.Designator.setInvalid();
- // FIXME: Quantities here cast to integers because the plural modifier
- // does not work on APSInts yet.
+ // FIXME: Quantities here cast to integers because the plural modifier
+ // does not work on APSInts yet.
CCEDiag(E->getArg(0),
diag::note_constexpr_baa_insufficient_alignment) << 0
<< (int) BaseAlignment.getQuantity()
@@ -5294,8 +5694,95 @@ bool PointerExprEvaluator::VisitCallExpr(const CallExpr *E) {
return true;
}
+
+ case Builtin::BIstrchr:
+ case Builtin::BIwcschr:
+ case Builtin::BImemchr:
+ case Builtin::BIwmemchr:
+ if (Info.getLangOpts().CPlusPlus11)
+ Info.CCEDiag(E, diag::note_constexpr_invalid_function)
+ << /*isConstexpr*/0 << /*isConstructor*/0
+ << (std::string("'") + Info.Ctx.BuiltinInfo.getName(BuiltinOp) + "'");
+ else
+ Info.CCEDiag(E, diag::note_invalid_subexpr_in_const_expr);
+ // Fall through.
+ case Builtin::BI__builtin_strchr:
+ case Builtin::BI__builtin_wcschr:
+ case Builtin::BI__builtin_memchr:
+ case Builtin::BI__builtin_char_memchr:
+ case Builtin::BI__builtin_wmemchr: {
+ if (!Visit(E->getArg(0)))
+ return false;
+ APSInt Desired;
+ if (!EvaluateInteger(E->getArg(1), Desired, Info))
+ return false;
+ uint64_t MaxLength = uint64_t(-1);
+ if (BuiltinOp != Builtin::BIstrchr &&
+ BuiltinOp != Builtin::BIwcschr &&
+ BuiltinOp != Builtin::BI__builtin_strchr &&
+ BuiltinOp != Builtin::BI__builtin_wcschr) {
+ APSInt N;
+ if (!EvaluateInteger(E->getArg(2), N, Info))
+ return false;
+ MaxLength = N.getExtValue();
+ }
+
+ QualType CharTy = E->getArg(0)->getType()->getPointeeType();
+
+ // Figure out what value we're actually looking for (after converting to
+ // the corresponding unsigned type if necessary).
+ uint64_t DesiredVal;
+ bool StopAtNull = false;
+ switch (BuiltinOp) {
+ case Builtin::BIstrchr:
+ case Builtin::BI__builtin_strchr:
+ // strchr compares directly to the passed integer, and therefore
+ // always fails if given an int that is not a char.
+ if (!APSInt::isSameValue(HandleIntToIntCast(Info, E, CharTy,
+ E->getArg(1)->getType(),
+ Desired),
+ Desired))
+ return ZeroInitialization(E);
+ StopAtNull = true;
+ // Fall through.
+ case Builtin::BImemchr:
+ case Builtin::BI__builtin_memchr:
+ case Builtin::BI__builtin_char_memchr:
+ // memchr compares by converting both sides to unsigned char. That's also
+ // correct for strchr if we get this far (to cope with plain char being
+ // unsigned in the strchr case).
+ DesiredVal = Desired.trunc(Info.Ctx.getCharWidth()).getZExtValue();
+ break;
+
+ case Builtin::BIwcschr:
+ case Builtin::BI__builtin_wcschr:
+ StopAtNull = true;
+ // Fall through.
+ case Builtin::BIwmemchr:
+ case Builtin::BI__builtin_wmemchr:
+ // wcschr and wmemchr are given a wchar_t to look for. Just use it.
+ DesiredVal = Desired.getZExtValue();
+ break;
+ }
+
+ for (; MaxLength; --MaxLength) {
+ APValue Char;
+ if (!handleLValueToRValueConversion(Info, E, CharTy, Result, Char) ||
+ !Char.isInt())
+ return false;
+ if (Char.getInt().getZExtValue() == DesiredVal)
+ return true;
+ if (StopAtNull && !Char.getInt())
+ break;
+ if (!HandleLValueArrayAdjustment(Info, E, Result, CharTy, 1))
+ return false;
+ }
+ // Not found: return nullptr.
+ return ZeroInitialization(E);
+ }
+
default:
- return ExprEvaluatorBaseTy::VisitCallExpr(E);
+ return visitNonBuiltinCallExpr(E);
}
}
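The new builtin handling above makes the strchr/memchr family of builtins usable in constant expressions (with a pedantic note for the plain library names in C++11 and later). A couple of expressions of the kind that can now fold:

static_assert(*__builtin_strchr("constexpr", 'x') == 'x', "");
static_assert(__builtin_memchr("abc", 'z', 3) == nullptr, "");  // not found -> null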
@@ -5418,6 +5905,7 @@ namespace {
bool VisitCXXConstructExpr(const CXXConstructExpr *E) {
return VisitCXXConstructExpr(E, E->getType());
}
+ bool VisitLambdaExpr(const LambdaExpr *E);
bool VisitCXXInheritedCtorInitExpr(const CXXInheritedCtorInitExpr *E);
bool VisitCXXConstructExpr(const CXXConstructExpr *E, QualType T);
bool VisitCXXStdInitializerListExpr(const CXXStdInitializerListExpr *E);
@@ -5535,6 +6023,9 @@ bool RecordExprEvaluator::VisitCastExpr(const CastExpr *E) {
}
bool RecordExprEvaluator::VisitInitListExpr(const InitListExpr *E) {
+ if (E->isTransparent())
+ return Visit(E->getInit(0));
+
const RecordDecl *RD = E->getType()->castAs<RecordType>()->getDecl();
if (RD->isInvalidDecl()) return false;
const ASTRecordLayout &Layout = Info.Ctx.getASTRecordLayout(RD);
@@ -5749,6 +6240,21 @@ bool RecordExprEvaluator::VisitCXXStdInitializerListExpr(
return true;
}
+bool RecordExprEvaluator::VisitLambdaExpr(const LambdaExpr *E) {
+ const CXXRecordDecl *ClosureClass = E->getLambdaClass();
+ if (ClosureClass->isInvalidDecl()) return false;
+
+ if (Info.checkingPotentialConstantExpression()) return true;
+ if (E->capture_size()) {
+ Info.FFDiag(E, diag::note_unimplemented_constexpr_lambda_feature_ast)
+ << "can not evaluate lambda expressions with captures";
+ return false;
+ }
+ // FIXME: Implement captures.
+ Result = APValue(APValue::UninitStruct(), /*NumBases*/0, /*NumFields*/0);
+ return true;
+}
+
static bool EvaluateRecord(const Expr *E, const LValue &This,
APValue &Result, EvalInfo &Info) {
assert(E->isRValue() && E->getType()->isRecordType() &&
@@ -5768,7 +6274,7 @@ class TemporaryExprEvaluator
: public LValueExprEvaluatorBase<TemporaryExprEvaluator> {
public:
TemporaryExprEvaluator(EvalInfo &Info, LValue &Result) :
- LValueExprEvaluatorBaseTy(Info, Result) {}
+ LValueExprEvaluatorBaseTy(Info, Result, false) {}
/// Visit an expression which constructs the value of this temporary.
bool VisitConstructExpr(const Expr *E) {
@@ -5798,6 +6304,9 @@ public:
bool VisitCXXStdInitializerListExpr(const CXXStdInitializerListExpr *E) {
return VisitConstructExpr(E);
}
+ bool VisitLambdaExpr(const LambdaExpr *E) {
+ return VisitConstructExpr(E);
+ }
};
} // end anonymous namespace
@@ -5890,7 +6399,7 @@ bool VectorExprEvaluator::VisitCastExpr(const CastExpr *E) {
if (EltTy->isRealFloatingType()) {
const llvm::fltSemantics &Sem = Info.Ctx.getFloatTypeSemantics(EltTy);
unsigned FloatEltSize = EltSize;
- if (&Sem == &APFloat::x87DoubleExtended)
+ if (&Sem == &APFloat::x87DoubleExtended())
FloatEltSize = 80;
for (unsigned i = 0; i < NElts; i++) {
llvm::APInt Elt;
@@ -6030,6 +6539,7 @@ namespace {
return handleCallExpr(E, Result, &This);
}
bool VisitInitListExpr(const InitListExpr *E);
+ bool VisitArrayInitLoopExpr(const ArrayInitLoopExpr *E);
bool VisitCXXConstructExpr(const CXXConstructExpr *E);
bool VisitCXXConstructExpr(const CXXConstructExpr *E,
const LValue &Subobject,
@@ -6112,6 +6622,35 @@ bool ArrayExprEvaluator::VisitInitListExpr(const InitListExpr *E) {
FillerExpr) && Success;
}
+bool ArrayExprEvaluator::VisitArrayInitLoopExpr(const ArrayInitLoopExpr *E) {
+ if (E->getCommonExpr() &&
+ !Evaluate(Info.CurrentCall->createTemporary(E->getCommonExpr(), false),
+ Info, E->getCommonExpr()->getSourceExpr()))
+ return false;
+
+ auto *CAT = cast<ConstantArrayType>(E->getType()->castAsArrayTypeUnsafe());
+
+ uint64_t Elements = CAT->getSize().getZExtValue();
+ Result = APValue(APValue::UninitArray(), Elements, Elements);
+
+ LValue Subobject = This;
+ Subobject.addArray(Info, E, CAT);
+
+ bool Success = true;
+ for (EvalInfo::ArrayInitLoopIndex Index(Info); Index != Elements; ++Index) {
+ if (!EvaluateInPlace(Result.getArrayInitializedElt(Index),
+ Info, Subobject, E->getSubExpr()) ||
+ !HandleLValueArrayAdjustment(Info, E, Subobject,
+ CAT->getElementType(), 1)) {
+ if (!Info.noteFailure())
+ return false;
+ Success = false;
+ }
+ }
+
+ return Success;
+}
+
bool ArrayExprEvaluator::VisitCXXConstructExpr(const CXXConstructExpr *E) {
return VisitCXXConstructExpr(E, This, &Result, E->getType());
}
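VisitArrayInitLoopExpr above evaluates the element-by-element loop the AST uses when an array must be initialized one element at a time. A hedged sketch of one place such a node arises, the implicitly-defined copy constructor of a class whose array member has a non-trivial (but constexpr) element copy:

struct Elem {
  int v;
  constexpr Elem(int v = 0) : v(v) {}
  constexpr Elem(const Elem &o) : v(o.v) {}  // non-trivial copy
};
struct Holder { Elem arr[3]; };

constexpr Holder a{{Elem(1), Elem(2), Elem(3)}};
constexpr Holder b = a;              // the implicit copy ctor copies 'arr'
static_assert(b.arr[1].v == 2, "");  // element by element, via the loop above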
@@ -6252,6 +6791,7 @@ public:
}
bool VisitCallExpr(const CallExpr *E);
+ bool VisitBuiltinCallExpr(const CallExpr *E, unsigned BuiltinOp);
bool VisitBinaryOperator(const BinaryOperator *E);
bool VisitOffsetOfExpr(const OffsetOfExpr *E);
bool VisitUnaryOperator(const UnaryOperator *E);
@@ -6266,6 +6806,16 @@ public:
bool VisitObjCBoolLiteralExpr(const ObjCBoolLiteralExpr *E) {
return Success(E->getValue(), E);
}
+
+ bool VisitArrayInitIndexExpr(const ArrayInitIndexExpr *E) {
+ if (Info.ArrayInitIndex == uint64_t(-1)) {
+ // We were asked to evaluate this subexpression independent of the
+ // enclosing ArrayInitLoopExpr. We can't do that.
+ Info.FFDiag(E);
+ return false;
+ }
+ return Success(Info.ArrayInitIndex, E);
+ }
// Note, GNU defines __null as an integer, not a pointer.
bool VisitGNUNullExpr(const GNUNullExpr *E) {
@@ -6290,8 +6840,6 @@ public:
bool VisitCXXNoexceptExpr(const CXXNoexceptExpr *E);
bool VisitSizeOfPackExpr(const SizeOfPackExpr *E);
-private:
- bool TryEvaluateBuiltinObjectSize(const CallExpr *E, unsigned Type);
// FIXME: Missing: array subscript of vector, member of vector
};
} // end anonymous namespace
@@ -6563,7 +7111,7 @@ static QualType getObjectType(APValue::LValueBase B) {
}
/// A more selective version of E->IgnoreParenCasts for
-/// TryEvaluateBuiltinObjectSize. This ignores some casts/parens that serve only
+/// tryEvaluateBuiltinObjectSize. This ignores some casts/parens that serve only
/// to change the type of E.
/// Ex. For E = `(short*)((char*)(&foo))`, returns `&foo`
///
@@ -6630,82 +7178,197 @@ static bool isDesignatorAtObjectEnd(const ASTContext &Ctx, const LValue &LVal) {
}
}
+ unsigned I = 0;
QualType BaseType = getType(Base);
- for (int I = 0, E = LVal.Designator.Entries.size(); I != E; ++I) {
+ if (LVal.Designator.FirstEntryIsAnUnsizedArray) {
+ assert(isBaseAnAllocSizeCall(Base) &&
+ "Unsized array in non-alloc_size call?");
+ // If this is an alloc_size base, we should ignore the initial array index
+ ++I;
+ BaseType = BaseType->castAs<PointerType>()->getPointeeType();
+ }
+
+ for (unsigned E = LVal.Designator.Entries.size(); I != E; ++I) {
+ const auto &Entry = LVal.Designator.Entries[I];
if (BaseType->isArrayType()) {
// Because __builtin_object_size treats arrays as objects, we can ignore
// the index iff this is the last array in the Designator.
if (I + 1 == E)
return true;
- auto *CAT = cast<ConstantArrayType>(Ctx.getAsArrayType(BaseType));
- uint64_t Index = LVal.Designator.Entries[I].ArrayIndex;
+ const auto *CAT = cast<ConstantArrayType>(Ctx.getAsArrayType(BaseType));
+ uint64_t Index = Entry.ArrayIndex;
if (Index + 1 != CAT->getSize())
return false;
BaseType = CAT->getElementType();
} else if (BaseType->isAnyComplexType()) {
- auto *CT = BaseType->castAs<ComplexType>();
- uint64_t Index = LVal.Designator.Entries[I].ArrayIndex;
+ const auto *CT = BaseType->castAs<ComplexType>();
+ uint64_t Index = Entry.ArrayIndex;
if (Index != 1)
return false;
BaseType = CT->getElementType();
- } else if (auto *FD = getAsField(LVal.Designator.Entries[I])) {
+ } else if (auto *FD = getAsField(Entry)) {
bool Invalid;
if (!IsLastOrInvalidFieldDecl(FD, Invalid))
return Invalid;
BaseType = FD->getType();
} else {
- assert(getAsBaseClass(LVal.Designator.Entries[I]) != nullptr &&
- "Expecting cast to a base class");
+ assert(getAsBaseClass(Entry) && "Expecting cast to a base class");
return false;
}
}
return true;
}
-/// Tests to see if the LValue has a designator (that isn't necessarily valid).
+/// Tests to see if the LValue has a user-specified designator (that isn't
+/// necessarily valid). Note that this always returns 'true' if the LValue has
+/// an unsized array as its first designator entry, because there's currently no
+/// way to tell if the user typed *foo or foo[0].
static bool refersToCompleteObject(const LValue &LVal) {
- if (LVal.Designator.Invalid || !LVal.Designator.Entries.empty())
+ if (LVal.Designator.Invalid)
return false;
+ if (!LVal.Designator.Entries.empty())
+ return LVal.Designator.isMostDerivedAnUnsizedArray();
+
if (!LVal.InvalidBase)
return true;
- auto *E = LVal.Base.dyn_cast<const Expr *>();
- (void)E;
- assert(E != nullptr && isa<MemberExpr>(E));
- return false;
+ // If `E` is a MemberExpr, then the first part of the designator is hiding in
+ // the LValueBase.
+ const auto *E = LVal.Base.dyn_cast<const Expr *>();
+ return !E || !isa<MemberExpr>(E);
+}
+
+/// Attempts to detect a user writing into a piece of memory whose size cannot
+/// be determined from type information alone.
+static bool isUserWritingOffTheEnd(const ASTContext &Ctx, const LValue &LVal) {
+ const SubobjectDesignator &Designator = LVal.Designator;
+ // Notes:
+ // - Users can only write off of the end when we have an invalid base. Invalid
+ // bases imply we don't know where the memory came from.
+ // - We used to be a bit more aggressive here; we'd only be conservative if
+ // the array at the end was flexible, or if it had 0 or 1 elements. This
+ // broke some common standard library extensions (PR30346), but was
+ // otherwise seemingly fine. It may be useful to reintroduce this behavior
+ // with some sort of whitelist. OTOH, it seems that GCC is always
+ // conservative with the last element in structs (if it's an array), so our
+ // current behavior is more compatible than a whitelisting approach would
+ // be.
+ return LVal.InvalidBase &&
+ Designator.Entries.size() == Designator.MostDerivedPathLength &&
+ Designator.MostDerivedIsArrayElement &&
+ isDesignatorAtObjectEnd(Ctx, LVal);
+}
+
+/// Converts the given APInt to CharUnits, assuming the APInt is unsigned.
+/// Fails if the conversion would cause loss of precision.
+static bool convertUnsignedAPIntToCharUnits(const llvm::APInt &Int,
+ CharUnits &Result) {
+ auto CharUnitsMax = std::numeric_limits<CharUnits::QuantityType>::max();
+ if (Int.ugt(CharUnitsMax))
+ return false;
+ Result = CharUnits::fromQuantity(Int.getZExtValue());
+ return true;
}
-/// Tries to evaluate the __builtin_object_size for @p E. If successful, returns
-/// true and stores the result in @p Size.
+/// Helper for tryEvaluateBuiltinObjectSize -- Given an LValue, this will
+/// determine how many bytes exist from the beginning of the object to either
+/// the end of the current subobject, or the end of the object itself, depending
+/// on what the LValue looks like + the value of Type.
///
-/// If @p WasError is non-null, this will report whether the failure to evaluate
-/// is to be treated as an Error in IntExprEvaluator.
-static bool tryEvaluateBuiltinObjectSize(const Expr *E, unsigned Type,
- EvalInfo &Info, uint64_t &Size,
- bool *WasError = nullptr) {
- if (WasError != nullptr)
- *WasError = false;
-
- auto Error = [&](const Expr *E) {
- if (WasError != nullptr)
- *WasError = true;
- return false;
- };
+/// If this returns false, the value of EndOffset is undefined.
+static bool determineEndOffset(EvalInfo &Info, SourceLocation ExprLoc,
+ unsigned Type, const LValue &LVal,
+ CharUnits &EndOffset) {
+ bool DetermineForCompleteObject = refersToCompleteObject(LVal);
- auto Success = [&](uint64_t S, const Expr *E) {
- Size = S;
- return true;
+ auto CheckedHandleSizeof = [&](QualType Ty, CharUnits &Result) {
+ if (Ty.isNull() || Ty->isIncompleteType() || Ty->isFunctionType())
+ return false;
+ return HandleSizeof(Info, ExprLoc, Ty, Result);
};
+ // We want to evaluate the size of the entire object. This is a valid fallback
+ // for when Type=1 and the designator is invalid, because we're asked for an
+ // upper-bound.
+ if (!(Type & 1) || LVal.Designator.Invalid || DetermineForCompleteObject) {
+ // Type=3 wants a lower bound, so we can't fall back to this.
+ if (Type == 3 && !DetermineForCompleteObject)
+ return false;
+
+ llvm::APInt APEndOffset;
+ if (isBaseAnAllocSizeCall(LVal.getLValueBase()) &&
+ getBytesReturnedByAllocSizeCall(Info.Ctx, LVal, APEndOffset))
+ return convertUnsignedAPIntToCharUnits(APEndOffset, EndOffset);
+
+ if (LVal.InvalidBase)
+ return false;
+
+ QualType BaseTy = getObjectType(LVal.getLValueBase());
+ return CheckedHandleSizeof(BaseTy, EndOffset);
+ }
+
+ // We want to evaluate the size of a subobject.
+ const SubobjectDesignator &Designator = LVal.Designator;
+
+ // The following is a moderately common idiom in C:
+ //
+ // struct Foo { int a; char c[1]; };
+ // struct Foo *F = (struct Foo *)malloc(sizeof(struct Foo) + strlen(Bar));
+ // strcpy(&F->c[0], Bar);
+ //
+ // In order to not break too much legacy code, we need to support it.
+ if (isUserWritingOffTheEnd(Info.Ctx, LVal)) {
+ // If we can resolve this to an alloc_size call, we can hand that back,
+ // because we know for certain how many bytes there are to write to.
+ llvm::APInt APEndOffset;
+ if (isBaseAnAllocSizeCall(LVal.getLValueBase()) &&
+ getBytesReturnedByAllocSizeCall(Info.Ctx, LVal, APEndOffset))
+ return convertUnsignedAPIntToCharUnits(APEndOffset, EndOffset);
+
+ // If we cannot determine the size of the initial allocation, then we can't
+ // give an accurate upper-bound. However, we are still able to give
+ // conservative lower-bounds for Type=3.
+ if (Type == 1)
+ return false;
+ }
+
+ CharUnits BytesPerElem;
+ if (!CheckedHandleSizeof(Designator.MostDerivedType, BytesPerElem))
+ return false;
+
+ // According to the GCC documentation, we want the size of the subobject
+ // denoted by the pointer. But that's not quite right -- what we actually
+ // want is the size of the immediately-enclosing array, if there is one.
+ int64_t ElemsRemaining;
+ if (Designator.MostDerivedIsArrayElement &&
+ Designator.Entries.size() == Designator.MostDerivedPathLength) {
+ uint64_t ArraySize = Designator.getMostDerivedArraySize();
+ uint64_t ArrayIndex = Designator.Entries.back().ArrayIndex;
+ ElemsRemaining = ArraySize <= ArrayIndex ? 0 : ArraySize - ArrayIndex;
+ } else {
+ ElemsRemaining = Designator.isOnePastTheEnd() ? 0 : 1;
+ }
+
+ EndOffset = LVal.getLValueOffset() + BytesPerElem * ElemsRemaining;
+ return true;
+}
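// Editorial illustration, not part of the upstream patch; it restates the
// example from the comment block this patch removes further below. Assuming
// only the type of *p is known:
//
//   extern struct X { char buff[32]; int a, b, c; } *p;
//   int a = __builtin_object_size(p->buff + 4, 3);  // lower bound: 28
//   int b = __builtin_object_size(p->buff + 4, 2);  // unknown base: 0
//
// Mode bit 1 requests the closest enclosing subobject rather than the whole
// object; mode bit 2 requests a lower bound (0 when nothing is known) instead
// of an upper bound (-1 when nothing is known).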
+
+/// \brief Tries to evaluate the __builtin_object_size for @p E. If successful,
+/// returns true and stores the result in @p Size.
+static bool tryEvaluateBuiltinObjectSize(const Expr *E, unsigned Type,
+ EvalInfo &Info, uint64_t &Size) {
// Determine the denoted object.
- LValue Base;
+ LValue LVal;
{
// The operand of __builtin_object_size is never evaluated for side-effects.
// If there are any, but we can determine the pointed-to object anyway, then
// ignore the side-effects.
SpeculativeEvaluationRAII SpeculativeEval(Info);
- FoldOffsetRAII Fold(Info, Type & 1);
+ FoldOffsetRAII Fold(Info);
if (E->isGLValue()) {
// It's possible for us to be given GLValues if we're called via
@@ -6713,118 +7376,41 @@ static bool tryEvaluateBuiltinObjectSize(const Expr *E, unsigned Type,
APValue RVal;
if (!EvaluateAsRValue(Info, E, RVal))
return false;
- Base.setFrom(Info.Ctx, RVal);
- } else if (!EvaluatePointer(ignorePointerCastsAndParens(E), Base, Info))
+ LVal.setFrom(Info.Ctx, RVal);
+ } else if (!EvaluatePointer(ignorePointerCastsAndParens(E), LVal, Info,
+ /*InvalidBaseOK=*/true))
return false;
}
- CharUnits BaseOffset = Base.getLValueOffset();
// If we point to before the start of the object, there are no accessible
// bytes.
- if (BaseOffset.isNegative())
- return Success(0, E);
-
- // In the case where we're not dealing with a subobject, we discard the
- // subobject bit.
- bool SubobjectOnly = (Type & 1) != 0 && !refersToCompleteObject(Base);
-
- // If Type & 1 is 0, we need to be able to statically guarantee that the bytes
- // exist. If we can't verify the base, then we can't do that.
- //
- // As a special case, we produce a valid object size for an unknown object
- // with a known designator if Type & 1 is 1. For instance:
- //
- // extern struct X { char buff[32]; int a, b, c; } *p;
- // int a = __builtin_object_size(p->buff + 4, 3); // returns 28
- // int b = __builtin_object_size(p->buff + 4, 2); // returns 0, not 40
- //
- // This matches GCC's behavior.
- if (Base.InvalidBase && !SubobjectOnly)
- return Error(E);
-
- // If we're not examining only the subobject, then we reset to a complete
- // object designator
- //
- // If Type is 1 and we've lost track of the subobject, just find the complete
- // object instead. (If Type is 3, that's not correct behavior and we should
- // return 0 instead.)
- LValue End = Base;
- if (!SubobjectOnly || (End.Designator.Invalid && Type == 1)) {
- QualType T = getObjectType(End.getLValueBase());
- if (T.isNull())
- End.Designator.setInvalid();
- else {
- End.Designator = SubobjectDesignator(T);
- End.Offset = CharUnits::Zero();
- }
+ if (LVal.getLValueOffset().isNegative()) {
+ Size = 0;
+ return true;
}
- // If it is not possible to determine which objects ptr points to at compile
- // time, __builtin_object_size should return (size_t) -1 for type 0 or 1
- // and (size_t) 0 for type 2 or 3.
- if (End.Designator.Invalid)
- return false;
-
- // According to the GCC documentation, we want the size of the subobject
- // denoted by the pointer. But that's not quite right -- what we actually
- // want is the size of the immediately-enclosing array, if there is one.
- int64_t AmountToAdd = 1;
- if (End.Designator.MostDerivedIsArrayElement &&
- End.Designator.Entries.size() == End.Designator.MostDerivedPathLength) {
- // We got a pointer to an array. Step to its end.
- AmountToAdd = End.Designator.MostDerivedArraySize -
- End.Designator.Entries.back().ArrayIndex;
- } else if (End.Designator.isOnePastTheEnd()) {
- // We're already pointing at the end of the object.
- AmountToAdd = 0;
- }
-
- QualType PointeeType = End.Designator.MostDerivedType;
- assert(!PointeeType.isNull());
- if (PointeeType->isIncompleteType() || PointeeType->isFunctionType())
- return Error(E);
-
- if (!HandleLValueArrayAdjustment(Info, E, End, End.Designator.MostDerivedType,
- AmountToAdd))
- return false;
-
- auto EndOffset = End.getLValueOffset();
-
- // The following is a moderately common idiom in C:
- //
- // struct Foo { int a; char c[1]; };
- // struct Foo *F = (struct Foo *)malloc(sizeof(struct Foo) + strlen(Bar));
- // strcpy(&F->c[0], Bar);
- //
- // So, if we see that we're examining a 1-length (or 0-length) array at the
- // end of a struct with an unknown base, we give up instead of breaking code
- // that behaves this way. Note that we only do this when Type=1, because
- // Type=3 is a lower bound, so answering conservatively is fine.
- if (End.InvalidBase && SubobjectOnly && Type == 1 &&
- End.Designator.Entries.size() == End.Designator.MostDerivedPathLength &&
- End.Designator.MostDerivedIsArrayElement &&
- End.Designator.MostDerivedArraySize < 2 &&
- isDesignatorAtObjectEnd(Info.Ctx, End))
+ CharUnits EndOffset;
+ if (!determineEndOffset(Info, E->getExprLoc(), Type, LVal, EndOffset))
return false;
- if (BaseOffset > EndOffset)
- return Success(0, E);
-
- return Success((EndOffset - BaseOffset).getQuantity(), E);
+ // If we've fallen outside of the end offset, just pretend there's nothing to
+ // write to/read from.
+ if (EndOffset <= LVal.getLValueOffset())
+ Size = 0;
+ else
+ Size = (EndOffset - LVal.getLValueOffset()).getQuantity();
+ return true;
}
-bool IntExprEvaluator::TryEvaluateBuiltinObjectSize(const CallExpr *E,
- unsigned Type) {
- uint64_t Size;
- bool WasError;
- if (::tryEvaluateBuiltinObjectSize(E->getArg(0), Type, Info, Size, &WasError))
- return Success(Size, E);
- if (WasError)
- return Error(E);
- return false;
+bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
+ if (unsigned BuiltinOp = E->getBuiltinCallee())
+ return VisitBuiltinCallExpr(E, BuiltinOp);
+
+ return ExprEvaluatorBaseTy::VisitCallExpr(E);
}
-bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
+bool IntExprEvaluator::VisitBuiltinCallExpr(const CallExpr *E,
+ unsigned BuiltinOp) {
switch (unsigned BuiltinOp = E->getBuiltinCallee()) {
default:
return ExprEvaluatorBaseTy::VisitCallExpr(E);
@@ -6835,8 +7421,9 @@ bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
E->getArg(1)->EvaluateKnownConstInt(Info.Ctx).getZExtValue();
assert(Type <= 3 && "unexpected type");
- if (TryEvaluateBuiltinObjectSize(E, Type))
- return true;
+ uint64_t Size;
+ if (tryEvaluateBuiltinObjectSize(E->getArg(0), Type, Info, Size))
+ return Success(Size, E);
if (E->getArg(0)->HasSideEffects(Info.Ctx))
return Success((Type & 2) ? 0 : -1, E);
@@ -6849,7 +7436,7 @@ bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
case EvalInfo::EM_ConstantFold:
case EvalInfo::EM_EvaluateForOverflow:
case EvalInfo::EM_IgnoreSideEffects:
- case EvalInfo::EM_DesignatorFold:
+ case EvalInfo::EM_OffsetFold:
// Leave it to IR generation.
return Error(E);
case EvalInfo::EM_ConstantExpressionUnevaluated:
@@ -6857,6 +7444,8 @@ bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
// Reduce it to a constant now.
return Success((Type & 2) ? 0 : -1, E);
}
+
+ llvm_unreachable("unexpected EvalMode");
}
case Builtin::BI__builtin_bswap16:
@@ -6990,20 +7579,25 @@ bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
}
case Builtin::BIstrlen:
+ case Builtin::BIwcslen:
// A call to strlen is not a constant expression.
if (Info.getLangOpts().CPlusPlus11)
Info.CCEDiag(E, diag::note_constexpr_invalid_function)
- << /*isConstexpr*/0 << /*isConstructor*/0 << "'strlen'";
+ << /*isConstexpr*/0 << /*isConstructor*/0
+ << (std::string("'") + Info.Ctx.BuiltinInfo.getName(BuiltinOp) + "'");
else
Info.CCEDiag(E, diag::note_invalid_subexpr_in_const_expr);
// Fall through.
- case Builtin::BI__builtin_strlen: {
+ case Builtin::BI__builtin_strlen:
+ case Builtin::BI__builtin_wcslen: {
// As an extension, we support __builtin_strlen() as a constant expression,
// and support folding strlen() to a constant.
LValue String;
if (!EvaluatePointer(E->getArg(0), String, Info))
return false;
+ QualType CharTy = E->getArg(0)->getType()->getPointeeType();
+
// Fast path: if it's a string literal, search the string value.
if (const StringLiteral *S = dyn_cast_or_null<StringLiteral>(
String.getLValueBase().dyn_cast<const Expr *>())) {
@@ -7012,7 +7606,9 @@ bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
StringRef Str = S->getBytes();
int64_t Off = String.Offset.getQuantity();
if (Off >= 0 && (uint64_t)Off <= (uint64_t)Str.size() &&
- S->getCharByteWidth() == 1) {
+ S->getCharByteWidth() == 1 &&
+ // FIXME: Add fast-path for wchar_t too.
+ Info.Ctx.hasSameUnqualifiedType(CharTy, Info.Ctx.CharTy)) {
Str = Str.substr(Off);
StringRef::size_type Pos = Str.find(0);
@@ -7026,7 +7622,6 @@ bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
}
// Slow path: scan the bytes of the string looking for the terminating 0.
- QualType CharTy = E->getArg(0)->getType()->getPointeeType();
for (uint64_t Strlen = 0; /**/; ++Strlen) {
APValue Char;
if (!handleLValueToRValueConversion(Info, E, CharTy, String, Char) ||
@@ -7039,6 +7634,66 @@ bool IntExprEvaluator::VisitCallExpr(const CallExpr *E) {
}
}
+ case Builtin::BIstrcmp:
+ case Builtin::BIwcscmp:
+ case Builtin::BIstrncmp:
+ case Builtin::BIwcsncmp:
+ case Builtin::BImemcmp:
+ case Builtin::BIwmemcmp:
+ // A call to one of these functions is not a constant expression.
+ if (Info.getLangOpts().CPlusPlus11)
+ Info.CCEDiag(E, diag::note_constexpr_invalid_function)
+ << /*isConstexpr*/0 << /*isConstructor*/0
+ << (std::string("'") + Info.Ctx.BuiltinInfo.getName(BuiltinOp) + "'");
+ else
+ Info.CCEDiag(E, diag::note_invalid_subexpr_in_const_expr);
+ // Fall through.
+ case Builtin::BI__builtin_strcmp:
+ case Builtin::BI__builtin_wcscmp:
+ case Builtin::BI__builtin_strncmp:
+ case Builtin::BI__builtin_wcsncmp:
+ case Builtin::BI__builtin_memcmp:
+ case Builtin::BI__builtin_wmemcmp: {
+ LValue String1, String2;
+ if (!EvaluatePointer(E->getArg(0), String1, Info) ||
+ !EvaluatePointer(E->getArg(1), String2, Info))
+ return false;
+
+ QualType CharTy = E->getArg(0)->getType()->getPointeeType();
+
+ uint64_t MaxLength = uint64_t(-1);
+ if (BuiltinOp != Builtin::BIstrcmp &&
+ BuiltinOp != Builtin::BIwcscmp &&
+ BuiltinOp != Builtin::BI__builtin_strcmp &&
+ BuiltinOp != Builtin::BI__builtin_wcscmp) {
+ APSInt N;
+ if (!EvaluateInteger(E->getArg(2), N, Info))
+ return false;
+ MaxLength = N.getExtValue();
+ }
+ bool StopAtNull = (BuiltinOp != Builtin::BImemcmp &&
+ BuiltinOp != Builtin::BIwmemcmp &&
+ BuiltinOp != Builtin::BI__builtin_memcmp &&
+ BuiltinOp != Builtin::BI__builtin_wmemcmp);
+ for (; MaxLength; --MaxLength) {
+ APValue Char1, Char2;
+ if (!handleLValueToRValueConversion(Info, E, CharTy, String1, Char1) ||
+ !handleLValueToRValueConversion(Info, E, CharTy, String2, Char2) ||
+ !Char1.isInt() || !Char2.isInt())
+ return false;
+ if (Char1.getInt() != Char2.getInt())
+ return Success(Char1.getInt() < Char2.getInt() ? -1 : 1, E);
+ if (StopAtNull && !Char1.getInt())
+ return Success(0, E);
+ assert(!(StopAtNull && !Char2.getInt()));
+ if (!HandleLValueArrayAdjustment(Info, E, String1, CharTy, 1) ||
+ !HandleLValueArrayAdjustment(Info, E, String2, CharTy, 1))
+ return false;
+ }
+ // We hit the strncmp / memcmp limit.
+ return Success(0, E);
+ }
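// Editorial illustration, not part of the upstream patch: a sketch of calls
// the new cases above can fold (exact diagnostics depend on the language mode):
//
//   static_assert(__builtin_strcmp("abc", "abd") == -1, "");
//   static_assert(__builtin_strcmp("ab\0x", "ab\0y") == 0, "");     // stops at NUL
//   static_assert(__builtin_memcmp("ab\0x", "ab\0y", 4) == -1, ""); // does not
//   static_assert(__builtin_wcslen(L"hi") == 2, "");
//
// The plain library names (strcmp, memcmp, wcslen, ...) are folded only as an
// extension and emit the note from the CCEDiag call above in C++11 and later.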
+
case Builtin::BI__atomic_always_lock_free:
case Builtin::BI__atomic_is_lock_free:
case Builtin::BI__c11_atomic_is_lock_free: {
@@ -7160,9 +7815,7 @@ class DataRecursiveIntBinOpEvaluator {
enum { AnyExprKind, BinOpKind, BinOpVisitedLHSKind } Kind;
Job() = default;
- Job(Job &&J)
- : E(J.E), LHSResult(J.LHSResult), Kind(J.Kind),
- SpecEvalRAII(std::move(J.SpecEvalRAII)) {}
+ Job(Job &&) = default;
void startSpeculativeEval(EvalInfo &Info) {
SpecEvalRAII = SpeculativeEvaluationRAII(Info);
@@ -8037,8 +8690,10 @@ bool IntExprEvaluator::VisitCastExpr(const CastExpr *E) {
case CK_IntegralComplexToFloatingComplex:
case CK_BuiltinFnToFnPtr:
case CK_ZeroToOCLEvent:
+ case CK_ZeroToOCLQueue:
case CK_NonAtomicToAtomic:
case CK_AddressSpaceConversion:
+ case CK_IntToOCLSampler:
llvm_unreachable("invalid cast kind for integral value");
case CK_BitCast:
@@ -8113,8 +8768,13 @@ bool IntExprEvaluator::VisitCastExpr(const CastExpr *E) {
return true;
}
- APSInt AsInt = Info.Ctx.MakeIntValue(LV.getLValueOffset().getQuantity(),
- SrcType);
+ uint64_t V;
+ if (LV.isNullPointer())
+ V = Info.Ctx.getTargetNullPointerValue(SrcType);
+ else
+ V = LV.getLValueOffset().getQuantity();
+
+ APSInt AsInt = Info.Ctx.MakeIntValue(V, SrcType);
return Success(HandleIntToIntCast(Info, E, DestType, SrcType, AsInt), E);
}
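// Editorial note, not part of the upstream patch: a pointer-to-integer cast of
// a null pointer now folds to the target's null pointer representation rather
// than to the lvalue offset. On typical targets that value is 0, so this is
// only observable where a null pointer is not the all-zero bit pattern (for
// example, certain GPU address spaces).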
@@ -8528,8 +9188,10 @@ bool ComplexExprEvaluator::VisitCastExpr(const CastExpr *E) {
case CK_CopyAndAutoreleaseBlockObject:
case CK_BuiltinFnToFnPtr:
case CK_ZeroToOCLEvent:
+ case CK_ZeroToOCLQueue:
case CK_NonAtomicToAtomic:
case CK_AddressSpaceConversion:
+ case CK_IntToOCLSampler:
llvm_unreachable("invalid cast kind for complex value");
case CK_LValueToRValue:
@@ -9341,6 +10003,8 @@ static ICEDiag CheckICE(const Expr* E, const ASTContext &Ctx) {
case Expr::CompoundLiteralExprClass:
case Expr::ExtVectorElementExprClass:
case Expr::DesignatedInitExprClass:
+ case Expr::ArrayInitLoopExprClass:
+ case Expr::ArrayInitIndexExprClass:
case Expr::NoInitExprClass:
case Expr::DesignatedInitUpdateExprClass:
case Expr::ImplicitValueInitExprClass:
@@ -9784,10 +10448,25 @@ bool Expr::isCXX11ConstantExpr(const ASTContext &Ctx, APValue *Result,
bool Expr::EvaluateWithSubstitution(APValue &Value, ASTContext &Ctx,
const FunctionDecl *Callee,
- ArrayRef<const Expr*> Args) const {
+ ArrayRef<const Expr*> Args,
+ const Expr *This) const {
Expr::EvalStatus Status;
EvalInfo Info(Ctx, Status, EvalInfo::EM_ConstantExpressionUnevaluated);
+ LValue ThisVal;
+ const LValue *ThisPtr = nullptr;
+ if (This) {
+#ifndef NDEBUG
+ auto *MD = dyn_cast<CXXMethodDecl>(Callee);
+ assert(MD && "Don't provide `this` for non-methods.");
+ assert(!MD->isStatic() && "Don't provide `this` for static methods.");
+#endif
+ if (EvaluateObjectArgument(Info, This, ThisVal))
+ ThisPtr = &ThisVal;
+ if (Info.EvalStatus.HasSideEffects)
+ return false;
+ }
+
ArgVector ArgValues(Args.size());
for (ArrayRef<const Expr*>::iterator I = Args.begin(), E = Args.end();
I != E; ++I) {
@@ -9800,7 +10479,7 @@ bool Expr::EvaluateWithSubstitution(APValue &Value, ASTContext &Ctx,
}
// Build fake call to Callee.
- CallStackFrame Frame(Info, Callee->getLocation(), Callee, /*This*/nullptr,
+ CallStackFrame Frame(Info, Callee->getLocation(), Callee, ThisPtr,
ArgValues.data());
return Evaluate(Value, Info, this) && !Info.EvalStatus.HasSideEffects;
}
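// Editorial note, not part of the upstream patch: the new `This` parameter
// lets a caller bind an object argument when building the fake call frame, so
// an expression attached to a non-static member function (for example, an
// attribute-style predicate evaluated against a particular call) can refer to
// `this`. Passing a null pointer preserves the old behavior; an object
// argument whose evaluation has side effects causes the substitution to fail.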
@@ -9877,5 +10556,5 @@ bool Expr::tryEvaluateObjectSize(uint64_t &Result, ASTContext &Ctx,
Expr::EvalStatus Status;
EvalInfo Info(Ctx, Status, EvalInfo::EM_ConstantFold);
- return ::tryEvaluateBuiltinObjectSize(this, Type, Info, Result);
+ return tryEvaluateBuiltinObjectSize(this, Type, Info, Result);
}