From 60b571e49a90d38697b3aca23020d9da42fc7d7f Mon Sep 17 00:00:00 2001
From: dim
Date: Sun, 2 Apr 2017 17:24:58 +0000
Subject: Update clang, llvm, lld, lldb, compiler-rt and libc++ to 4.0.0 release:

MFC r309142 (by emaste): Add WITH_LLD_AS_LD build knob

If set it installs LLD as /usr/bin/ld. LLD (as of version 3.9) is not capable of linking the world and kernel, but can self-host and link many substantial applications. GNU ld continues to be used for the world and kernel build, regardless of how this knob is set.

It is on by default for arm64, and off for all other CPU architectures.

Sponsored by: The FreeBSD Foundation

MFC r310840: Reapply 310775, now it also builds correctly if lldb is disabled: Move llvm-objdump from CLANG_EXTRAS to installed by default

We currently install three tools from binutils 2.17.50: as, ld, and objdump. Work is underway to migrate to a permissively-licensed tool-chain, with one goal being the retirement of binutils 2.17.50.

LLVM's llvm-objdump is intended to be compatible with GNU objdump although it is currently missing some options and may have formatting differences. Enable it by default for testing and further investigation. It may later be changed to install as /usr/bin/objdump, if it becomes a fully viable replacement.

Reviewed by: emaste
Differential Revision: https://reviews.freebsd.org/D8879

MFC r312855 (by emaste): Rename LLD_AS_LD to LLD_IS_LD, for consistency with CLANG_IS_CC

Reported by: Dan McGregor

MFC r313559 | glebius | 2017-02-10 18:34:48 +0100 (Fri, 10 Feb 2017) | 5 lines

Don't check struct rtentry on FreeBSD, it is an internal kernel structure. On other systems it may be API structure for SIOCADDRT/SIOCDELRT.

Reviewed by: emaste, dim

MFC r314152 (by jkim): Remove an assembler flag, which is redundant since r309124. The upstream took care of it by introducing a macro NO_EXEC_STACK_DIRECTIVE.

http://llvm.org/viewvc/llvm-project?rev=273500&view=rev

Reviewed by: dim

MFC r314564: Upgrade our copies of clang, llvm, lld, lldb, compiler-rt and libc++ to 4.0.0 (branches/release_40 296509). The release will follow soon.

Please note that from 3.5.0 onwards, clang, llvm and lldb require C++11 support to build; see UPDATING for more information.

Also note that as of 4.0.0, lld should be able to link the base system on amd64 and aarch64. See the WITH_LLD_IS_LD setting in src.conf(5). Though please be aware that this is work in progress.

Release notes for llvm, clang and lld will be available here:

Thanks to Ed Maste, Jan Beich, Antoine Brodin and Eric Fiselier for their help.

Relnotes: yes
Exp-run: antoine
PR: 215969, 216008

MFC r314708: For now, revert r287232 from upstream llvm trunk (by Daniil Fukalov):

[SCEV] limit recursion depth of CompareSCEVComplexity

Summary: CompareSCEVComplexity goes too deep (50+ on a quite a big unrolled loop) and runs almost infinite time. Added cache of "equal" SCEV pairs to earlier cutoff of further estimation. Recursion depth limit was also introduced as a parameter.

Reviewers: sanjoy
Subscribers: mzolotukhin, tstellarAMD, llvm-commits
Differential Revision: https://reviews.llvm.org/D26389

This commit is the cause of excessive compile times on skein_block.c (and possibly other files) during kernel builds on amd64. We never saw the problematic behavior described in this upstream commit, so for now it is better to revert it.
An upstream bug has been filed here: https://bugs.llvm.org/show_bug.cgi?id=32142 Reported by: mjg MFC r314795: Reapply r287232 from upstream llvm trunk (by Daniil Fukalov): [SCEV] limit recursion depth of CompareSCEVComplexity Summary: CompareSCEVComplexity goes too deep (50+ on a quite a big unrolled loop) and runs almost infinite time. Added cache of "equal" SCEV pairs to earlier cutoff of further estimation. Recursion depth limit was also introduced as a parameter. Reviewers: sanjoy Subscribers: mzolotukhin, tstellarAMD, llvm-commits Differential Revision: https://reviews.llvm.org/D26389 Pull in r296992 from upstream llvm trunk (by Sanjoy Das): [SCEV] Decrease the recursion threshold for CompareValueComplexity Fixes PR32142. r287232 accidentally increased the recursion threshold for CompareValueComplexity from 2 to 32. This change reverses that change by introducing a separate flag for CompareValueComplexity's threshold. The latter revision fixes the excessive compile times for skein_block.c. MFC r314907 | mmel | 2017-03-08 12:40:27 +0100 (Wed, 08 Mar 2017) | 7 lines Unbreak ARMv6 world. The new compiler_rt library imported with clang 4.0.0 have several fatal issues (non-functional __udivsi3 for example) with ARM specific instrict functions. As temporary workaround, until upstream solve these problems, disable all thumb[1][2] related feature. MFC r315016: Update clang, llvm, lld, lldb, compiler-rt and libc++ to 4.0.0 release. We were already very close to the last release candidate, so this is a pretty minor update. Relnotes: yes MFC r316005: Revert r314907, and pull in r298713 from upstream compiler-rt trunk (by Weiming Zhao): builtins: Select correct code fragments when compiling for Thumb1/Thum2/ARM ISA. Summary: Value of __ARM_ARCH_ISA_THUMB isn't based on the actual compilation mode (-mthumb, -marm), it reflect's capability of given CPU. Due to this: - use __tbumb__ and __thumb2__ insteand of __ARM_ARCH_ISA_THUMB - use '.thumb' directive consistently in all affected files - decorate all thumb functions using DEFINE_COMPILERRT_THUMB_FUNCTION() --------- Note: This patch doesn't fix broken Thumb1 variant of __udivsi3 ! Reviewers: weimingz, rengolin, compnerd Subscribers: aemerson, dim Differential Revision: https://reviews.llvm.org/D30938 Discussed with: mmel --- contrib/llvm/lib/Support/APFloat.cpp | 1088 ++++++++++++++++++++-------------- 1 file changed, 636 insertions(+), 452 deletions(-) (limited to 'contrib/llvm/lib/Support/APFloat.cpp') diff --git a/contrib/llvm/lib/Support/APFloat.cpp b/contrib/llvm/lib/Support/APFloat.cpp index f9370b8..4cfbbf8 100644 --- a/contrib/llvm/lib/Support/APFloat.cpp +++ b/contrib/llvm/lib/Support/APFloat.cpp @@ -19,8 +19,10 @@ #include "llvm/ADT/Hashing.h" #include "llvm/ADT/StringExtras.h" #include "llvm/ADT/StringRef.h" +#include "llvm/Support/Debug.h" #include "llvm/Support/ErrorHandling.h" #include "llvm/Support/MathExtras.h" +#include "llvm/Support/raw_ostream.h" #include #include @@ -39,16 +41,15 @@ using namespace llvm; static_assert(integerPartWidth % 4 == 0, "Part width must be divisible by 4!"); namespace llvm { - /* Represents floating point arithmetic semantics. */ struct fltSemantics { /* The largest E such that 2^E is representable; this matches the definition of IEEE 754. */ - APFloat::ExponentType maxExponent; + APFloatBase::ExponentType maxExponent; /* The smallest E such that 2^E is a normalized number; this matches the definition of IEEE 754. 
*/ - APFloat::ExponentType minExponent; + APFloatBase::ExponentType minExponent; /* Number of bits in the significand. This includes the integer bit. */ @@ -58,12 +59,12 @@ namespace llvm { unsigned int sizeInBits; }; - const fltSemantics APFloat::IEEEhalf = { 15, -14, 11, 16 }; - const fltSemantics APFloat::IEEEsingle = { 127, -126, 24, 32 }; - const fltSemantics APFloat::IEEEdouble = { 1023, -1022, 53, 64 }; - const fltSemantics APFloat::IEEEquad = { 16383, -16382, 113, 128 }; - const fltSemantics APFloat::x87DoubleExtended = { 16383, -16382, 64, 80 }; - const fltSemantics APFloat::Bogus = { 0, 0, 0, 0 }; + static const fltSemantics semIEEEhalf = {15, -14, 11, 16}; + static const fltSemantics semIEEEsingle = {127, -126, 24, 32}; + static const fltSemantics semIEEEdouble = {1023, -1022, 53, 64}; + static const fltSemantics semIEEEquad = {16383, -16382, 113, 128}; + static const fltSemantics semX87DoubleExtended = {16383, -16382, 64, 80}; + static const fltSemantics semBogus = {0, 0, 0, 0}; /* The PowerPC format consists of two doubles. It does not map cleanly onto the usual format above. It is approximated using twice the @@ -75,8 +76,45 @@ namespace llvm { compile-time arithmetic on PPC double-double numbers, it is not able to represent all possible values held by a PPC double-double number, for example: (long double) 1.0 + (long double) 0x1p-106 - Should this be replaced by a full emulation of PPC double-double? */ - const fltSemantics APFloat::PPCDoubleDouble = { 1023, -1022 + 53, 53 + 53, 128 }; + Should this be replaced by a full emulation of PPC double-double? + + Note: we need to make the value different from semBogus as otherwise + an unsafe optimization may collapse both values to a single address, + and we heavily rely on them having distinct addresses. */ + static const fltSemantics semPPCDoubleDouble = {-1, 0, 0, 0}; + + /* There are temporary semantics for the real PPCDoubleDouble implementation. + Currently, APFloat of PPCDoubleDouble holds one PPCDoubleDoubleImpl as the + high part of double double, and one IEEEdouble as the low part, so that + the old operations operate on PPCDoubleDoubleImpl, while the newly added + operations also populate the IEEEdouble. + + TODO: Once all functions support DoubleAPFloat mode, we'll change all + PPCDoubleDoubleImpl to IEEEdouble and remove PPCDoubleDoubleImpl. 
*/ + static const fltSemantics semPPCDoubleDoubleImpl = {1023, -1022 + 53, 53 + 53, + 128}; + + const fltSemantics &APFloatBase::IEEEhalf() { + return semIEEEhalf; + } + const fltSemantics &APFloatBase::IEEEsingle() { + return semIEEEsingle; + } + const fltSemantics &APFloatBase::IEEEdouble() { + return semIEEEdouble; + } + const fltSemantics &APFloatBase::IEEEquad() { + return semIEEEquad; + } + const fltSemantics &APFloatBase::x87DoubleExtended() { + return semX87DoubleExtended; + } + const fltSemantics &APFloatBase::Bogus() { + return semBogus; + } + const fltSemantics &APFloatBase::PPCDoubleDouble() { + return semPPCDoubleDouble; + } /* A tight upper bound on number of parts required to hold the value pow(5, power) is @@ -94,6 +132,24 @@ namespace llvm { const unsigned int maxPowerOfFiveExponent = maxExponent + maxPrecision - 1; const unsigned int maxPowerOfFiveParts = 2 + ((maxPowerOfFiveExponent * 815) / (351 * integerPartWidth)); + + unsigned int APFloatBase::semanticsPrecision(const fltSemantics &semantics) { + return semantics.precision; + } + APFloatBase::ExponentType + APFloatBase::semanticsMaxExponent(const fltSemantics &semantics) { + return semantics.maxExponent; + } + APFloatBase::ExponentType + APFloatBase::semanticsMinExponent(const fltSemantics &semantics) { + return semantics.minExponent; + } + unsigned int APFloatBase::semanticsSizeInBits(const fltSemantics &semantics) { + return semantics.sizeInBits; + } + + unsigned APFloatBase::getSizeInBits(const fltSemantics &Sem) { + return Sem.sizeInBits; } /* A bunch of private, handy routines. */ @@ -576,10 +632,9 @@ writeSignedDecimal (char *dst, int value) return dst; } +namespace detail { /* Constructors. */ -void -APFloat::initialize(const fltSemantics *ourSemantics) -{ +void IEEEFloat::initialize(const fltSemantics *ourSemantics) { unsigned int count; semantics = ourSemantics; @@ -588,16 +643,12 @@ APFloat::initialize(const fltSemantics *ourSemantics) significand.parts = new integerPart[count]; } -void -APFloat::freeSignificand() -{ +void IEEEFloat::freeSignificand() { if (needsCleanup()) delete [] significand.parts; } -void -APFloat::assign(const APFloat &rhs) -{ +void IEEEFloat::assign(const IEEEFloat &rhs) { assert(semantics == rhs.semantics); sign = rhs.sign; @@ -607,9 +658,7 @@ APFloat::assign(const APFloat &rhs) copySignificand(rhs); } -void -APFloat::copySignificand(const APFloat &rhs) -{ +void IEEEFloat::copySignificand(const IEEEFloat &rhs) { assert(isFiniteNonZero() || category == fcNaN); assert(rhs.partCount() >= partCount()); @@ -620,8 +669,7 @@ APFloat::copySignificand(const APFloat &rhs) /* Make this number a NaN, with an arbitrary but deterministic value for the significand. If double or longer, this is a signalling NaN, which may not be ideal. If float, this is QNaN(0). */ -void APFloat::makeNaN(bool SNaN, bool Negative, const APInt *fill) -{ +void IEEEFloat::makeNaN(bool SNaN, bool Negative, const APInt *fill) { category = fcNaN; sign = Negative; @@ -663,20 +711,11 @@ void APFloat::makeNaN(bool SNaN, bool Negative, const APInt *fill) // For x87 extended precision, we want to make a NaN, not a // pseudo-NaN. Maybe we should expose the ability to make // pseudo-NaNs? 
- if (semantics == &APFloat::x87DoubleExtended) + if (semantics == &semX87DoubleExtended) APInt::tcSetBit(significand, QNaNBit + 1); } -APFloat APFloat::makeNaN(const fltSemantics &Sem, bool SNaN, bool Negative, - const APInt *fill) { - APFloat value(Sem, uninitialized); - value.makeNaN(SNaN, Negative, fill); - return value; -} - -APFloat & -APFloat::operator=(const APFloat &rhs) -{ +IEEEFloat &IEEEFloat::operator=(const IEEEFloat &rhs) { if (this != &rhs) { if (semantics != rhs.semantics) { freeSignificand(); @@ -688,8 +727,7 @@ APFloat::operator=(const APFloat &rhs) return *this; } -APFloat & -APFloat::operator=(APFloat &&rhs) { +IEEEFloat &IEEEFloat::operator=(IEEEFloat &&rhs) { freeSignificand(); semantics = rhs.semantics; @@ -698,19 +736,17 @@ APFloat::operator=(APFloat &&rhs) { category = rhs.category; sign = rhs.sign; - rhs.semantics = &Bogus; + rhs.semantics = &semBogus; return *this; } -bool -APFloat::isDenormal() const { +bool IEEEFloat::isDenormal() const { return isFiniteNonZero() && (exponent == semantics->minExponent) && (APInt::tcExtractBit(significandParts(), semantics->precision - 1) == 0); } -bool -APFloat::isSmallest() const { +bool IEEEFloat::isSmallest() const { // The smallest number by magnitude in our format will be the smallest // denormal, i.e. the floating point number with exponent being minimum // exponent and significand bitwise equal to 1 (i.e. with MSB equal to 0). @@ -718,7 +754,7 @@ APFloat::isSmallest() const { significandMSB() == 0; } -bool APFloat::isSignificandAllOnes() const { +bool IEEEFloat::isSignificandAllOnes() const { // Test if the significand excluding the integral bit is all ones. This allows // us to test for binade boundaries. const integerPart *Parts = significandParts(); @@ -740,7 +776,7 @@ bool APFloat::isSignificandAllOnes() const { return true; } -bool APFloat::isSignificandAllZeros() const { +bool IEEEFloat::isSignificandAllZeros() const { // Test if the significand excluding the integral bit is all zeros. This // allows us to test for binade boundaries. const integerPart *Parts = significandParts(); @@ -762,25 +798,22 @@ bool APFloat::isSignificandAllZeros() const { return true; } -bool -APFloat::isLargest() const { +bool IEEEFloat::isLargest() const { // The largest number by magnitude in our format will be the floating point // number with maximum exponent and with significand that is all ones. return isFiniteNonZero() && exponent == semantics->maxExponent && isSignificandAllOnes(); } -bool -APFloat::isInteger() const { +bool IEEEFloat::isInteger() const { // This could be made more efficient; I'm going for obviously correct. 
if (!isFinite()) return false; - APFloat truncated = *this; + IEEEFloat truncated = *this; truncated.roundToIntegral(rmTowardZero); return compare(truncated) == cmpEqual; } -bool -APFloat::bitwiseIsEqual(const APFloat &rhs) const { +bool IEEEFloat::bitwiseIsEqual(const IEEEFloat &rhs) const { if (this == &rhs) return true; if (semantics != rhs.semantics || @@ -797,7 +830,7 @@ APFloat::bitwiseIsEqual(const APFloat &rhs) const { rhs.significandParts()); } -APFloat::APFloat(const fltSemantics &ourSemantics, integerPart value) { +IEEEFloat::IEEEFloat(const fltSemantics &ourSemantics, integerPart value) { initialize(&ourSemantics); sign = 0; category = fcNormal; @@ -807,93 +840,54 @@ APFloat::APFloat(const fltSemantics &ourSemantics, integerPart value) { normalize(rmNearestTiesToEven, lfExactlyZero); } -APFloat::APFloat(const fltSemantics &ourSemantics) { +IEEEFloat::IEEEFloat(const fltSemantics &ourSemantics) { initialize(&ourSemantics); category = fcZero; sign = false; } -APFloat::APFloat(const fltSemantics &ourSemantics, uninitializedTag tag) { - // Allocates storage if necessary but does not initialize it. - initialize(&ourSemantics); -} +// Delegate to the previous constructor, because later copy constructor may +// actually inspects category, which can't be garbage. +IEEEFloat::IEEEFloat(const fltSemantics &ourSemantics, uninitializedTag tag) + : IEEEFloat(ourSemantics) {} -APFloat::APFloat(const fltSemantics &ourSemantics, StringRef text) { - initialize(&ourSemantics); - convertFromString(text, rmNearestTiesToEven); -} - -APFloat::APFloat(const APFloat &rhs) { +IEEEFloat::IEEEFloat(const IEEEFloat &rhs) { initialize(rhs.semantics); assign(rhs); } -APFloat::APFloat(APFloat &&rhs) : semantics(&Bogus) { +IEEEFloat::IEEEFloat(IEEEFloat &&rhs) : semantics(&semBogus) { *this = std::move(rhs); } -APFloat::~APFloat() -{ - freeSignificand(); -} +IEEEFloat::~IEEEFloat() { freeSignificand(); } // Profile - This method 'profiles' an APFloat for use with FoldingSet. -void APFloat::Profile(FoldingSetNodeID& ID) const { +void IEEEFloat::Profile(FoldingSetNodeID &ID) const { ID.Add(bitcastToAPInt()); } -unsigned int -APFloat::partCount() const -{ +unsigned int IEEEFloat::partCount() const { return partCountForBits(semantics->precision + 1); } -unsigned int -APFloat::semanticsPrecision(const fltSemantics &semantics) -{ - return semantics.precision; -} -APFloat::ExponentType -APFloat::semanticsMaxExponent(const fltSemantics &semantics) -{ - return semantics.maxExponent; -} -APFloat::ExponentType -APFloat::semanticsMinExponent(const fltSemantics &semantics) -{ - return semantics.minExponent; -} -unsigned int -APFloat::semanticsSizeInBits(const fltSemantics &semantics) -{ - return semantics.sizeInBits; -} - -const integerPart * -APFloat::significandParts() const -{ - return const_cast(this)->significandParts(); +const integerPart *IEEEFloat::significandParts() const { + return const_cast(this)->significandParts(); } -integerPart * -APFloat::significandParts() -{ +integerPart *IEEEFloat::significandParts() { if (partCount() > 1) return significand.parts; else return &significand.part; } -void -APFloat::zeroSignificand() -{ +void IEEEFloat::zeroSignificand() { APInt::tcSet(significandParts(), 0, partCount()); } /* Increment an fcNormal floating point number's significand. 
*/ -void -APFloat::incrementSignificand() -{ +void IEEEFloat::incrementSignificand() { integerPart carry; carry = APInt::tcIncrement(significandParts(), partCount()); @@ -904,9 +898,7 @@ APFloat::incrementSignificand() } /* Add the significand of the RHS. Returns the carry flag. */ -integerPart -APFloat::addSignificand(const APFloat &rhs) -{ +integerPart IEEEFloat::addSignificand(const IEEEFloat &rhs) { integerPart *parts; parts = significandParts(); @@ -919,9 +911,8 @@ APFloat::addSignificand(const APFloat &rhs) /* Subtract the significand of the RHS with a borrow flag. Returns the borrow flag. */ -integerPart -APFloat::subtractSignificand(const APFloat &rhs, integerPart borrow) -{ +integerPart IEEEFloat::subtractSignificand(const IEEEFloat &rhs, + integerPart borrow) { integerPart *parts; parts = significandParts(); @@ -936,9 +927,8 @@ APFloat::subtractSignificand(const APFloat &rhs, integerPart borrow) /* Multiply the significand of the RHS. If ADDEND is non-NULL, add it on to the full-precision result of the multiplication. Returns the lost fraction. */ -lostFraction -APFloat::multiplySignificand(const APFloat &rhs, const APFloat *addend) -{ +lostFraction IEEEFloat::multiplySignificand(const IEEEFloat &rhs, + const IEEEFloat *addend) { unsigned int omsb; // One, not zero, based MSB. unsigned int partsCount, newPartsCount, precision; integerPart *lhsSignificand; @@ -1011,7 +1001,7 @@ APFloat::multiplySignificand(const APFloat &rhs, const APFloat *addend) significand.parts = fullSignificand; semantics = &extendedSemantics; - APFloat extendedAddend(*addend); + IEEEFloat extendedAddend(*addend); status = extendedAddend.convert(extendedSemantics, rmTowardZero, &ignored); assert(status == opOK); (void)status; @@ -1045,7 +1035,8 @@ APFloat::multiplySignificand(const APFloat &rhs, const APFloat *addend) // the radix point (i.e. "MSB . rest-significant-bits"). // // Note that the result is not normalized when "omsb < precision". So, the - // caller needs to call APFloat::normalize() if normalized value is expected. + // caller needs to call IEEEFloat::normalize() if normalized value is + // expected. if (omsb > precision) { unsigned int bits, significantParts; lostFraction lf; @@ -1066,9 +1057,7 @@ APFloat::multiplySignificand(const APFloat &rhs, const APFloat *addend) } /* Multiply the significands of LHS and RHS to DST. */ -lostFraction -APFloat::divideSignificand(const APFloat &rhs) -{ +lostFraction IEEEFloat::divideSignificand(const IEEEFloat &rhs) { unsigned int bit, i, partsCount; const integerPart *rhsSignificand; integerPart *lhsSignificand, *dividend, *divisor; @@ -1150,22 +1139,16 @@ APFloat::divideSignificand(const APFloat &rhs) return lost_fraction; } -unsigned int -APFloat::significandMSB() const -{ +unsigned int IEEEFloat::significandMSB() const { return APInt::tcMSB(significandParts(), partCount()); } -unsigned int -APFloat::significandLSB() const -{ +unsigned int IEEEFloat::significandLSB() const { return APInt::tcLSB(significandParts(), partCount()); } /* Note that a zero result is NOT normalized to fcZero. */ -lostFraction -APFloat::shiftSignificandRight(unsigned int bits) -{ +lostFraction IEEEFloat::shiftSignificandRight(unsigned int bits) { /* Our exponent should not overflow. */ assert((ExponentType) (exponent + bits) >= exponent); @@ -1175,9 +1158,7 @@ APFloat::shiftSignificandRight(unsigned int bits) } /* Shift the significand left BITS bits, subtract BITS from its exponent. 
*/ -void -APFloat::shiftSignificandLeft(unsigned int bits) -{ +void IEEEFloat::shiftSignificandLeft(unsigned int bits) { assert(bits < semantics->precision); if (bits) { @@ -1190,9 +1171,8 @@ APFloat::shiftSignificandLeft(unsigned int bits) } } -APFloat::cmpResult -APFloat::compareAbsoluteValue(const APFloat &rhs) const -{ +IEEEFloat::cmpResult +IEEEFloat::compareAbsoluteValue(const IEEEFloat &rhs) const { int compare; assert(semantics == rhs.semantics); @@ -1217,9 +1197,7 @@ APFloat::compareAbsoluteValue(const APFloat &rhs) const /* Handle overflow. Sign is preserved. We either become infinity or the largest finite number. */ -APFloat::opStatus -APFloat::handleOverflow(roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::handleOverflow(roundingMode rounding_mode) { /* Infinity? */ if (rounding_mode == rmNearestTiesToEven || rounding_mode == rmNearestTiesToAway || @@ -1243,11 +1221,9 @@ APFloat::handleOverflow(roundingMode rounding_mode) would need to be rounded away from zero (i.e., by increasing the signficand). This routine must work for fcZero of both signs, and fcNormal numbers. */ -bool -APFloat::roundAwayFromZero(roundingMode rounding_mode, - lostFraction lost_fraction, - unsigned int bit) const -{ +bool IEEEFloat::roundAwayFromZero(roundingMode rounding_mode, + lostFraction lost_fraction, + unsigned int bit) const { /* NaNs and infinities should not have lost fractions. */ assert(isFiniteNonZero() || category == fcZero); @@ -1280,10 +1256,8 @@ APFloat::roundAwayFromZero(roundingMode rounding_mode, llvm_unreachable("Invalid rounding mode found"); } -APFloat::opStatus -APFloat::normalize(roundingMode rounding_mode, - lostFraction lost_fraction) -{ +IEEEFloat::opStatus IEEEFloat::normalize(roundingMode rounding_mode, + lostFraction lost_fraction) { unsigned int omsb; /* One, not zero, based MSB. */ int exponentChange; @@ -1388,9 +1362,8 @@ APFloat::normalize(roundingMode rounding_mode, return (opStatus) (opUnderflow | opInexact); } -APFloat::opStatus -APFloat::addOrSubtractSpecials(const APFloat &rhs, bool subtract) -{ +IEEEFloat::opStatus IEEEFloat::addOrSubtractSpecials(const IEEEFloat &rhs, + bool subtract) { switch (PackCategoriesIntoKey(category, rhs.category)) { default: llvm_unreachable(nullptr); @@ -1445,9 +1418,8 @@ APFloat::addOrSubtractSpecials(const APFloat &rhs, bool subtract) } /* Add or subtract two normal numbers. */ -lostFraction -APFloat::addOrSubtractSignificand(const APFloat &rhs, bool subtract) -{ +lostFraction IEEEFloat::addOrSubtractSignificand(const IEEEFloat &rhs, + bool subtract) { integerPart carry; lostFraction lost_fraction; int bits; @@ -1461,7 +1433,7 @@ APFloat::addOrSubtractSignificand(const APFloat &rhs, bool subtract) /* Subtraction is more subtle than one might naively expect. 
*/ if (subtract) { - APFloat temp_rhs(rhs); + IEEEFloat temp_rhs(rhs); bool reverse; if (bits == 0) { @@ -1500,7 +1472,7 @@ APFloat::addOrSubtractSignificand(const APFloat &rhs, bool subtract) (void)carry; } else { if (bits > 0) { - APFloat temp_rhs(rhs); + IEEEFloat temp_rhs(rhs); lost_fraction = temp_rhs.shiftSignificandRight(bits); carry = addSignificand(temp_rhs); @@ -1517,9 +1489,7 @@ APFloat::addOrSubtractSignificand(const APFloat &rhs, bool subtract) return lost_fraction; } -APFloat::opStatus -APFloat::multiplySpecials(const APFloat &rhs) -{ +IEEEFloat::opStatus IEEEFloat::multiplySpecials(const IEEEFloat &rhs) { switch (PackCategoriesIntoKey(category, rhs.category)) { default: llvm_unreachable(nullptr); @@ -1561,9 +1531,7 @@ APFloat::multiplySpecials(const APFloat &rhs) } } -APFloat::opStatus -APFloat::divideSpecials(const APFloat &rhs) -{ +IEEEFloat::opStatus IEEEFloat::divideSpecials(const IEEEFloat &rhs) { switch (PackCategoriesIntoKey(category, rhs.category)) { default: llvm_unreachable(nullptr); @@ -1602,9 +1570,7 @@ APFloat::divideSpecials(const APFloat &rhs) } } -APFloat::opStatus -APFloat::modSpecials(const APFloat &rhs) -{ +IEEEFloat::opStatus IEEEFloat::modSpecials(const IEEEFloat &rhs) { switch (PackCategoriesIntoKey(category, rhs.category)) { default: llvm_unreachable(nullptr); @@ -1640,32 +1606,25 @@ APFloat::modSpecials(const APFloat &rhs) } /* Change sign. */ -void -APFloat::changeSign() -{ +void IEEEFloat::changeSign() { /* Look mummy, this one's easy. */ sign = !sign; } -void -APFloat::clearSign() -{ +void IEEEFloat::clearSign() { /* So is this one. */ sign = 0; } -void -APFloat::copySign(const APFloat &rhs) -{ +void IEEEFloat::copySign(const IEEEFloat &rhs) { /* And this one. */ sign = rhs.sign; } /* Normalized addition or subtraction. */ -APFloat::opStatus -APFloat::addOrSubtract(const APFloat &rhs, roundingMode rounding_mode, - bool subtract) -{ +IEEEFloat::opStatus IEEEFloat::addOrSubtract(const IEEEFloat &rhs, + roundingMode rounding_mode, + bool subtract) { opStatus fs; fs = addOrSubtractSpecials(rhs, subtract); @@ -1693,23 +1652,20 @@ APFloat::addOrSubtract(const APFloat &rhs, roundingMode rounding_mode, } /* Normalized addition. */ -APFloat::opStatus -APFloat::add(const APFloat &rhs, roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::add(const IEEEFloat &rhs, + roundingMode rounding_mode) { return addOrSubtract(rhs, rounding_mode, false); } /* Normalized subtraction. */ -APFloat::opStatus -APFloat::subtract(const APFloat &rhs, roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::subtract(const IEEEFloat &rhs, + roundingMode rounding_mode) { return addOrSubtract(rhs, rounding_mode, true); } /* Normalized multiply. */ -APFloat::opStatus -APFloat::multiply(const APFloat &rhs, roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::multiply(const IEEEFloat &rhs, + roundingMode rounding_mode) { opStatus fs; sign ^= rhs.sign; @@ -1726,9 +1682,8 @@ APFloat::multiply(const APFloat &rhs, roundingMode rounding_mode) } /* Normalized divide. */ -APFloat::opStatus -APFloat::divide(const APFloat &rhs, roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::divide(const IEEEFloat &rhs, + roundingMode rounding_mode) { opStatus fs; sign ^= rhs.sign; @@ -1745,11 +1700,9 @@ APFloat::divide(const APFloat &rhs, roundingMode rounding_mode) } /* Normalized remainder. This is not currently correct in all cases. 
*/ -APFloat::opStatus -APFloat::remainder(const APFloat &rhs) -{ +IEEEFloat::opStatus IEEEFloat::remainder(const IEEEFloat &rhs) { opStatus fs; - APFloat V = *this; + IEEEFloat V = *this; unsigned int origSign = sign; fs = V.divide(rhs, rmNearestTiesToEven); @@ -1761,8 +1714,10 @@ APFloat::remainder(const APFloat &rhs) bool ignored; fs = V.convertToInteger(x, parts * integerPartWidth, true, rmNearestTiesToEven, &ignored); - if (fs==opInvalidOp) + if (fs==opInvalidOp) { + delete[] x; return fs; + } fs = V.convertFromZeroExtendedInteger(x, parts * integerPartWidth, true, rmNearestTiesToEven); @@ -1782,14 +1737,12 @@ APFloat::remainder(const APFloat &rhs) /* Normalized llvm frem (C fmod). This is not currently correct in all cases. */ -APFloat::opStatus -APFloat::mod(const APFloat &rhs) -{ +IEEEFloat::opStatus IEEEFloat::mod(const IEEEFloat &rhs) { opStatus fs; fs = modSpecials(rhs); if (isFiniteNonZero() && rhs.isFiniteNonZero()) { - APFloat V = *this; + IEEEFloat V = *this; unsigned int origSign = sign; fs = V.divide(rhs, rmNearestTiesToEven); @@ -1801,8 +1754,10 @@ APFloat::mod(const APFloat &rhs) bool ignored; fs = V.convertToInteger(x, parts * integerPartWidth, true, rmTowardZero, &ignored); - if (fs==opInvalidOp) + if (fs==opInvalidOp) { + delete[] x; return fs; + } fs = V.convertFromZeroExtendedInteger(x, parts * integerPartWidth, true, rmNearestTiesToEven); @@ -1822,11 +1777,9 @@ APFloat::mod(const APFloat &rhs) } /* Normalized fused-multiply-add. */ -APFloat::opStatus -APFloat::fusedMultiplyAdd(const APFloat &multiplicand, - const APFloat &addend, - roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::fusedMultiplyAdd(const IEEEFloat &multiplicand, + const IEEEFloat &addend, + roundingMode rounding_mode) { opStatus fs; /* Post-multiplication sign, before addition. */ @@ -1867,7 +1820,7 @@ APFloat::fusedMultiplyAdd(const APFloat &multiplicand, } /* Rounding-mode corrrect round to integral value. */ -APFloat::opStatus APFloat::roundToIntegral(roundingMode rounding_mode) { +IEEEFloat::opStatus IEEEFloat::roundToIntegral(roundingMode rounding_mode) { opStatus fs; // If the exponent is large enough, we know that this value is already @@ -1884,7 +1837,7 @@ APFloat::opStatus APFloat::roundToIntegral(roundingMode rounding_mode) { // addition instead. APInt IntegerConstant(NextPowerOf2(semanticsPrecision(*semantics)), 1); IntegerConstant <<= semanticsPrecision(*semantics)-1; - APFloat MagicConstant(*semantics); + IEEEFloat MagicConstant(*semantics); fs = MagicConstant.convertFromAPInt(IntegerConstant, false, rmNearestTiesToEven); MagicConstant.copySign(*this); @@ -1910,9 +1863,7 @@ APFloat::opStatus APFloat::roundToIntegral(roundingMode rounding_mode) { /* Comparison requires normalized numbers. */ -APFloat::cmpResult -APFloat::compare(const APFloat &rhs) const -{ +IEEEFloat::cmpResult IEEEFloat::compare(const IEEEFloat &rhs) const { cmpResult result; assert(semantics == rhs.semantics); @@ -1982,17 +1933,16 @@ APFloat::compare(const APFloat &rhs) const return result; } -/// APFloat::convert - convert a value of one floating point type to another. +/// IEEEFloat::convert - convert a value of one floating point type to another. /// The return value corresponds to the IEEE754 exceptions. *losesInfo /// records whether the transformation lost information, i.e. whether /// converting the result back to the original type will produce the /// original value (this is almost the same as return value==fsOK, but there /// are edge cases where this is not so). 
-APFloat::opStatus -APFloat::convert(const fltSemantics &toSemantics, - roundingMode rounding_mode, bool *losesInfo) -{ +IEEEFloat::opStatus IEEEFloat::convert(const fltSemantics &toSemantics, + roundingMode rounding_mode, + bool *losesInfo) { lostFraction lostFraction; unsigned int newPartCount, oldPartCount; opStatus fs; @@ -2005,8 +1955,8 @@ APFloat::convert(const fltSemantics &toSemantics, shift = toSemantics.precision - fromSemantics.precision; bool X86SpecialNan = false; - if (&fromSemantics == &APFloat::x87DoubleExtended && - &toSemantics != &APFloat::x87DoubleExtended && category == fcNaN && + if (&fromSemantics == &semX87DoubleExtended && + &toSemantics != &semX87DoubleExtended && category == fcNaN && (!(*significandParts() & 0x8000000000000000ULL) || !(*significandParts() & 0x4000000000000000ULL))) { // x86 has some unusual NaNs which cannot be represented in any other @@ -2070,7 +2020,7 @@ APFloat::convert(const fltSemantics &toSemantics, // For x87 extended precision, we want to make a NaN, not a special NaN if // the input wasn't special either. - if (!X86SpecialNan && semantics == &APFloat::x87DoubleExtended) + if (!X86SpecialNan && semantics == &semX87DoubleExtended) APInt::tcSetBit(significandParts(), semantics->precision - 1); // gcc forces the Quiet bit on, which means (float)(double)(float_sNan) @@ -2096,12 +2046,9 @@ APFloat::convert(const fltSemantics &toSemantics, Note that for conversions to integer type the C standard requires round-to-zero to always be used. */ -APFloat::opStatus -APFloat::convertToSignExtendedInteger(integerPart *parts, unsigned int width, - bool isSigned, - roundingMode rounding_mode, - bool *isExact) const -{ +IEEEFloat::opStatus IEEEFloat::convertToSignExtendedInteger( + integerPart *parts, unsigned int width, bool isSigned, + roundingMode rounding_mode, bool *isExact) const { lostFraction lost_fraction; const integerPart *src; unsigned int dstPartsCount, truncatedBits; @@ -2208,11 +2155,11 @@ APFloat::convertToSignExtendedInteger(integerPart *parts, unsigned int width, the original value. This is almost equivalent to result==opOK, except for negative zeroes. */ -APFloat::opStatus -APFloat::convertToInteger(integerPart *parts, unsigned int width, - bool isSigned, - roundingMode rounding_mode, bool *isExact) const -{ +IEEEFloat::opStatus IEEEFloat::convertToInteger(integerPart *parts, + unsigned int width, + bool isSigned, + roundingMode rounding_mode, + bool *isExact) const { opStatus fs; fs = convertToSignExtendedInteger(parts, width, isSigned, rounding_mode, @@ -2242,10 +2189,9 @@ APFloat::convertToInteger(integerPart *parts, unsigned int width, an APSInt, whose initial bit-width and signed-ness are used to determine the precision of the conversion. */ -APFloat::opStatus -APFloat::convertToInteger(APSInt &result, - roundingMode rounding_mode, bool *isExact) const -{ +IEEEFloat::opStatus IEEEFloat::convertToInteger(APSInt &result, + roundingMode rounding_mode, + bool *isExact) const { unsigned bitWidth = result.getBitWidth(); SmallVector parts(result.getNumWords()); opStatus status = convertToInteger( @@ -2258,11 +2204,8 @@ APFloat::convertToInteger(APSInt &result, /* Convert an unsigned integer SRC to a floating point number, rounding according to ROUNDING_MODE. The sign of the floating point number is not modified. 
*/ -APFloat::opStatus -APFloat::convertFromUnsignedParts(const integerPart *src, - unsigned int srcCount, - roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::convertFromUnsignedParts( + const integerPart *src, unsigned int srcCount, roundingMode rounding_mode) { unsigned int omsb, precision, dstCount; integerPart *dst; lostFraction lost_fraction; @@ -2289,11 +2232,8 @@ APFloat::convertFromUnsignedParts(const integerPart *src, return normalize(rounding_mode, lost_fraction); } -APFloat::opStatus -APFloat::convertFromAPInt(const APInt &Val, - bool isSigned, - roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::convertFromAPInt(const APInt &Val, bool isSigned, + roundingMode rounding_mode) { unsigned int partCount = Val.getNumWords(); APInt api = Val; @@ -2309,12 +2249,10 @@ APFloat::convertFromAPInt(const APInt &Val, /* Convert a two's complement integer SRC to a floating point number, rounding according to ROUNDING_MODE. ISSIGNED is true if the integer is signed, in which case it must be sign-extended. */ -APFloat::opStatus -APFloat::convertFromSignExtendedInteger(const integerPart *src, - unsigned int srcCount, - bool isSigned, - roundingMode rounding_mode) -{ +IEEEFloat::opStatus +IEEEFloat::convertFromSignExtendedInteger(const integerPart *src, + unsigned int srcCount, bool isSigned, + roundingMode rounding_mode) { opStatus status; if (isSigned && @@ -2337,11 +2275,10 @@ APFloat::convertFromSignExtendedInteger(const integerPart *src, } /* FIXME: should this just take a const APInt reference? */ -APFloat::opStatus -APFloat::convertFromZeroExtendedInteger(const integerPart *parts, - unsigned int width, bool isSigned, - roundingMode rounding_mode) -{ +IEEEFloat::opStatus +IEEEFloat::convertFromZeroExtendedInteger(const integerPart *parts, + unsigned int width, bool isSigned, + roundingMode rounding_mode) { unsigned int partCount = partCountForBits(width); APInt api = APInt(width, makeArrayRef(parts, partCount)); @@ -2354,9 +2291,9 @@ APFloat::convertFromZeroExtendedInteger(const integerPart *parts, return convertFromUnsignedParts(api.getRawData(), partCount, rounding_mode); } -APFloat::opStatus -APFloat::convertFromHexadecimalString(StringRef s, roundingMode rounding_mode) -{ +IEEEFloat::opStatus +IEEEFloat::convertFromHexadecimalString(StringRef s, + roundingMode rounding_mode) { lostFraction lost_fraction = lfExactlyZero; category = fcNormal; @@ -2434,11 +2371,10 @@ APFloat::convertFromHexadecimalString(StringRef s, roundingMode rounding_mode) return normalize(rounding_mode, lost_fraction); } -APFloat::opStatus -APFloat::roundSignificandWithExponent(const integerPart *decSigParts, - unsigned sigPartCount, int exp, - roundingMode rounding_mode) -{ +IEEEFloat::opStatus +IEEEFloat::roundSignificandWithExponent(const integerPart *decSigParts, + unsigned sigPartCount, int exp, + roundingMode rounding_mode) { unsigned int parts, pow5PartCount; fltSemantics calcSemantics = { 32767, -32767, 0, 0 }; integerPart pow5Parts[maxPowerOfFiveParts]; @@ -2460,8 +2396,9 @@ APFloat::roundSignificandWithExponent(const integerPart *decSigParts, excessPrecision = calcSemantics.precision - semantics->precision; truncatedBits = excessPrecision; - APFloat decSig = APFloat::getZero(calcSemantics, sign); - APFloat pow5(calcSemantics); + IEEEFloat decSig(calcSemantics, uninitialized); + decSig.makeZero(sign); + IEEEFloat pow5(calcSemantics); sigStatus = decSig.convertFromUnsignedParts(decSigParts, sigPartCount, rmNearestTiesToEven); @@ -2519,9 +2456,8 @@ 
APFloat::roundSignificandWithExponent(const integerPart *decSigParts, } } -APFloat::opStatus -APFloat::convertFromDecimalString(StringRef str, roundingMode rounding_mode) -{ +IEEEFloat::opStatus +IEEEFloat::convertFromDecimalString(StringRef str, roundingMode rounding_mode) { decimalInfo D; opStatus fs; @@ -2637,8 +2573,7 @@ APFloat::convertFromDecimalString(StringRef str, roundingMode rounding_mode) return fs; } -bool -APFloat::convertFromStringSpecials(StringRef str) { +bool IEEEFloat::convertFromStringSpecials(StringRef str) { if (str.equals("inf") || str.equals("INFINITY")) { makeInf(false); return true; @@ -2662,9 +2597,8 @@ APFloat::convertFromStringSpecials(StringRef str) { return false; } -APFloat::opStatus -APFloat::convertFromString(StringRef str, roundingMode rounding_mode) -{ +IEEEFloat::opStatus IEEEFloat::convertFromString(StringRef str, + roundingMode rounding_mode) { assert(!str.empty() && "Invalid string length"); // Handle special cases. @@ -2714,10 +2648,9 @@ APFloat::convertFromString(StringRef str, roundingMode rounding_mode) 1 (normal numbers) or 2 (normal numbers rounded-away-from-zero with any other digits zero). */ -unsigned int -APFloat::convertToHexString(char *dst, unsigned int hexDigits, - bool upperCase, roundingMode rounding_mode) const -{ +unsigned int IEEEFloat::convertToHexString(char *dst, unsigned int hexDigits, + bool upperCase, + roundingMode rounding_mode) const { char *p; p = dst; @@ -2762,11 +2695,9 @@ APFloat::convertToHexString(char *dst, unsigned int hexDigits, form of a normal floating point number with the specified number of hexadecimal digits. If HEXDIGITS is zero the minimum number of digits necessary to print the value precisely is output. */ -char * -APFloat::convertNormalToHexString(char *dst, unsigned int hexDigits, - bool upperCase, - roundingMode rounding_mode) const -{ +char *IEEEFloat::convertNormalToHexString(char *dst, unsigned int hexDigits, + bool upperCase, + roundingMode rounding_mode) const { unsigned int count, valueBits, shift, partsCount, outputDigits; const char *hexDigitChars; const integerPart *significand; @@ -2866,7 +2797,7 @@ APFloat::convertNormalToHexString(char *dst, unsigned int hexDigits, return writeSignedDecimal (dst, exponent); } -hash_code llvm::hash_value(const APFloat &Arg) { +hash_code hash_value(const IEEEFloat &Arg) { if (!Arg.isFiniteNonZero()) return hash_combine((uint8_t)Arg.category, // NaN has no sign, fix it at zero. @@ -2890,10 +2821,8 @@ hash_code llvm::hash_value(const APFloat &Arg) { // Denormals have exponent minExponent in APFloat, but minExponent-1 in // the actual IEEE respresentations. We compensate for that here. 
-APInt -APFloat::convertF80LongDoubleAPFloatToAPInt() const -{ - assert(semantics == (const llvm::fltSemantics*)&x87DoubleExtended); +APInt IEEEFloat::convertF80LongDoubleAPFloatToAPInt() const { + assert(semantics == (const llvm::fltSemantics*)&semX87DoubleExtended); assert(partCount()==2); uint64_t myexponent, mysignificand; @@ -2922,10 +2851,8 @@ APFloat::convertF80LongDoubleAPFloatToAPInt() const return APInt(80, words); } -APInt -APFloat::convertPPCDoubleDoubleAPFloatToAPInt() const -{ - assert(semantics == (const llvm::fltSemantics*)&PPCDoubleDouble); +APInt IEEEFloat::convertPPCDoubleDoubleAPFloatToAPInt() const { + assert(semantics == (const llvm::fltSemantics *)&semPPCDoubleDoubleImpl); assert(partCount()==2); uint64_t words[2]; @@ -2939,14 +2866,14 @@ APFloat::convertPPCDoubleDoubleAPFloatToAPInt() const // Declare fltSemantics before APFloat that uses it (and // saves pointer to it) to ensure correct destruction order. fltSemantics extendedSemantics = *semantics; - extendedSemantics.minExponent = IEEEdouble.minExponent; - APFloat extended(*this); + extendedSemantics.minExponent = semIEEEdouble.minExponent; + IEEEFloat extended(*this); fs = extended.convert(extendedSemantics, rmNearestTiesToEven, &losesInfo); assert(fs == opOK && !losesInfo); (void)fs; - APFloat u(extended); - fs = u.convert(IEEEdouble, rmNearestTiesToEven, &losesInfo); + IEEEFloat u(extended); + fs = u.convert(semIEEEdouble, rmNearestTiesToEven, &losesInfo); assert(fs == opOK || fs == opInexact); (void)fs; words[0] = *u.convertDoubleAPFloatToAPInt().getRawData(); @@ -2960,9 +2887,9 @@ APFloat::convertPPCDoubleDoubleAPFloatToAPInt() const assert(fs == opOK && !losesInfo); (void)fs; - APFloat v(extended); + IEEEFloat v(extended); v.subtract(u, rmNearestTiesToEven); - fs = v.convert(IEEEdouble, rmNearestTiesToEven, &losesInfo); + fs = v.convert(semIEEEdouble, rmNearestTiesToEven, &losesInfo); assert(fs == opOK && !losesInfo); (void)fs; words[1] = *v.convertDoubleAPFloatToAPInt().getRawData(); @@ -2973,10 +2900,8 @@ APFloat::convertPPCDoubleDoubleAPFloatToAPInt() const return APInt(128, words); } -APInt -APFloat::convertQuadrupleAPFloatToAPInt() const -{ - assert(semantics == (const llvm::fltSemantics*)&IEEEquad); +APInt IEEEFloat::convertQuadrupleAPFloatToAPInt() const { + assert(semantics == (const llvm::fltSemantics*)&semIEEEquad); assert(partCount()==2); uint64_t myexponent, mysignificand, mysignificand2; @@ -3009,10 +2934,8 @@ APFloat::convertQuadrupleAPFloatToAPInt() const return APInt(128, words); } -APInt -APFloat::convertDoubleAPFloatToAPInt() const -{ - assert(semantics == (const llvm::fltSemantics*)&IEEEdouble); +APInt IEEEFloat::convertDoubleAPFloatToAPInt() const { + assert(semantics == (const llvm::fltSemantics*)&semIEEEdouble); assert(partCount()==1); uint64_t myexponent, mysignificand; @@ -3039,10 +2962,8 @@ APFloat::convertDoubleAPFloatToAPInt() const (mysignificand & 0xfffffffffffffLL)))); } -APInt -APFloat::convertFloatAPFloatToAPInt() const -{ - assert(semantics == (const llvm::fltSemantics*)&IEEEsingle); +APInt IEEEFloat::convertFloatAPFloatToAPInt() const { + assert(semantics == (const llvm::fltSemantics*)&semIEEEsingle); assert(partCount()==1); uint32_t myexponent, mysignificand; @@ -3068,10 +2989,8 @@ APFloat::convertFloatAPFloatToAPInt() const (mysignificand & 0x7fffff))); } -APInt -APFloat::convertHalfAPFloatToAPInt() const -{ - assert(semantics == (const llvm::fltSemantics*)&IEEEhalf); +APInt IEEEFloat::convertHalfAPFloatToAPInt() const { + assert(semantics == (const 
llvm::fltSemantics*)&semIEEEhalf); assert(partCount()==1); uint32_t myexponent, mysignificand; @@ -3101,42 +3020,36 @@ APFloat::convertHalfAPFloatToAPInt() const // point constant as it would appear in memory. It is not a conversion, // and treating the result as a normal integer is unlikely to be useful. -APInt -APFloat::bitcastToAPInt() const -{ - if (semantics == (const llvm::fltSemantics*)&IEEEhalf) +APInt IEEEFloat::bitcastToAPInt() const { + if (semantics == (const llvm::fltSemantics*)&semIEEEhalf) return convertHalfAPFloatToAPInt(); - if (semantics == (const llvm::fltSemantics*)&IEEEsingle) + if (semantics == (const llvm::fltSemantics*)&semIEEEsingle) return convertFloatAPFloatToAPInt(); - if (semantics == (const llvm::fltSemantics*)&IEEEdouble) + if (semantics == (const llvm::fltSemantics*)&semIEEEdouble) return convertDoubleAPFloatToAPInt(); - if (semantics == (const llvm::fltSemantics*)&IEEEquad) + if (semantics == (const llvm::fltSemantics*)&semIEEEquad) return convertQuadrupleAPFloatToAPInt(); - if (semantics == (const llvm::fltSemantics*)&PPCDoubleDouble) + if (semantics == (const llvm::fltSemantics *)&semPPCDoubleDoubleImpl) return convertPPCDoubleDoubleAPFloatToAPInt(); - assert(semantics == (const llvm::fltSemantics*)&x87DoubleExtended && + assert(semantics == (const llvm::fltSemantics*)&semX87DoubleExtended && "unknown format!"); return convertF80LongDoubleAPFloatToAPInt(); } -float -APFloat::convertToFloat() const -{ - assert(semantics == (const llvm::fltSemantics*)&IEEEsingle && +float IEEEFloat::convertToFloat() const { + assert(semantics == (const llvm::fltSemantics*)&semIEEEsingle && "Float semantics are not IEEEsingle"); APInt api = bitcastToAPInt(); return api.bitsToFloat(); } -double -APFloat::convertToDouble() const -{ - assert(semantics == (const llvm::fltSemantics*)&IEEEdouble && +double IEEEFloat::convertToDouble() const { + assert(semantics == (const llvm::fltSemantics*)&semIEEEdouble && "Float semantics are not IEEEdouble"); APInt api = bitcastToAPInt(); return api.bitsToDouble(); @@ -3149,16 +3062,14 @@ APFloat::convertToDouble() const /// exponent = 0, integer bit 1 ("pseudodenormal") /// exponent!=0 nor all 1's, integer bit 0 ("unnormal") /// At the moment, the first two are treated as NaNs, the second two as Normal. -void -APFloat::initFromF80LongDoubleAPInt(const APInt &api) -{ +void IEEEFloat::initFromF80LongDoubleAPInt(const APInt &api) { assert(api.getBitWidth()==80); uint64_t i1 = api.getRawData()[0]; uint64_t i2 = api.getRawData()[1]; uint64_t myexponent = (i2 & 0x7fff); uint64_t mysignificand = i1; - initialize(&APFloat::x87DoubleExtended); + initialize(&semX87DoubleExtended); assert(partCount()==2); sign = static_cast(i2>>15); @@ -3183,9 +3094,7 @@ APFloat::initFromF80LongDoubleAPInt(const APInt &api) } } -void -APFloat::initFromPPCDoubleDoubleAPInt(const APInt &api) -{ +void IEEEFloat::initFromPPCDoubleDoubleAPInt(const APInt &api) { assert(api.getBitWidth()==128); uint64_t i1 = api.getRawData()[0]; uint64_t i2 = api.getRawData()[1]; @@ -3194,14 +3103,14 @@ APFloat::initFromPPCDoubleDoubleAPInt(const APInt &api) // Get the first double and convert to our format. initFromDoubleAPInt(APInt(64, i1)); - fs = convert(PPCDoubleDouble, rmNearestTiesToEven, &losesInfo); + fs = convert(semPPCDoubleDoubleImpl, rmNearestTiesToEven, &losesInfo); assert(fs == opOK && !losesInfo); (void)fs; // Unless we have a special case, add in second double. 
if (isFiniteNonZero()) { - APFloat v(IEEEdouble, APInt(64, i2)); - fs = v.convert(PPCDoubleDouble, rmNearestTiesToEven, &losesInfo); + IEEEFloat v(semIEEEdouble, APInt(64, i2)); + fs = v.convert(semPPCDoubleDoubleImpl, rmNearestTiesToEven, &losesInfo); assert(fs == opOK && !losesInfo); (void)fs; @@ -3209,9 +3118,7 @@ APFloat::initFromPPCDoubleDoubleAPInt(const APInt &api) } } -void -APFloat::initFromQuadrupleAPInt(const APInt &api) -{ +void IEEEFloat::initFromQuadrupleAPInt(const APInt &api) { assert(api.getBitWidth()==128); uint64_t i1 = api.getRawData()[0]; uint64_t i2 = api.getRawData()[1]; @@ -3219,7 +3126,7 @@ APFloat::initFromQuadrupleAPInt(const APInt &api) uint64_t mysignificand = i1; uint64_t mysignificand2 = i2 & 0xffffffffffffLL; - initialize(&APFloat::IEEEquad); + initialize(&semIEEEquad); assert(partCount()==2); sign = static_cast(i2>>63); @@ -3249,15 +3156,13 @@ APFloat::initFromQuadrupleAPInt(const APInt &api) } } -void -APFloat::initFromDoubleAPInt(const APInt &api) -{ +void IEEEFloat::initFromDoubleAPInt(const APInt &api) { assert(api.getBitWidth()==64); uint64_t i = *api.getRawData(); uint64_t myexponent = (i >> 52) & 0x7ff; uint64_t mysignificand = i & 0xfffffffffffffLL; - initialize(&APFloat::IEEEdouble); + initialize(&semIEEEdouble); assert(partCount()==1); sign = static_cast(i>>63); @@ -3282,15 +3187,13 @@ APFloat::initFromDoubleAPInt(const APInt &api) } } -void -APFloat::initFromFloatAPInt(const APInt & api) -{ +void IEEEFloat::initFromFloatAPInt(const APInt &api) { assert(api.getBitWidth()==32); uint32_t i = (uint32_t)*api.getRawData(); uint32_t myexponent = (i >> 23) & 0xff; uint32_t mysignificand = i & 0x7fffff; - initialize(&APFloat::IEEEsingle); + initialize(&semIEEEsingle); assert(partCount()==1); sign = i >> 31; @@ -3315,15 +3218,13 @@ APFloat::initFromFloatAPInt(const APInt & api) } } -void -APFloat::initFromHalfAPInt(const APInt & api) -{ +void IEEEFloat::initFromHalfAPInt(const APInt &api) { assert(api.getBitWidth()==16); uint32_t i = (uint32_t)*api.getRawData(); uint32_t myexponent = (i >> 10) & 0x1f; uint32_t mysignificand = i & 0x3ff; - initialize(&APFloat::IEEEhalf); + initialize(&semIEEEhalf); assert(partCount()==1); sign = i >> 15; @@ -3352,53 +3253,26 @@ APFloat::initFromHalfAPInt(const APInt & api) /// we infer the floating point type from the size of the APInt. The /// isIEEE argument distinguishes between PPC128 and IEEE128 (not meaningful /// when the size is anything else). 
-void -APFloat::initFromAPInt(const fltSemantics* Sem, const APInt& api) -{ - if (Sem == &IEEEhalf) +void IEEEFloat::initFromAPInt(const fltSemantics *Sem, const APInt &api) { + if (Sem == &semIEEEhalf) return initFromHalfAPInt(api); - if (Sem == &IEEEsingle) + if (Sem == &semIEEEsingle) return initFromFloatAPInt(api); - if (Sem == &IEEEdouble) + if (Sem == &semIEEEdouble) return initFromDoubleAPInt(api); - if (Sem == &x87DoubleExtended) + if (Sem == &semX87DoubleExtended) return initFromF80LongDoubleAPInt(api); - if (Sem == &IEEEquad) + if (Sem == &semIEEEquad) return initFromQuadrupleAPInt(api); - if (Sem == &PPCDoubleDouble) + if (Sem == &semPPCDoubleDoubleImpl) return initFromPPCDoubleDoubleAPInt(api); llvm_unreachable(nullptr); } -APFloat -APFloat::getAllOnesValue(unsigned BitWidth, bool isIEEE) -{ - switch (BitWidth) { - case 16: - return APFloat(IEEEhalf, APInt::getAllOnesValue(BitWidth)); - case 32: - return APFloat(IEEEsingle, APInt::getAllOnesValue(BitWidth)); - case 64: - return APFloat(IEEEdouble, APInt::getAllOnesValue(BitWidth)); - case 80: - return APFloat(x87DoubleExtended, APInt::getAllOnesValue(BitWidth)); - case 128: - if (isIEEE) - return APFloat(IEEEquad, APInt::getAllOnesValue(BitWidth)); - return APFloat(PPCDoubleDouble, APInt::getAllOnesValue(BitWidth)); - default: - llvm_unreachable("Unknown floating bit width"); - } -} - -unsigned APFloat::getSizeInBits(const fltSemantics &Sem) { - return Sem.sizeInBits; -} - /// Make this number the largest magnitude normal number in the given /// semantics. -void APFloat::makeLargest(bool Negative) { +void IEEEFloat::makeLargest(bool Negative) { // We want (in interchange format): // sign = {Negative} // exponent = 1..10 @@ -3423,7 +3297,7 @@ void APFloat::makeLargest(bool Negative) { /// Make this number the smallest magnitude denormal number in the given /// semantics. 
-void APFloat::makeSmallest(bool Negative) { +void IEEEFloat::makeSmallest(bool Negative) { // We want (in interchange format): // sign = {Negative} // exponent = 0..0 @@ -3434,55 +3308,30 @@ void APFloat::makeSmallest(bool Negative) { APInt::tcSet(significandParts(), 1, partCount()); } - -APFloat APFloat::getLargest(const fltSemantics &Sem, bool Negative) { - // We want (in interchange format): - // sign = {Negative} - // exponent = 1..10 - // significand = 1..1 - APFloat Val(Sem, uninitialized); - Val.makeLargest(Negative); - return Val; -} - -APFloat APFloat::getSmallest(const fltSemantics &Sem, bool Negative) { - // We want (in interchange format): - // sign = {Negative} - // exponent = 0..0 - // significand = 0..01 - APFloat Val(Sem, uninitialized); - Val.makeSmallest(Negative); - return Val; -} - -APFloat APFloat::getSmallestNormalized(const fltSemantics &Sem, bool Negative) { - APFloat Val(Sem, uninitialized); - +void IEEEFloat::makeSmallestNormalized(bool Negative) { // We want (in interchange format): // sign = {Negative} // exponent = 0..0 // significand = 10..0 - Val.category = fcNormal; - Val.zeroSignificand(); - Val.sign = Negative; - Val.exponent = Sem.minExponent; - Val.significandParts()[partCountForBits(Sem.precision)-1] |= - (((integerPart) 1) << ((Sem.precision - 1) % integerPartWidth)); - - return Val; + category = fcNormal; + zeroSignificand(); + sign = Negative; + exponent = semantics->minExponent; + significandParts()[partCountForBits(semantics->precision) - 1] |= + (((integerPart)1) << ((semantics->precision - 1) % integerPartWidth)); } -APFloat::APFloat(const fltSemantics &Sem, const APInt &API) { +IEEEFloat::IEEEFloat(const fltSemantics &Sem, const APInt &API) { initFromAPInt(&Sem, API); } -APFloat::APFloat(float f) { - initFromAPInt(&IEEEsingle, APInt::floatToBits(f)); +IEEEFloat::IEEEFloat(float f) { + initFromAPInt(&semIEEEsingle, APInt::floatToBits(f)); } -APFloat::APFloat(double d) { - initFromAPInt(&IEEEdouble, APInt::doubleToBits(d)); +IEEEFloat::IEEEFloat(double d) { + initFromAPInt(&semIEEEdouble, APInt::doubleToBits(d)); } namespace { @@ -3569,9 +3418,8 @@ namespace { } } -void APFloat::toString(SmallVectorImpl &Str, - unsigned FormatPrecision, - unsigned FormatMaxPadding) const { +void IEEEFloat::toString(SmallVectorImpl &Str, unsigned FormatPrecision, + unsigned FormatMaxPadding) const { switch (category) { case fcInfinity: if (isNegative()) @@ -3772,7 +3620,7 @@ void APFloat::toString(SmallVectorImpl &Str, Str.push_back(buffer[NDigits-I-1]); } -bool APFloat::getExactInverse(APFloat *inv) const { +bool IEEEFloat::getExactInverse(IEEEFloat *inv) const { // Special floats and denormals have no exact inverse. if (!isFiniteNonZero()) return false; @@ -3783,7 +3631,7 @@ bool APFloat::getExactInverse(APFloat *inv) const { return false; // Get the inverse. - APFloat reciprocal(*semantics, 1ULL); + IEEEFloat reciprocal(*semantics, 1ULL); if (reciprocal.divide(*this, rmNearestTiesToEven) != opOK) return false; @@ -3801,7 +3649,7 @@ bool APFloat::getExactInverse(APFloat *inv) const { return true; } -bool APFloat::isSignaling() const { +bool IEEEFloat::isSignaling() const { if (!isNaN()) return false; @@ -3814,7 +3662,7 @@ bool APFloat::isSignaling() const { /// /// *NOTE* since nextDown(x) = -nextUp(-x), we only implement nextUp with /// appropriate sign switching before/after the computation. -APFloat::opStatus APFloat::next(bool nextDown) { +IEEEFloat::opStatus IEEEFloat::next(bool nextDown) { // If we are performing nextDown, swap sign so we have -x. 
if (nextDown) changeSign(); @@ -3930,46 +3778,44 @@ APFloat::opStatus APFloat::next(bool nextDown) { return result; } -void -APFloat::makeInf(bool Negative) { +void IEEEFloat::makeInf(bool Negative) { category = fcInfinity; sign = Negative; exponent = semantics->maxExponent + 1; APInt::tcSet(significandParts(), 0, partCount()); } -void -APFloat::makeZero(bool Negative) { +void IEEEFloat::makeZero(bool Negative) { category = fcZero; sign = Negative; exponent = semantics->minExponent-1; APInt::tcSet(significandParts(), 0, partCount()); } -void APFloat::makeQuiet() { +void IEEEFloat::makeQuiet() { assert(isNaN()); APInt::tcSetBit(significandParts(), semantics->precision - 2); } -int llvm::ilogb(const APFloat &Arg) { +int ilogb(const IEEEFloat &Arg) { if (Arg.isNaN()) - return APFloat::IEK_NaN; + return IEEEFloat::IEK_NaN; if (Arg.isZero()) - return APFloat::IEK_Zero; + return IEEEFloat::IEK_Zero; if (Arg.isInfinity()) - return APFloat::IEK_Inf; + return IEEEFloat::IEK_Inf; if (!Arg.isDenormal()) return Arg.exponent; - APFloat Normalized(Arg); + IEEEFloat Normalized(Arg); int SignificandBits = Arg.getSemantics().precision - 1; Normalized.exponent += SignificandBits; - Normalized.normalize(APFloat::rmNearestTiesToEven, lfExactlyZero); + Normalized.normalize(IEEEFloat::rmNearestTiesToEven, lfExactlyZero); return Normalized.exponent - SignificandBits; } -APFloat llvm::scalbn(APFloat X, int Exp, APFloat::roundingMode RoundingMode) { +IEEEFloat scalbn(IEEEFloat X, int Exp, IEEEFloat::roundingMode RoundingMode) { auto MaxExp = X.getSemantics().maxExponent; auto MinExp = X.getSemantics().minExponent; @@ -3990,21 +3836,359 @@ APFloat llvm::scalbn(APFloat X, int Exp, APFloat::roundingMode RoundingMode) { return X; } -APFloat llvm::frexp(const APFloat &Val, int &Exp, APFloat::roundingMode RM) { +IEEEFloat frexp(const IEEEFloat &Val, int &Exp, IEEEFloat::roundingMode RM) { Exp = ilogb(Val); // Quiet signalling nans. - if (Exp == APFloat::IEK_NaN) { - APFloat Quiet(Val); + if (Exp == IEEEFloat::IEK_NaN) { + IEEEFloat Quiet(Val); Quiet.makeQuiet(); return Quiet; } - if (Exp == APFloat::IEK_Inf) + if (Exp == IEEEFloat::IEK_Inf) return Val; // 1 is added because frexp is defined to return a normalized fraction in // +/-[0.5, 1.0), rather than the usual +/-[1.0, 2.0). - Exp = Exp == APFloat::IEK_Zero ? 0 : Exp + 1; + Exp = Exp == IEEEFloat::IEK_Zero ? 
   return scalbn(Val, -Exp, RM);
 }
+
+DoubleAPFloat::DoubleAPFloat(const fltSemantics &S)
+    : Semantics(&S), Floats(new APFloat[2]{APFloat(semPPCDoubleDoubleImpl),
+                                           APFloat(semIEEEdouble)}) {
+  assert(Semantics == &semPPCDoubleDouble);
+}
+
+DoubleAPFloat::DoubleAPFloat(const fltSemantics &S, uninitializedTag)
+    : Semantics(&S),
+      Floats(new APFloat[2]{APFloat(semPPCDoubleDoubleImpl, uninitialized),
+                            APFloat(semIEEEdouble, uninitialized)}) {
+  assert(Semantics == &semPPCDoubleDouble);
+}
+
+DoubleAPFloat::DoubleAPFloat(const fltSemantics &S, integerPart I)
+    : Semantics(&S), Floats(new APFloat[2]{APFloat(semPPCDoubleDoubleImpl, I),
+                                           APFloat(semIEEEdouble)}) {
+  assert(Semantics == &semPPCDoubleDouble);
+}
+
+DoubleAPFloat::DoubleAPFloat(const fltSemantics &S, const APInt &I)
+    : Semantics(&S), Floats(new APFloat[2]{
+                         APFloat(semPPCDoubleDoubleImpl, I),
+                         APFloat(semIEEEdouble, APInt(64, I.getRawData()[1]))}) {
+  assert(Semantics == &semPPCDoubleDouble);
+}
+
+DoubleAPFloat::DoubleAPFloat(const fltSemantics &S, APFloat &&First,
+                             APFloat &&Second)
+    : Semantics(&S),
+      Floats(new APFloat[2]{std::move(First), std::move(Second)}) {
+  assert(Semantics == &semPPCDoubleDouble);
+  // TODO Check for First == &IEEEdouble once the transition is done.
+  assert(&Floats[0].getSemantics() == &semPPCDoubleDoubleImpl ||
+         &Floats[0].getSemantics() == &semIEEEdouble);
+  assert(&Floats[1].getSemantics() == &semIEEEdouble);
+}
+
+DoubleAPFloat::DoubleAPFloat(const DoubleAPFloat &RHS)
+    : Semantics(RHS.Semantics),
+      Floats(RHS.Floats ? new APFloat[2]{APFloat(RHS.Floats[0]),
+                                         APFloat(RHS.Floats[1])}
+                        : nullptr) {
+  assert(Semantics == &semPPCDoubleDouble);
+}
+
+DoubleAPFloat::DoubleAPFloat(DoubleAPFloat &&RHS)
+    : Semantics(RHS.Semantics), Floats(std::move(RHS.Floats)) {
+  RHS.Semantics = &semBogus;
+  assert(Semantics == &semPPCDoubleDouble);
+}
+
+DoubleAPFloat &DoubleAPFloat::operator=(const DoubleAPFloat &RHS) {
+  if (Semantics == RHS.Semantics && RHS.Floats) {
+    Floats[0] = RHS.Floats[0];
+    Floats[1] = RHS.Floats[1];
+  } else if (this != &RHS) {
+    this->~DoubleAPFloat();
+    new (this) DoubleAPFloat(RHS);
+  }
+  return *this;
+}
+
+// "Software for Doubled-Precision Floating-Point Computations",
+// by Seppo Linnainmaa, ACM TOMS vol 7 no 3, September 1981, pages 272-283.
+APFloat::opStatus DoubleAPFloat::addImpl(const APFloat &a, const APFloat &aa,
+                                         const APFloat &c, const APFloat &cc,
+                                         roundingMode RM) {
+  int Status = opOK;
+  APFloat z = a;
+  Status |= z.add(c, RM);
+  if (!z.isFinite()) {
+    if (!z.isInfinity()) {
+      Floats[0] = std::move(z);
+      Floats[1].makeZero(false);
+      return (opStatus)Status;
+    }
+    Status = opOK;
+    auto AComparedToC = a.compareAbsoluteValue(c);
+    z = cc;
+    Status |= z.add(aa, RM);
+    if (AComparedToC == APFloat::cmpGreaterThan) {
+      // z = cc + aa + c + a;
+      Status |= z.add(c, RM);
+      Status |= z.add(a, RM);
+    } else {
+      // z = cc + aa + a + c;
+      Status |= z.add(a, RM);
+      Status |= z.add(c, RM);
+    }
+    if (!z.isFinite()) {
+      Floats[0] = std::move(z);
+      Floats[1].makeZero(false);
+      return (opStatus)Status;
+    }
+    Floats[0] = z;
+    APFloat zz = aa;
+    Status |= zz.add(cc, RM);
+    if (AComparedToC == APFloat::cmpGreaterThan) {
+      // Floats[1] = a - z + c + zz;
+      Floats[1] = a;
+      Status |= Floats[1].subtract(z, RM);
+      Status |= Floats[1].add(c, RM);
+      Status |= Floats[1].add(zz, RM);
+    } else {
+      // Floats[1] = c - z + a + zz;
+      Floats[1] = c;
+      Status |= Floats[1].subtract(z, RM);
+      Status |= Floats[1].add(a, RM);
+      Status |= Floats[1].add(zz, RM);
+    }
+  } else {
+    // q = a - z;
+    APFloat q = a;
+    Status |= q.subtract(z, RM);
+
+    // zz = q + c + (a - (q + z)) + aa + cc;
+    // Compute a - (q + z) as -((q + z) - a) to avoid temporary copies.
+    auto zz = q;
+    Status |= zz.add(c, RM);
+    Status |= q.add(z, RM);
+    Status |= q.subtract(a, RM);
+    q.changeSign();
+    Status |= zz.add(q, RM);
+    Status |= zz.add(aa, RM);
+    Status |= zz.add(cc, RM);
+    if (zz.isZero() && !zz.isNegative()) {
+      Floats[0] = std::move(z);
+      Floats[1].makeZero(false);
+      return opOK;
+    }
+    Floats[0] = z;
+    Status |= Floats[0].add(zz, RM);
+    if (!Floats[0].isFinite()) {
+      Floats[1].makeZero(false);
+      return (opStatus)Status;
+    }
+    Floats[1] = std::move(z);
+    Status |= Floats[1].subtract(Floats[0], RM);
+    Status |= Floats[1].add(zz, RM);
+  }
+  return (opStatus)Status;
+}
+
+APFloat::opStatus DoubleAPFloat::addWithSpecial(const DoubleAPFloat &LHS,
+                                                const DoubleAPFloat &RHS,
+                                                DoubleAPFloat &Out,
+                                                roundingMode RM) {
+  if (LHS.getCategory() == fcNaN) {
+    Out = LHS;
+    return opOK;
+  }
+  if (RHS.getCategory() == fcNaN) {
+    Out = RHS;
+    return opOK;
+  }
+  if (LHS.getCategory() == fcZero) {
+    Out = RHS;
+    return opOK;
+  }
+  if (RHS.getCategory() == fcZero) {
+    Out = LHS;
+    return opOK;
+  }
+  if (LHS.getCategory() == fcInfinity && RHS.getCategory() == fcInfinity &&
+      LHS.isNegative() != RHS.isNegative()) {
+    Out.makeNaN(false, Out.isNegative(), nullptr);
+    return opInvalidOp;
+  }
+  if (LHS.getCategory() == fcInfinity) {
+    Out = LHS;
+    return opOK;
+  }
+  if (RHS.getCategory() == fcInfinity) {
+    Out = RHS;
+    return opOK;
+  }
+  assert(LHS.getCategory() == fcNormal && RHS.getCategory() == fcNormal);
+
+  // These conversions will go away once PPCDoubleDoubleImpl goes away.
+  // (PPCDoubleDoubleImpl, IEEEDouble) -> (IEEEDouble, IEEEDouble)
+  APFloat A(semIEEEdouble,
+            APInt(64, LHS.Floats[0].bitcastToAPInt().getRawData()[0])),
+      AA(LHS.Floats[1]),
+      C(semIEEEdouble, APInt(64, RHS.Floats[0].bitcastToAPInt().getRawData()[0])),
+      CC(RHS.Floats[1]);
+  assert(&AA.getSemantics() == &semIEEEdouble);
+  assert(&CC.getSemantics() == &semIEEEdouble);
+  Out.Floats[0] = APFloat(semIEEEdouble);
+  assert(&Out.Floats[1].getSemantics() == &semIEEEdouble);
+
+  auto Ret = Out.addImpl(A, AA, C, CC, RM);
+
+  // (IEEEDouble, IEEEDouble) -> (PPCDoubleDoubleImpl, IEEEDouble)
+  uint64_t Buffer[] = {Out.Floats[0].bitcastToAPInt().getRawData()[0],
+                       Out.Floats[1].bitcastToAPInt().getRawData()[0]};
+  Out.Floats[0] = APFloat(semPPCDoubleDoubleImpl, APInt(128, 2, Buffer));
+  return Ret;
+}
+
+APFloat::opStatus DoubleAPFloat::add(const DoubleAPFloat &RHS,
+                                     roundingMode RM) {
+  return addWithSpecial(*this, RHS, *this, RM);
+}
+
+APFloat::opStatus DoubleAPFloat::subtract(const DoubleAPFloat &RHS,
+                                          roundingMode RM) {
+  changeSign();
+  auto Ret = add(RHS, RM);
+  changeSign();
+  return Ret;
+}
+
+void DoubleAPFloat::changeSign() {
+  Floats[0].changeSign();
+  Floats[1].changeSign();
+}
+
+APFloat::cmpResult
+DoubleAPFloat::compareAbsoluteValue(const DoubleAPFloat &RHS) const {
+  auto Result = Floats[0].compareAbsoluteValue(RHS.Floats[0]);
+  if (Result != cmpEqual)
+    return Result;
+  Result = Floats[1].compareAbsoluteValue(RHS.Floats[1]);
+  if (Result == cmpLessThan || Result == cmpGreaterThan) {
+    auto Against = Floats[0].isNegative() ^ Floats[1].isNegative();
+    auto RHSAgainst = RHS.Floats[0].isNegative() ^ RHS.Floats[1].isNegative();
+    if (Against && !RHSAgainst)
+      return cmpLessThan;
+    if (!Against && RHSAgainst)
+      return cmpGreaterThan;
+    if (!Against && !RHSAgainst)
+      return Result;
+    if (Against && RHSAgainst)
+      return (cmpResult)(cmpLessThan + cmpGreaterThan - Result);
+  }
+  return Result;
+}
+
+APFloat::fltCategory DoubleAPFloat::getCategory() const {
+  return Floats[0].getCategory();
+}
+
+bool DoubleAPFloat::isNegative() const { return Floats[0].isNegative(); }
+
+void DoubleAPFloat::makeInf(bool Neg) {
+  Floats[0].makeInf(Neg);
+  Floats[1].makeZero(false);
+}
+
+void DoubleAPFloat::makeNaN(bool SNaN, bool Neg, const APInt *fill) {
+  Floats[0].makeNaN(SNaN, Neg, fill);
+  Floats[1].makeZero(false);
+}
+
+} // End detail namespace
+
+APFloat::Storage::Storage(IEEEFloat F, const fltSemantics &Semantics) {
+  if (usesLayout<IEEEFloat>(Semantics)) {
+    new (&IEEE) IEEEFloat(std::move(F));
+    return;
+  }
+  if (usesLayout<DoubleAPFloat>(Semantics)) {
+    new (&Double)
+        DoubleAPFloat(Semantics, APFloat(std::move(F), F.getSemantics()),
+                      APFloat(semIEEEdouble));
+    return;
+  }
+  llvm_unreachable("Unexpected semantics");
+}
+
+APFloat::opStatus APFloat::convertFromString(StringRef Str, roundingMode RM) {
+  return getIEEE().convertFromString(Str, RM);
+}
+
+hash_code hash_value(const APFloat &Arg) { return hash_value(Arg.getIEEE()); }
+
+APFloat::APFloat(const fltSemantics &Semantics, StringRef S)
+    : APFloat(Semantics) {
+  convertFromString(S, rmNearestTiesToEven);
+}
+
+APFloat::opStatus APFloat::convert(const fltSemantics &ToSemantics,
+                                   roundingMode RM, bool *losesInfo) {
+  if (&getSemantics() == &ToSemantics)
+    return opOK;
+  if (usesLayout<IEEEFloat>(getSemantics()) &&
+      usesLayout<IEEEFloat>(ToSemantics))
+    return U.IEEE.convert(ToSemantics, RM, losesInfo);
+  if (usesLayout<IEEEFloat>(getSemantics()) &&
+      usesLayout<DoubleAPFloat>(ToSemantics)) {
+    assert(&ToSemantics == &semPPCDoubleDouble);
+    auto Ret = U.IEEE.convert(semPPCDoubleDoubleImpl, RM, losesInfo);
+    *this = APFloat(DoubleAPFloat(semPPCDoubleDouble, std::move(*this),
+                                  APFloat(semIEEEdouble)),
+                    ToSemantics);
+    return Ret;
+  }
+  if (usesLayout<DoubleAPFloat>(getSemantics()) &&
+      usesLayout<IEEEFloat>(ToSemantics)) {
+    auto Ret = getIEEE().convert(ToSemantics, RM, losesInfo);
+    *this = APFloat(std::move(getIEEE()), ToSemantics);
+    return Ret;
+  }
+  llvm_unreachable("Unexpected semantics");
+}
+
+APFloat APFloat::getAllOnesValue(unsigned BitWidth, bool isIEEE) {
+  if (isIEEE) {
+    switch (BitWidth) {
+    case 16:
+      return APFloat(semIEEEhalf, APInt::getAllOnesValue(BitWidth));
+    case 32:
+      return APFloat(semIEEEsingle, APInt::getAllOnesValue(BitWidth));
+    case 64:
+      return APFloat(semIEEEdouble, APInt::getAllOnesValue(BitWidth));
+    case 80:
+      return APFloat(semX87DoubleExtended, APInt::getAllOnesValue(BitWidth));
+    case 128:
+      return APFloat(semIEEEquad, APInt::getAllOnesValue(BitWidth));
+    default:
+      llvm_unreachable("Unknown floating bit width");
+    }
+  } else {
+    assert(BitWidth == 128);
+    return APFloat(semPPCDoubleDouble, APInt::getAllOnesValue(BitWidth));
+  }
+}
+
+void APFloat::print(raw_ostream &OS) const {
+  SmallVector<char, 16> Buffer;
+  toString(Buffer);
+  OS << Buffer << "\n";
+}
+
+void APFloat::dump() const { print(dbgs()); }
+
+} // End llvm namespace
--
cgit v1.1
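The DoubleAPFloat addition imported above follows the Linnainmaa doubled-precision scheme cited in the code: a value is a (head, tail) pair in which the tail carries the rounding error the head cannot represent, and addition recovers the head's rounding error exactly and folds it into the new tail. As a rough, self-contained illustration of that idea on plain IEEE doubles (the helper names TwoSum and ddAdd are invented for this sketch, and the special-value handling done by addWithSpecial is deliberately omitted):

#include <cstdio>
#include <utility>

// Knuth's TwoSum, an error-free transformation: s + e == a + b exactly,
// where s = fl(a + b) and e is the rounding error of that addition.
static std::pair<double, double> TwoSum(double a, double b) {
  double s = a + b;
  double bVirtual = s - a;
  double aVirtual = s - bVirtual;
  double bRoundoff = b - bVirtual;
  double aRoundoff = a - aVirtual;
  return {s, aRoundoff + bRoundoff};
}

// Heavily reduced double-double addition: add the heads, fold both tails and
// the recovered head error into a new tail, then renormalize into (head, tail).
// Production code such as DoubleAPFloat::addImpl must additionally order the
// operands and handle NaN, infinity and zero (compare addWithSpecial).
static std::pair<double, double> ddAdd(double aHi, double aLo, double bHi,
                                       double bLo) {
  auto [s, e] = TwoSum(aHi, bHi);
  e += aLo + bLo;
  return TwoSum(s, e);
}

int main() {
  auto [hi, lo] = ddAdd(1.0, 1e-30, 1e-20, 0.0);
  std::printf("head = %.17g, tail = %.17g\n", hi, lo);
  return 0;
}

The (head, tail) pair returned by ddAdd carries roughly twice the precision of a single double, which is the property the semPPCDoubleDouble format relies on.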