path: root/lib/msun/src
Commit message  (Author, Date, Files changed, Lines -/+)
...
* Use a better algorithm for reducing the error in __kernel_cos[f]().  (bde, 2005-10-26, 2 files, -51/+22)

  This supersedes the fix for the old algorithm in rev.1.8 of k_cosf.c.  I want this change mainly because it is an optimization: it helps make software cos[f](x) and sin[f](x) faster than the i387 hardware versions for small x.  It is also a simplification, and reduces the maximum relative error for cosf() and sinf() on machines like amd64 from about 0.87 ulps to about 0.80 ulps.  It was validated for cosf() and sinf() by exhaustive testing.  Exhaustive testing is not possible for cos() and sin(), but ucbtest reports a similar reduction for the worst case found by non-exhaustive testing.

  ucbtest's non-exhaustive testing seems to be good enough to find problems in algorithms, but not maximum relative errors when there are spikes.  E.g., short runs of it find only a 3 ulp error where the i387 hardware cos() has an error of about 2**40 ulps near pi/2.
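
  A sketch of the kind of exhaustive single-precision test referred to above (a hypothetical harness, not ucbtest or the msun test code): every one of the 2^32 float bit patterns is compared against double-precision cos(), and the worst error in float ulps is reported.  The ulp estimate here is deliberately simple.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        double max_err = 0.0;           /* worst error seen, in float ulps */
        uint32_t bits = 0;

        do {
            float x;

            memcpy(&x, &bits, sizeof(x));       /* next float bit pattern */
            if (isfinite(x)) {
                double want = cos((double)x);   /* reference value */
                double got = (double)cosf(x);
                /* approximate size of one float ulp at the reference value */
                double ulp = nextafterf((float)want, INFINITY) - (float)want;
                double err = fabs(got - want) / ulp;

                if (err > max_err)
                    max_err = err;
            }
        } while (++bits != 0);          /* wraps to 0 after 2^32 patterns */

        printf("max cosf() error: %.4f ulps\n", max_err);
        return (0);
    }
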
* More fixes for arg reduction near pi/2 on systems with broken assignment to floats (mainly i386's).  (bde, 2005-10-25, 1 file, -5/+8)

  All errors of more than 1 ulp for float precision trig functions were supposed to have been fixed; however, compiling with gcc -O2 uncovered 18250 more such errors for cosf(), with a maximum error of 1.409 ulps.

  Use essentially the same fix as in rev.1.8 of k_rem_pio2f.c (access a non-volatile variable as a volatile).  Here the -O1 case apparently worked because the variable is in a 2-element array and it takes -O2 to mess up such a variable by putting it in a register.

  The maximum error for cosf() on i386 with gcc -O2 is now 0.5467 ulps (it is still 0.5650 with gcc -O1).  This shows that -O2 still causes some extra precision, but the extra precision is now good.

  Extra precision is harmful mainly for implementing extra precision in software.  We want to represent x+y as w+r, where both "+" operations are in infinite precision and r is tiny compared with w.  There is a standard algorithm for this (Knuth (1981) 4.2.2 Theorem C), and fdlibm uses it routinely, but the algorithm requires w and r to have the same precision as x and y.  w is just x+y (calculated in the same finite precision as x and y), and r is a tiny correction term.  The i386 gcc bugs tend to give extra precision in w, and then using this extra precision in the calculation of r results in the correction mostly staying in w and being missing from r.  There still tends to be no problem if the result is a simple expression involving w and r -- modulo spills, w keeps its extra precision and r remains the right correction for this wrong w.  However, here we want to pass w and r to extern functions.  Extra precision is not retained in function args, so w gets fixed up, but the change to the tiny r is tinier, so r almost remains as a wrong correction for the right w.
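
  A minimal sketch of the exact-sum decomposition described above (hypothetical names, not the fdlibm code).  The volatile store plays the role the commit describes: it forces gcc on i386 to round w to working precision before r is computed, so r really is the rounding error of w.

    /* Decompose x + y into w + r exactly (Knuth (1981) 4.2.2 Theorem C). */
    static double
    split_sum(double x, double y, double *r)
    {
        volatile double w = x + y;      /* volatile store clips any extra precision */
        double t = w - x;

        *r = (x - (w - t)) + (y - t);   /* rounding error of the addition */
        return (w);
    }
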
* Moved the optimization for tiny x from __kernel_{cos,sin}[f](x) to {cos,sin}[f](x) so that x doesn't need to be reclassified in the "kernel" functions to determine if it is tiny (it still needs to be reclassified in the cosine case for other reasons that will go away).  (bde, 2005-10-24, 8 files, -22/+22)

  This optimization is quite large for exponentially distributed x, since x is tiny for almost half of the domain, but it is a pessimization for uniformly distributed x, since it takes a little time for all cases but rarely applies.  Arg reduction on exponentially distributed x rarely gives a tiny x unless the reduction is null, so it is best to only do the optimization if the initial x is tiny, which is what this commit arranges.  The immediate result is an average optimization of 1.4% relative to the previous version in a case that doesn't favour the optimization (double cos(x) on all float x) and a large pessimization for the relatively unimportant cases of lgamma[f][_r](x) on tiny, negative, exponentially distributed x.  The optimization should be recovered for lgamma*() as part of fixing lgamma*()'s low-quality arg reduction.

  Fixed various wrong constants for the cutoff for "tiny".  For cosine, the cutoff is when x**2/2! == {FLT or DBL}_EPSILON/2.  We round down to an integral power of 2 (and for cos() reduce the power by another 1) because the exact cutoff doesn't matter and would take more work to determine.  For sine, the exact cutoff is larger due to the ratio of terms being x**2/3! instead of x**2/2!, but we use the same cutoff as for cosine.  We now use a cutoff of 2**-27 for double precision and 2**-12 for single precision.  2**-27 was used in all cases but was misspelled as 2**27 in comments.  Wrong and sloppy cutoffs just cause missed optimizations (provided the rounding mode is to nearest -- other modes just aren't supported).
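
  A rough sketch of the early return for tiny x described above (a hypothetical wrapper, not the committed source).  The 2**-27 double-precision cutoff is the one named in the message; the cast to int is the usual fdlibm device intended to signal inexact for nonzero tiny x.

    #include <math.h>

    double
    sin_with_tiny_shortcut(double x)
    {
        if (fabs(x) < 0x1p-27) {        /* x**2/2! < DBL_EPSILON/2 */
            if ((int)x == 0)            /* intended to raise inexact when x != 0 */
                return (x);             /* sin(x) rounds to x here */
        }
        return (sin(x));                /* otherwise use the full routine */
    }
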
* Fixed range reduction for large multiples of pi/2 on systems with broken assignment to floats (e.g., i386 with gcc -O, but not amd64 or ia64; i386 with gcc -O0 worked accidentally).  (bde, 2005-10-11, 1 file, -0/+1)

  Use an unnamed volatile temporary variable to trick gcc -O into clipping extra precision on assignment.  It's surprising that only 1 place needed to be changed.

  For tanf() on i386 with gcc -O, the bug caused errors > 1 ulp with a density of 2.3% for args larger in magnitude than 128*pi/2, with a maximum error of 1.624 ulps.

  After this fix, exhaustive testing shows that range reduction for floats works as intended, assuming that it is within a factor of about 2^16 of working as intended for doubles.  It provides >= 8 extra bits of precision for all ranges.  On i386:

      range                          max error in double/single ulps   extra precision
      -----                          --------------------------------  ---------------
      0 to 3*pi/4                    0x000d3132 / 0.0016                9+ bits
      3*pi/4 to 128*pi/2             0x00160445 / 0.0027                8+
      128*pi/2 to +Inf               0x00000030 / 0.00000009            23+
      128*pi/2 up, -O0 before fix    0x00000030 / 0.00000009            23+
      128*pi/2 up, -O1 before fix    0x10000000 / 0.5                   1

  The 23+ bits of extra precision for large multiples corresponds to almost perfect reduction to a pair of floats (24 extra would be perfect).

  After this fix, the maximum relative error (relative to the corresponding fdlibm double precision function) is < 1 ulp for all basic trig functions on all 2^32 float args on all machines tested:

             amd64    ia64     i386-O0   i386-O1
             ------   ------   -------   -------
      cosf:  0.8681   0.8681   0.7927    0.5650
      sinf:  0.8733   0.8610   0.7849    0.5651
      tanf:  0.9708   0.9329   0.9329    0.7035
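
  A hedged illustration of the "volatile temporary" device mentioned above (the helper name is made up).  Storing through a volatile float forces gcc on i386 to spill the i387's extra-precision result to memory, so later uses see a genuine 24-bit float.

    static inline float
    clip_to_float(float x)
    {
        volatile float t = x;   /* the store rounds to true float precision */

        return (t);
    }
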
* Fixed range reduction near (but not very near) medium-sized multiples of pi/2 (1 line) and expanded a comment about related magic (many lines).  (bde, 2005-10-10, 1 file, -3/+18)

  The bug was essentially the same as for the +-pi/2 case (a mistranslated mask), but was smaller, so it only significantly affected multiples starting near +-13*pi/2.  At least on amd64, for cosf() on all 2^32 float args, the bug caused 128 errors of >= 1 ulp, with a maximum error of 1.2393 ulps.
* Fix numerous errors of >= 1 ulp for cosf(x) and sinf(x) (1 line) and add a comment about related magic (many lines).  (bde, 2005-10-09, 1 file, -1/+12)

  __kernel_cos[f]() needs a trick to reduce the error to below 1 ulp when |x| >= 0.3 for the range-reduced x.  Modulo other bugs, naive code that doesn't use the trick would have an error of >= 1 ulp in about 0.00006% of cases when |x| >= 0.3 for the unreduced x, with a maximum relative error of about 1.03 ulps.

  Mistranslation of the trick from the double precision case resulted in errors in about 0.2% of cases, with a maximum relative error of about 1.3 ulps.  The mistranslation involved not doing implicit masking of the 32-bit float word corresponding to implicit masking of the lower 32-bit double word by clearing it.

  sinf() uses __kernel_cosf() for half of all cases, so its errors from this bug are similar.  tanf() is not affected.

  The error bounds in the above and in my other recent commit messages are for amd64.  Extra precision for floats on i386's accidentally masks this bug, but only if k_cosf.c is compiled with -O.  Although the extra precision helps here, this is accidental and depends on longstanding gcc precision bugs (not clipping extra precision on assignment...), and the gcc bugs are mostly avoided by compiling without -O.  I now develop libm mainly on amd64 systems to simplify error detection and debugging.
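
  A sketch of the masking step the message alludes to (hypothetical helper, not the committed code; it assumes integer and floating-point data share the same endianness).  Clearing the low 32 bits of a double yields a short value that can take part in exact additions and subtractions; the float version of the trick has to clear low bits of its single 32-bit word instead, and forgetting that masking was the mistranslation being fixed.

    #include <stdint.h>
    #include <string.h>

    static double
    clear_low_word(double x)
    {
        uint64_t bits;

        memcpy(&bits, &x, sizeof(bits));
        bits &= 0xffffffff00000000ULL;  /* drop the low 32 mantissa bits */
        memcpy(&x, &bits, sizeof(x));
        return (x);
    }
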
* Oops, the last-minute optimization in rev.1.8 wasn't a good idea.  (bde, 2005-10-09, 1 file, -7/+18)

  The 17+17+24 bit pi/2 must only be used when subtraction of the first 2 terms in it from the arg is exact.  This happens iff the arg in bits is one of the 2**17[-1] values on each side of (float)(pi/2).

  Revert to the algorithm in rev.1.7 and only fix its threshold for using the 3-term pi/2.  Use the threshold that maximizes the number of values for which the 3-term pi/2 is used, subject to not changing the algorithm for comparing with the threshold.  The 3-term pi/2 ends up being used for about half of its usable range (about 64K values on each side).
* Fixed syntax error (a missing brace) in previous commit.  (bde, 2005-10-08, 1 file, -0/+1)
* Fixed range reduction near (but not very near) +-pi/2.  (bde, 2005-10-08, 1 file, -19/+7)

  A bug caused a maximum error of 2.905 ulps for cosf(), but the algorithm for cosf() is good for < 1 ulps and happens to give perfect rounding (< 0.5 ulps) near +-pi/2 except for the bug.  The extra relative errors for tanf() were similar (slightly larger).  The bug didn't affect sinf() since sinf'(+-pi/2) is 0.

  For range reduction in ~[-3pi/4, -pi/4] and ~[pi/4, 3pi/4] we must subtract +-pi/2, and the only complication is that this must be done in extra precision.  We have handy 17+24-bit and 17+17+24-bit approximations to pi/2.  If we always used the former then we would lose up to 24 bits of accuracy due to cancelation of leading bits, but we need to keep at least 24 bits plus a guard digit or 2, and should keep as many guard bits as efficiency permits.  So we used the less-precise pi/2 not very near +-pi/2 and switched to using the more-precise pi/2 very near +-pi/2.  However, we got the threshold for the switch wrong by allowing 19 bits to cancel, so we ended up with only 21 or 22 bits of accuracy in some cases, which is even worse than naively subtracting pi/2 would have done.

  Exhaustive checking shows that allowing only 17 bits to cancel (min. accuracy ~24 bits) is sufficient to reduce the maximum error for cosf() near +-pi/2 to 0.726 ulps, but allowing only 6 bits to cancel (min. accuracy ~35 bits) happens to give perfect rounding for cosf() at little extra cost, so we prefer that.

  We actually (in effect) allow 0 bits to cancel and always use the 17+17+24-bit pi/2 (min. accuracy ~41 bits).  This is simpler and probably always more efficient too.  Classifying args to avoid using this pi/2 when it is not needed takes several extra integer operations and a branch, but just using it takes only 1 FP operation.

  The patch also fixes misspelling of 17 as 24 in many comments.

  For the double-precision version, the magic numbers include 33+53 bits for the less-precise pi/2 and (53-32-1 = 20) bits being allowed to cancel, so there are ~33-20 = 13 guard bits.  This is sufficient except probably for perfect rounding.  The more-precise pi/2 has 33+33+53 bits and we still waste time classifying args to avoid using it.

  The bug is apparently from mistranslation of the magic 32 in 53-32-1.  The number of bits allowed to cancel is not critical and we use 32 for double precision because it allows efficient classification using a 32-bit comparison.  For float precision, we must use an explicit mask, and there are fewer bits so there is less margin for error in their allocation.  The 32 got reduced to 4 but should have been reduced almost in proportion to the reduction of mantissa bits.
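
  A minimal sketch of subtracting pi/2 in pieces, as discussed above.  It is illustrative only: it uses a plain 24-bit hi / remaining-bits lo split built at runtime rather than the 17+24-bit or 17+17+24-bit splits in the msun sources, and the function name is made up.  For x in roughly [pi/4, 3pi/4], x - hi is exact (Sterbenz), and subtracting lo in double restores the bits that would otherwise be lost to cancellation.

    static double
    sub_pio2(float x)
    {
        static const double pio2 = 1.57079632679489661923; /* pi/2 */
        float hi = (float)pio2;             /* leading 24 bits of pi/2 */
        double lo = pio2 - (double)hi;      /* the remaining bits */

        return ((double)(x - hi) - lo);     /* x - hi is exact near pi/2 */
    }
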
* Revert the last change; the conversion from long double to double can raise unwanted underflow exceptions.  (stefanf, 2005-04-28, 3 files, -12/+12)

  Pointed out by: das
* Use double additions to raise the inexact exception to work around problems with long double addition on sparc64.  (stefanf, 2005-04-22, 3 files, -12/+12)
* Fix raising the inexact exception (FE_INEXACT) if the result differs from the argument.  (stefanf, 2005-04-22, 3 files, -18/+34)

  Noticed by: das
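
  A hedged sketch of the "raise inexact through an otherwise useless addition" idiom that fdlibm-style rounding functions rely on (the function and constant names here are illustrative).  When the rounded result differs from the argument, adding the argument to a huge constant is inexact, and using the sum in a comparison keeps the compiler from discarding it.

    static const double huge = 1.0e300;

    static double
    return_with_inexact(double x, double rounded)
    {
        if (rounded != x) {
            /* Inexact addition sets FE_INEXACT; the comparison keeps it live. */
            if (huge + x > 0.0)
                return (rounded);
        }
        return (rounded);
    }
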
* Implement truncl() based on floorl().  (das, 2005-04-16, 2 files, -1/+63)
* Add roundl(), lroundl(), and llroundl().  (das, 2005-04-08, 4 files, -1/+78)
* These files should include s_lround.c instead of s_lrint.c.  (das, 2005-04-08, 3 files, -3/+3)

  This only matters for efficiency, not for correctness.
* Fix a (coincidentally harmless) bug.  (das, 2005-04-08, 1 file, -5/+4)
* Fix a long-standing bug in k_rem_pio2(), which led to large errors when tanf() was called with big arguments close to multiples of pi/2.  (das, 2005-04-05, 1 file, -1/+1)

  Reported by: ucbtest via bde
* Implement exp2() and exp2f().  (das, 2005-04-05, 3 files, -0/+532)
* Implement and document remquo() and remquof().  (das, 2005-03-25, 3 files, -0/+275)
* Fix the double rounding problem with subnormals, and remove the XXX comments, which no longer apply.  (das, 2005-03-18, 2 files, -16/+36)
* Add missing prototypes for fma() and fmaf(), and remove an inaccurate comment.  (das, 2005-03-18, 1 file, -1/+2)
* Replace strong references with weak references.  (das, 2005-03-07, 4 files, -7/+7)

  There's no particularly good reason to do this, except that __strong_reference does type checking, whereas __weak_reference does not.  On Alpha, the compiler won't accept a 'long double' parameter in place of a 'double' parameter even though the two types are identical.
* Remove an obsolete sentence from a comment.  (stefanf, 2005-03-07, 1 file, -2/+1)
* - If z is 0, one of x or y is 0, and the other is infinite, raise an invalid exception and return a NaN.  (das, 2005-03-07, 1 file, -2/+18)
  - If a long double has 113 bits of precision, implement fma in terms of simple long double arithmetic instead of complicated double arithmetic.
  - If a long double is the same as a double, alias fma as fmal.
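
  A hedged sketch of the first bullet's special case (an illustrative check, not the committed fma()): when z is zero and x*y is 0*Inf, evaluating x*y directly raises the invalid exception and produces the NaN that should be returned.

    #include <math.h>

    /* Early special-case check, as might appear near the top of an fma(). */
    static int
    fma_zero_times_inf(double x, double y, double z, double *result)
    {
        if (z == 0.0 &&
            ((x == 0.0 && isinf(y)) || (y == 0.0 && isinf(x)))) {
            *result = x * y;        /* 0 * Inf: raises invalid, yields NaN */
            return (1);
        }
        return (0);
    }
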
* Remove ldexp and ldexpf.  (das, 2005-03-07, 2 files, -59/+0)

  The former is in libc, and the latter is identical to scalbnf, which is now aliased as ldexpf.  Note that the old implementations made the mistake of setting errno and were the only libm routines to do so.
* - Define FP_FAST_FMA for sparc64, since fma() is now implemented using sparc64's 128-bit long doubles.  (das, 2005-03-07, 1 file, -4/+12)
  - Define FP_FAST_FMAL for ia64.
  - Prototypes for fmal, frexpl, ldexpl, nextafterl, nexttoward{,f,l}, scalblnl, and scalbnl.
* Alias scalbn as ldexpl and scalbnl on platforms where long double is the same as double.  (das, 2005-03-07, 1 file, -0/+8)
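
  A sketch of what such aliasing typically looks like in msun, using the __weak_reference() macro from FreeBSD's <sys/cdefs.h>.  The guard condition shown is an assumption, and the lines belong in the file that defines scalbn().

    #include <sys/cdefs.h>
    #include <float.h>

    #if (LDBL_MANT_DIG == DBL_MANT_DIG)     /* long double is the same as double */
    __weak_reference(scalbn, ldexpl);       /* ldexpl becomes a weak alias for scalbn */
    __weak_reference(scalbn, scalbnl);
    #endif
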
* - Implement scalblnl.  (das, 2005-03-07, 1 file, -2/+34)
  - In scalbln and scalblnf, check the bounds of the second argument.  This is probably unnecessary, but strictly speaking, we should report an error if someone tries to compute scalbln(x, INT_MAX + 1ll).
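
  A rough sketch of the bounds check described in the second bullet (hypothetical function name): the long exponent adjustment is clamped into int range before being handed to scalbn(), so an out-of-range value like INT_MAX + 1 still behaves as an over- or underflowing scale rather than being silently truncated.

    #include <limits.h>
    #include <math.h>

    double
    scalbln_sketch(double x, long n)
    {
        int in;

        if (n > INT_MAX)
            in = INT_MAX;           /* large enough to overflow anyway */
        else if (n < INT_MIN)
            in = INT_MIN;           /* small enough to underflow anyway */
        else
            in = (int)n;
        return (scalbn(x, in));
    }
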
* Implement nexttowardf.  (das, 2005-03-07, 1 file, -0/+60)

  This is used on both platforms with 11-bit exponents and platforms with 15-bit exponents for long doubles.
* Implement nexttoward and nextafterl; the latter is also known as nexttowardl.  (das, 2005-03-07, 2 files, -0/+155)

  These are not needed on machines where long doubles look like IEEE-754 doubles, so the implementation only supports the usual long double formats with 15-bit exponents.  Anything bizarre, such as machines where floating-point and integer data have different endianness, will cause problems.  This is the case with big endian ia64 according to libc/ia64/_fpmath.h.  Please contact me if you managed to get a machine running this way.
* - Try harder to trick gcc into not optimizing away statements that are intended to raise underflow and inexact exceptions.  (das, 2005-03-07, 2 files, -8/+19)
  - On systems where long double is the same as double, nextafter should be aliased as nexttoward, nexttowardl, and nextafterl.
* Implement frexpl.  (das, 2005-03-07, 1 file, -0/+62)
* Alias frexp as frexpl on platforms where a long double is the same as a double.  (das, 2005-03-07, 1 file, -0/+7)
* Implement fmal.  (das, 2005-03-07, 1 file, -0/+170)
* Add scalbnl, also known as ldexpl.  (das, 2005-03-07, 1 file, -0/+71)
* Alias scalbnf as ldexpf.  (das, 2005-03-07, 1 file, -0/+4)

  The two are identical in binary floating-point formats.
* Revert rev 1.8, which causes small (e.g. 2 ulp) errors for some inputs.  (das, 2005-02-24, 1 file, -8/+13)

  The trouble with replacing two floats with a double is that the latter has 6 extra bits of precision, which actually hurts accuracy in many cases.  All of the constants are optimal when float arithmetic is used, and would need to be recomputed to do this right.

  Noticed by: bde (ucbtest)
* Use double arithmetic instead of simulating it with two floats.  (das, 2005-02-21, 1 file, -13/+8)

  This results in a performance gain on the order of 10% for amd64 (sledge), ia64 (pluto1), i386+SSE (Pentium 4), and sparc64 (panther), and a negligible improvement for i386 without SSE.  (The i386 port still uses the hardware instruction, though.)
* Fix a small scripting snafu in the previous revision.  (das, 2005-02-04, 1 file, -2/+2)
* Reduce diffs against vendor source (Sun fdlibm 5.3).  (das, 2005-02-04, 28 files, -428/+476)
* Remove wrappers and other cruft intended to support SVID, mistakes in C90, and other arcana.  (das, 2005-02-04, 57 files, -3225/+60)

  Most of these features were never fully supported or enabled by default.

  Ok: bde, stefanf
* Update comment to reflect the code change in the previous revision.  (das, 2005-01-23, 2 files, -2/+2)

  Noticed by: ceri
* If x == y, return y, not x.  (das, 2005-01-23, 2 files, -2/+2)

  C99 (though not IEEE 754) requires that nextafter(+0.0, -0.0) returns -0.0 and nextafter(-0.0, +0.0) returns +0.0.
* Add fma() and fmaf(), which implement a fused multiply-add operation.  (das, 2005-01-22, 3 files, -0/+230)
* Most libm routines depend on the rounding mode and/or set exception flags, so they are not pure.  Remove the __pure2 annotation from them.  (das, 2005-01-15, 1 file, -31/+29)

  I believe that the following routines and their float and long double counterparts are the only ones here that can be __pure2:

      copysign  is*  fabs  finite  fmax  fmin  fpclassify  ilogb  nan  signbit

  When gcc supports FENV_ACCESS, perhaps there will be a new annotation that allows the other functions to be considered pure when FENV_ACCESS is off.

  Discussed with: bde
* Braino.  Revert rev 1.50.  (das, 2005-01-15, 1 file, -0/+7)

  Pointy hat to: das
* Set math_errhandling to MATH_ERREXCEPT.  (das, 2005-01-14, 1 file, -1/+1)

  Now that we have fenv.h, we basically support this, subject to gcc's lack of FENV_ACCESS support.  In any case, the previous setting of math_errhandling to 0 is not allowed by POSIX.
* Remove some #if 0'd code.  (das, 2005-01-14, 1 file, -7/+0)
* The isnormal() in rev 1.2 should have been isfinite() so subnormals round correctly.  (das, 2005-01-13, 2 files, -2/+2)

  Noticed by: stefanf
* Implement and document ceill().  (stefanf, 2005-01-13, 2 files, -1/+98)