exp2(i/TBLSIZE) * p(z) instead of only for the final multiplication
and addition. This fixes the code to match the comment that the maximum
error is 0.5010 ulps (except on machines that evaluate float expressions
in extra precision, e.g., i386's, where the evaluation was already
in extra precision).
Fix and expand the comment about use of double precision.
The relative roundoff error from evaluating p(z) in non-extra precision
was about 16 times larger than in exp2() because the interval length
is 16 times smaller. Its maximum was at least P1 * (1.0 ulps) *
max(|z|) ~= log(2) * 1.0 * 1/32 ~= 0.0217 ulps (1.0 ulps from the
addition in (1 + P1*z) with a cancellation error when z ~= -1/32). The
actual final maximum was 0.5313 ulps, of which 0.0303 ulps must have
come from the additional roundoff error in p(z). I can't explain why
the additional roundoff error was almost 3/2 times larger than the rough
estimate.
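
A minimal sketch of the evaluation strategy described above, with hypothetical names (tv for the tabulated exp2(i/TBLSIZE); the polynomial is truncated here to its leading coefficient P1 = log(2)). The point is only that the product is formed entirely in double, so the final conversion is the single float rounding:

    /* Sketch only, not the committed source. */
    static const double P1 = 6.9314718055994531e-1;    /* log(2) */

    static float
    exp2f_tail(double tv, double z)     /* tv = exp2(i/TBLSIZE) */
    {
        /* Before: (float)tv * p_float(z) rounded the product again. */
        return ((float)(tv * (1.0 + P1 * z)));  /* rounds once, to float */
    }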
precision. The new polynomial has degree 4 instead of 10, and a maximum
error of 2**-30.04 ulps instead of 2**-33.15. This doesn't affect the
final error significantly; the maximum error was and is about 0.5015
ulps on i386 -O1, and the number of cases with an error of > 0.5 ulps
is increased from 13851 to 14407.
Note that the error is only this close to 0.5 ulps due to excessive
extra precision caused by compiler bugs on i386. The extra precision
could be obtained intentionally, and is useful for keeping the error
of the hyperbolic float functions below 1 ulp, since these functions
are implemented using expm1f. My recent change for scaling by 2**k
had the unintentional side effect of retaining extra precision for
longer, so callers of expm1f see errors of more like 0.0015 ulps than
0.5015 ulps, and for the hyperbolic functions this reduces the maximum
error from nearly 2 ulps to about 0.75 ulps.
This is about 10% faster on i386 (A64). expm1* is still very slow,
but now the float version is actually significantly faster. The
algorithm is very sophisticated but not very good except on machines
with fast division.
it must have the same sign as the arg in all rounding modes, but it was
always +0.0L.
denormals, and logb() remained broken after 1.10 because the fix for
logbf() was incompletely translated.
Convert to __FBSDID().
exponent bits of the reduced result, construct 2**k (hopefully in
parallel with the construction of the reduced result) and multiply by
it. This tends to be much faster if the construction of 2**k is
actually in parallel, and might be faster even with no parallelism
since adjustment of the exponent requires a read-modify-write at an
unfortunate time for pipelines.
In some cases involving exp2* on amd64 (A64), this change saves about
40 cycles or 30%. I think it is inherently only about 12 cycles faster
in these cases and the rest of the speedup is from partly-accidentally
avoiding compiler pessimizations (the construction of 2**k is now
manually scheduled for good results, and -O2 doesn't always mess this
up). In most cases on amd64 (A64) and i386 (A64) the speedup is about
20 cycles. The worst case that I found is expf on ia64 where this
change is a pessimization of about 10 cycles or 5%. The manual
scheduling for plain exp[f] is harder and not as tuned.
Details specific to expm1*:
- the saving is closer to 12 cycles than to 40 for expm1* on i386 (A64).
For some reason it is much larger for negative args.
- also convert to __FBSDID().
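
A sketch of the scaling technique, assuming k stays in the normal exponent range (names are illustrative, not the committed source):

    #include <stdint.h>
    #include <string.h>

    /*
     * Build 2**k directly from k (IEEE double exponent bias 1023) and
     * scale by a multiplication, instead of a dependent read-modify-write
     * of the exponent bits of the reduced result r.  Assumes
     * -1022 <= k <= 1023 so that 2**k is normal.
     */
    static double
    scale2k(double r, int k)
    {
        uint64_t bits = (uint64_t)(k + 1023) << 52;
        double twopk;

        memcpy(&twopk, &bits, sizeof(twopk));   /* twopk = 2**k */
        return (r * twopk);     /* may overlap the computation of r */
    }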
exponent bits of the reduced result, construct 2**k (hopefully in
parallel with the construction of the reduced result) and multiply by
it. This tends to be much faster if the construction of 2**k is
actually in parallel, and might be faster even with no parallelism
since adjustment of the exponent requires a read-modify-write at an
unfortunate time for pipelines.
In some cases involving exp2* on amd64 (A64), this change saves about
40 cycles or 30%. I think it is inherently only about 12 cycles faster
in these cases and the rest of the speedup is from partly-accidentally
avoiding compiler pessimizations (the construction of 2**k is now
manually scheduled for good results, and -O2 doesn't always mess this
up). In most cases on amd64 (A64) and i386 (A64) the speedup is about
20 cycles. The worst case that I found is expf on ia64 where this
change is a pessimization of about 10 cycles or 5%. The manual
scheduling for plain exp[f] is harder and not as tuned.
The change to ld128/s_exp2l.c has not been tested.
that is specialized for float precision. The new polynomial has degree
5 instead of 11, and a maximum error of 2**-27.74 ulps instead
of 2**-30.64. This doesn't affect the final error significantly; the
maximum error was and is about 0.9101 ulps on amd64 -O1 and the number
of cases with an error of > 0.5 ulps is actually reduced by epsilon
despite the larger error in the polynomial.
This is about 15% faster on amd64 (A64), i386 (A64) and ia64. The asm
version is still used instead of this on i386 since it is faster and
more accurate.
float precision. This fixes some double rounding problems for
subnormals and simplifies things a bit.
cases which are used mainly by regression tests.
As usual, the cutoff for tiny args was not correctly translated to
float precision. It was 2**-54 but 2**-24 works. It must be about
2**-precision, since the error from approximating log(1+x) by x is
about the same as |x|. Exhaustive testing shows that 2**-24 gives
perfect rounding in round-to-nearest mode.
Similarly for the cutoff for being small, except this is not used by
so many other functions. It was 2**-29 but 2**-15 works. It must be
a bit smaller than sqrt(2**-precision), since the error from
approximating log(1+x) by x-x*x/2 is about the same as x*x. Exhaustive
testing shows that 2**-15 gives a maximum error of 0.5052 ulps in
round-to-nearest mode. The algorithm for the general case is only good
for 0.8388 ulps, so this is sufficient (but it loses slightly on i386 --
then extra precision gives 0.5032 ulps for the general case).
While investigating this, I noticed that optimizing the usual case by
falling into a middle case involving a simple polynomial evaluation
(return x-x*x/2 instead of x here) is not such a good idea since it
gives an enormous pessimization of tinier args on machines for which
denormals are slow. Float x*x/2 is denormal when |x| ~< 2**-64 and
x*x/2 is evaluated in float precision, so it can easily be denormal
for normal x. This is even more interesting for general polynomial
evaluations. Multiplying out large powers of x is normally a good
optimization since it reduces dependencies, but it creates denormals
starting with quite large x.
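
In code, the two cutoffs look something like this (a structural sketch only; the real s_log1pf.c tests the bits of x and raises inexact explicitly for tiny nonzero x):

    #include <math.h>

    static float
    log1pf_sketch(float x)
    {
        if (fabsf(x) < 0x1p-24F)        /* tiny: log(1+x) ~= x */
            return (x);
        if (fabsf(x) < 0x1p-15F)        /* small: log(1+x) ~= x - x*x/2 */
            return (x - x * x * 0.5F);
        return (log1pf(x));             /* general-case algorithm */
    }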
forget to translate "float" to "double".
ucbtest didn't detect the bug, but exhaustive testing of the float
case relative to the double case eventually did. The bug only affects
args x with |x| ~> 2**19*(pi/2) on non-i386 (i386 is broken in a
different way for large args).
it should never have existed and it has not been used for many years
(floats are reduced faster using doubles). All relevant changes (just
the workaround for broken assignment) have been merged to the double
version.
there is a problem with non-floats (when i386 defaults to extra
precision). This essentially restores yesterday's behaviour for doubles
on i386 (since generic rint() isn't used and everywhere else assumed
working assignment), but for arches that use the generic rint() it
finishes restoring some of 1995's behaviour (don't waste time doing
unnecessary store/load).
variable hack for exp2f() only.
The volatile variable had a surprisingly large cost for exp2f() -- 19
cycles or 15% on i386 in the worst case observed. This is only partly
explained by there being several references to the variable, only one
of which benefited from it being volatile. Arches that have working
assignment are likely to benefit even more from not having any volatile
variable.
exp2() now has a chance of working with extra precision on i386.
exp2() has even more references to the variable, so it would have been
pessimized more by simply declaring the variable as volatile. Even
the temporary volatile variable for STRICT_ASSIGN costs 5-10% on i386
(A64), so I will change STRICT_ASSIGN() to do an ordinary assignment
until i386 defaults to extra precision.
instead of a volatile cast hack for the float version only. The cast
hack broke with gcc-4, but this was harmless since the float version
hasn't been used for a few years. Merge from the float version so
that the double version has a chance of working on i386 with extra
precision.
See k_rem_pio2f.c rev.1.8 for the original hack.
Convert to __FBSDID().
hack for log1pf() only. The cast hack broke with gcc-4, resulting in
~1 million errors of more than 1 ulp, with a maximum error of ~1.5 ulps.
Now the maximum error for log1pf() on i386 is 0.5034 ulps again (this
depends on extra precision), and log1p() has a chance of working with
extra precision.
See s_log1pf.c 1.8 for the original hack. (It claims only 62343 large
errors).
Convert to __FBSDID(). Another thing broken with gcc-4 is the static
const hack used for rcsids.
around assignments not working for gcc on i386. Now volatile hacks
for rint() and rintf() don't needlessly pessimize so many arches
and the remaining pessimizations (for arm and powerpc) can be avoided
centrally.
This cleans up after s_rint.c 1.3 and 1.13 and s_rintf.c 1.3 and 1.9:
- s_rint.c 1.13 broke 1.3 by only using a volatile cast hack in 1 place
when it was needed in 2 places, and the volatile cast hack stopped
working with gcc-4. These bugs only affected correctness tests on
i386 since i386 normally uses asm rint() and doesn't support the
extra precision mode that would break assignments of doubles.
- s_rintf.c 1.9 improved(?) on 1.3 by using a volatile variable hack
instead of an extra-precision variable hack, but it declared 2
variables as volatile when only 1 variable needed to be volatile.
This only affected speed tests on i386 since i386 uses asm rintf().
exception when they're supposed to. Previously, gcc -O2 was optimizing
away the statement that generated it.
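
The usual shape of such a fix, as a sketch (not necessarily the committed one), is to route the underflowing operation through volatile so the optimizer cannot discard it:

    static volatile float tiny = 0x1p-100F, vf;

    static void
    raise_underflow(void)
    {
        vf = tiny * tiny;   /* 2**-200 underflows to 0 in float,
                               raising underflow and inexact */
    }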
long doubles (i386, amd64, ia64) and one for machines with 128-bit
long doubles (sparc64). Other platforms use the double version.
I've only done runtime testing on i386.
Thanks to bde@ for helpful discussions and bugfixes.
assignments and casts don't clip extra precision, if any. The
implementation is to assign to a temporary volatile variable and read
the result back to assign to the original lvalue.
lib/msun currently has 2 different hard-coded hacks to avoid the problem
in just a few places and needs it in a few more places. One variant
uses volatile for the original lvalue. This works but is slower than
necessary. Another temporarily casts the lvalue to volatile. This
broke with gcc-4.2.1 or earlier (gcc now stores to the lvalue but
doesn't load from it).
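
A minimal sketch of the macro as described (the committed definition in math_private.h may differ in detail):

    /*
     * Force a genuine assignment: store through a volatile temporary and
     * load the result back, clipping any extra (e.g., i387 80-bit)
     * precision.
     */
    #define STRICT_ASSIGN(type, lval, rval) do {    \
        volatile type __v = (rval);                 \
        (lval) = __v;                               \
    } while (0)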
Submitted by: bde (although these aren't all of his changes)
Thanks to bde@ for feedback and testing of rintl().
that would overflow.
- Style fixes and improved handling of NaNs suggested by bde.
bde is going to commit logl() Real Soon Now.
I'm just trying to slow him down with merge conflicts.
Noticed by: bde
committed yet, but the declaration leaked in when I added nan() and
friends.
Reported by: pav
my original implementation made both use the same code. Unfortunately,
this meant libm depended on a vendor header at compile time and previously-
unexposed vendor bits in libc at runtime.
Hence, I just wrote my own version of the relevant vendor routine. As it
turns out, mine has a factor of 8 fewer lines of code, and is a bit more
readable anyway. The strtod() and *scanf() routines still use vendor code.
Reviewed by: bde
someone thought it would be a good idea to copy z_abs() to libm in 1994.
However, it's never been declared or documented anywhere, and I'm
reasonably confident that nobody uses it.
Discussed with: bde, deischen, kan
adds two new directories in msun: ld80 and ld128. These are for
long double functions specific to the 80-bit long double format
used on x86-derived architectures, and the 128-bit format used on
sparc64, respectively.
Rotting in an old src tree since: March 2005
hundreds of similar aliasing bugs, but all except this one seem to have
been fixed by Cygnus and/or NetBSD before the modified version of fdlibm
was imported into FreeBSD in 1994.
PR: standards/113147
Submitted by: Steve Kargl <sgk@troutmask.apl.washington.edu>
involving moving the check for x == 0). The savings in cycles are
smaller for cbrtf() than for cbrt(), and positive in all measured cases
with gcc-3.4.4, but still very machine/compiler-dependent.
that ints have >= 31 value bits elsewhere. s/int/int32_t/ seems to
have been done too globally for all other files in msun/src before
msun/ was imported into FreeBSD.
Minor fixes in comments.
e_lgamma_r.c:
Describe special cases in more detail:
- exception for lgamma(0) and lgamma(neg.integer)
- lgamma(-Inf) = Inf. This is wrong but is required by C99 Annex F. I
hope to change this.
seem to have assembly versions for all architectures, but it can't
hurt to fix it.
In e_log.c, there was just an off-by-1 (1 ulp) error in the comment
about the threshold. The precision of the threshold is unimportant,
but the magic numbers in the code are easier to understand when the
threshold is described precisely.
In e_logf.c, mistranslation of the magic numbers gave an off-by-1
(1 * 16 ulps) error in the intended negative bound for the threshold
and an off-by-7 (7 * 16 ulps) error in the intended positive bound for
the threshold, and the intended bounds were not translated from the
double precision bounds so they were unnecessarily small by a factor
of about 2048.
The optimization of using the simple Taylor approximation for args
near a power of 2 is dubious since it only applies to a relatively
small proportion of args, but if it is done then doing it 2048 times
as often _may_ be more efficient. (My benchmarks show unexplained
dependencies on the data that increase with further optimizations
in this area.)
2**-28 as a side effect, by merging with the float precision version
of tanh() and the double precision version of sinh().
For tiny x, tanh(x) ~= x, and we used the expression x*(one+x) to
return this value (x) and set the inexact flag iff x != 0. This
doesn't work on ia64 since gcc -O does the dubious optimization
x*(one+x) = x+x*x so as to use fma, so the sign of -0.0 was lost.
Instead, handle tiny x in the same way as sinh(), although this is imperfect:
- return x directly and set the inexact flag in a less efficient way.
- increase the threshold for non-tinyness from 2**-55 to 2**-28 so that
many more cases are optimized than are pessimized.
Updated some comments and fixed bugs in others (ranges for half-open
intervals mostly had the open end backwards, and there were nearby style
bugs).
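
A sketch of the resulting tiny-x handling in the fdlibm idiom (illustrative only; huge + x > 1.0 performs a real addition, so it raises inexact exactly when x != 0 and cannot lose the sign of -0.0 the way x*(one+x) could under fma contraction):

    #include <math.h>

    static const volatile double huge = 1.0e300;

    static double
    tanh_sketch(double x)
    {
        if (fabs(x) < 0x1p-28) {        /* |x| < 2**-28: tanh(x) ~= x */
            if (huge + x > 1.0)         /* inexact iff x != 0 */
                return (x);             /* preserves -0.0 */
        }
        return (tanh(x));               /* general case */
    }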
to scalbf(), but ldexpf() cannot be implemented in that way since the
types of the second parameter differ. ldexpf() can be implemented as
a weak or strong reference to scalbnf() (*) but that was already done
long before rev.1.10 was committed. The old implementation uses a
reference, so rev.1.10 had no effect on applications. The C files for
the scalb() family are not used for amd64 or i386, so rev.1.10 had even
less effect for these arches.
(*) scalbnf() raises the radix to the given exponent, while ldexpf()
raises 2 to the given exponent. Thus the functions are equivalent
except possibly for their error handling iff the radix is 2. Standards
more or less require identical error handling. Under FreeBSD, the
functions are equivalent except for more details being missing in
scalbnf()'s man page.
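
The reference mentioned in (*) looks like this in FreeBSD's idiom (a sketch: __weak_reference() comes from <sys/cdefs.h>, and the scalbnf() body here is a stand-in, not msun's implementation):

    #include <sys/cdefs.h>
    #include <math.h>

    float
    scalbnf(float x, int n)
    {
        return ((float)scalbn((double)x, n));   /* stand-in body */
    }
    __weak_reference(scalbnf, ldexpf);  /* ldexpf() aliases scalbnf() */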
commit broke the 2**24 cases where |x| > DBL_MAX/2. There are exponent
range problems not just for denormals (underflow) but for large values
(overflow). Doubles have more than enough exponent range to avoid the
problems, but I forgot to convert enough terms to double, so there was
an x+x term which was sometimes evaluated in float precision.
Unfortunately, this is a pessimization with some combinations of systems
and compilers (it makes no difference on Athlon XP's, but on Athlon64's
it gives a 5% pessimization with gcc-3.4 but not with gcc-3.3).
Explain the problem better in comments.
algorithm for the second step significantly to also get a perfectly
rounded result in round-to-nearest mode. The resulting optimization
is about 25% on Athlon64's and 30% on Athlon XP's (about 25 cycles
out of 100 on the former).
Using extra precision, we don't need to do anything special to avoid
large rounding errors in the third step (Newton's method), so we can
regroup terms to avoid a division, increase clarity, and increase
opportunities for parallelism. Rearrangement for parallelism loses
the increase in clarity. We end up with the same number of operations
but with a division reduced to a multiplication.
Using specifically double precision, there is enough extra precision
for the third step to give enough precision for perfect rounding to
float precision provided the previous steps are accurate to 16 bits.
(They were accurate to 12 bits, which was almost minimal for imperfect
rounding in the old version but would be more than enough for imperfect
rounding in this version (9 bits would be enough now).) I couldn't
find any significant time optimizations from optimizing the previous
steps, so I decided to optimize for accuracy instead. The second step
needed a division although a previous commit optimized it to use a
polynomial approximation for its main detail, and this division dominated
the time for the second step. Use the same Newton's method for the
second step as for the third step since this is insignificantly slower
than the division plus the polynomial (now that Newton's method only
needs 1 division), significantly more accurate, and simpler. Single
precision would be precise enough for the second step, but doesn't
have enough exponent range to handle denormals without the special
grouping of terms (as in previous versions) that requires another
division, so we use double precision for both the second and third
steps.
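
The regrouped step has essentially this shape (a sketch in the fdlibm style; the comments note the exactness properties the error bound relies on):

    /* One iteration of the regrouped Newton-like step for t ~= cbrt(x). */
    static double
    cbrt_step(double x, double t)
    {
        double r, s, w;

        s = t * t;              /* t*t is exact */
        r = x / s;              /* r ~= x/t**2 */
        w = t + t;              /* t+t is exact */
        r = (r - t) / (w + r);  /* w + r ~= 3*t */
        return (t + t * r);     /* the correction is a multiplication */
    }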
distributed non-large args, this saves about 14 of 134 cycles for
Athlon64s and about 5 of 199 cycles for AthlonXPs.
Moved the check for x == 0 inside the check for subnormals. With
gcc-3.4 on uniformly distributed non-large args, this saves another
5 cycles on Athlon64s and loses 1 cycle on AthlonXPs.
Use INSERT_WORDS() and not SET_HIGH_WORD() when converting the first
approximation from bits to double. With gcc-3.4 on uniformly distributed
non-large args, this saves another 4 cycles on both Athlon64s and
AthlonXPs.
Accessing doubles as 2 words may be an optimization on old CPUs, but on
current CPUs it tends to cause extra operations and pipeline stalls,
especially for writes, even when only 1 of the words needs to be accessed.
Removed an unused variable.
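
The difference between the two accessors, sketched (both are msun-internal macros from math_private.h; uint32_t stands in for the exact integer type used there):

    #include <stdint.h>

    #include "math_private.h"   /* INSERT_WORDS() */

    /*
     * Build the first approximation as a whole double from its high word,
     * with the low word zero.  SET_HIGH_WORD() would have to preserve the
     * existing low word -- a partial write that stalls current pipelines.
     */
    static double
    bits_to_double(uint32_t hi)
    {
        double t;

        INSERT_WORDS(t, hi, 0);
        return (t);
    }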
function approximation for the second step. The polynomial has degree
2 for cbrtf() and 4 for cbrt(). These degrees are minimal for the final
accuracy to be essentially the same as before (slightly smaller).
Adjust the rounding between steps 2 and 3 to match. Unfortunately,
for cbrt(), this breaks the claimed accuracy slightly although incorrect
rounding doesn't. Claim less accuracy since it's not worth pessimizing
the polynomial or relying on exhaustive testing to get insignificantly
more accuracy.
This saves about 30 cycles on Athlons (mainly by avoiding 2 divisions)
so it gives an overall optimization in the 10-25% range (a larger
percentage for float precision, especially in 32-bit mode, since other
overheads are more dominant for double precision, surprisingly more
in 32-bit mode).
- in preparing for the third approximation, actually make t larger in
magnitude than cbrt(x). After chopping, t must be incremented by 2
ulps to make it larger, not 1 ulp since chopping can reduce it by
almost 1 ulp and it might already be up to half a different-sized-ulp
smaller than cbrt(x). I have not found any cases where this is
essential, but the think-time error bound depends on it. The relative
smallness of the different-sized-ulp limited the bug. If there are
cases where this is essential, then the final error bound would be
5/6+epsilon instead of 4/6+epsilon ulps (still < 1).
- in preparing for the third approximation, round more carefully (but
still sloppily to avoid branches) so that the claimed error bound of
0.667 ulps is satisfied in all cases tested for cbrt() and remains
satisfied in all cases for cbrtf(). There isn't enough spare precision
for very sloppy rounding to work:
- in cbrt(), even with the inadequate increment, the actual error was
0.6685 in some cases, and correcting the increment increased this
a little. The fix uses sloppy rounding to 25 bits instead of very
sloppy rounding to 21 bits, and starts using uint64_t instead of 2
words for bit manipulation so that rounding more bits is not much
more costly.
- in cbrtf(), the 0.667 bound was already satisfied even with the
inadequate increment, but change the code to almost match cbrt()
anyway. There is not enough spare precision in the Newton
approximation to double the inadequate increment without exceeding
the 0.667 bound, and no spare precision to avoid this problem as
in cbrt(). The fix is to round using an increment of 2 smaller-ulps
before chopping so that an increment of 1 ulp is enough. In cbrt(),
we essentially do the same, but move the chop point so that the
increment of 1 is not needed.
Fixed comments to match code:
- in cbrt(), the second approximation is good to 25 bits, not quite 26 bits.
- in cbrt(), don't claim that the second approximation may be implemented
in single precision. Single precision cannot handle the full exponent
range without minor but pessimal changes to renormalize, and although
single precision is enough, 25 bit precision is now claimed and used.
Added comments about some of the magic for the error bound 4/6+epsilon.
I still don't understand why it is 4/6+ and not 6/6+ ulps.
Indent comments at the right of code more consistently.
the second step of approximating cbrt(x). It turns out to be neither
very magic nor very good. It is just the (2,2) Pade approximation
to 1/cbrt(r) at r = 1, arranged in a strange way to use fewer operations
at a cost of replacing 4 multiplications by 1 division, which is an
especially bad tradeoff on machines where some of the multiplications
can be done in parallel. A Remez rational approximation would give
at least 2 more bits of accuracy, but the (2,2) Pade approximation
already gives 6 more bits than needed. (Changed the comment which
essentially says that it gives 3 more bits.)
Lower order Pade approximations are not quite accurate enough for
double precision but are plenty for float precision. A lower order
Remez rational approximation might be enough for double precision too.
However, rational approximations inherently require an extra division,
and polynomial approximations work well for 1/cbrt(r) at r = 1, so I
plan to switch to using the latter. There are some technical
complications that tend to cost a division in another way.
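
For concreteness, the arrangement under discussion is essentially the following (a sketch of fdlibm's second step; the constants are the rationals 19/35, -864/1225, 99/70, 45/28 and 5/14, and the continued-fraction form trades multiplications for the extra division):

    static double
    cbrt_step2(double x, double t)  /* t: first approximation to cbrt(x) */
    {
        static const double
            C =  5.42857142857142815906e-01,    /*  19/35    */
            D = -7.05306122448979611050e-01,    /* -864/1225 */
            E =  1.41428571428571436819e+00,    /*  99/70    */
            F =  1.60714285714285720630e+00,    /*  45/28    */
            G =  3.57142857142857150787e-01;    /*   5/14    */
        double r, s;

        r = t * t / x;
        s = C + r * t;                  /* r*t = t**3/x ~= 1 */
        return (t * (G + F / (s + E + D / s)));
    }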
This gives an optimization of between 9 and 22% on Athlons (largest
for cbrt() on amd64 -- from 205 to 159 cycles).
We extracted the sign bit and worked with |x|, and restored the sign
bit as the last step. We avoided branches to a fault by using accesses
to FP values as bits to clear and restore the sign bit. Avoiding
branches is usually good, but the bit access macros are not so good
(especially for setting FP values), and here they always caused pipeline
stalls on Athlons. Even using branches would be faster except on args
that give perfect branch misprediction, since only mispredicted branches
cause stalls, but it is possible to avoid touching the sign bit in FP
values at all (except to preserve it in conversions from bits to FP
not related to the sign bit). Do this. The results are identical
except in 2 of the 3 unsupported rounding modes, since all the
approximations use odd rational functions so they work right on strictly
negative values, and the special case of -0 doesn't use an approximation.