path: root/lib/msun/src/s_rintl.c
* Optimize the conversion to bits a little (by about 11 cycles or 16% on i386
  (A64), 5 cycles on amd64 (A64), and 3 cycles on ia64).  gcc tends to generate
  very bad code for accessing floating point values as bits except when the
  integer accesses have the same width as the floating point values, and direct
  accesses to bit-fields (as is common only for long double precision) always
  give such accesses.  Use the expsign access method, which is good for 80-bit
  long doubles and hopefully no worse for 128-bit long doubles.  Now the
  generated code is less bad.  There is still unnecessary copying of the arg on
  amd64 and i386 and mysterious extra slowness on amd64.
  [bde, 2008-02-22, 1 file changed, -5/+13]
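A minimal sketch of the expsign access method this commit describes, assuming
the little-endian x86 80-bit long double layout; the union and names below are
illustrative, not FreeBSD's actual machine-dependent definitions:

    #include <stdint.h>

    /*
     * Read the sign bit and 15-bit biased exponent of a long double through
     * one 16-bit integer ("expsign") instead of separate 1-bit and 15-bit
     * bit-fields, so the compiler can emit a single same-width integer load.
     */
    union ld_bits {
            long double     val;
            struct {
                    uint64_t manh;          /* 64-bit explicit mantissa */
                    uint16_t expsign;       /* sign bit + biased exponent */
            } bits;
    };

    static inline uint16_t
    get_expsign(long double x)
    {
            union ld_bits u;

            u.val = x;
            return (u.bits.expsign);
    }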
* Optimize the fixup for +-0 by using better classification for this case and
  by using a table lookup to avoid a branch when this case occurs.  On i386,
  this saves 1-4 cycles out of about 64 for non-large args.
  [bde, 2008-02-22, 1 file changed, -2/+4]
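A sketch of the kind of branch-free +-0 fixup the commit describes, assuming a
two-entry table of signed zeroes indexed by the argument's sign bit; the names
and types are illustrative, not copied from s_rintl.c:

    /* Index 0 -> +0.0, index 1 -> -0.0. */
    static const long double zeroes[] = { 0.0L, -0.0L };

    /*
     * Return a zero carrying the sign of the original argument without
     * branching on that sign; 'sign' is the argument's sign bit (0 or 1),
     * e.g. the top bit of the expsign halfword from the sketch above.
     */
    static inline long double
    signed_zero(int sign)
    {
            return (zeroes[sign]);
    }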
* Fix rintl() on signaling NaNs and unsupported formats.
  [bde, 2008-02-22, 1 file changed, -5/+3]
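One plausible shape of such a fix, shown as a hedged sketch rather than the
actual s_rintl.c code: when the biased exponent field is at its maximum, the
argument is an infinity, a NaN, or an x87 "unsupported" encoding with that
exponent, and returning x + x quiets a signaling NaN (raising the invalid
exception) while passing infinities and quiet NaNs through unchanged.

    #include <float.h>

    #define BIAS    (LDBL_MAX_EXP - 1)      /* exponent bias, 16383 for ld80 */

    /* 'ex' is the biased exponent already extracted from x. */
    static inline long double
    fixup_special(long double x, int ex)
    {
            if (ex == BIAS + LDBL_MAX_EXP)  /* all exponent bits set */
                    return (x + x);         /* Inf, NaN, unsupported format */
            return (x);
    }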
* Optimize this a bit better.
  Submitted by: bde (although these aren't all of his changes)
  [das, 2008-01-15, 1 file changed, -13/+18]
* Implement rintl(), nearbyintl(), lrintl(), and llrintl().
  Thanks to bde@ for feedback and testing of rintl().
  [das, 2008-01-14, 1 file changed, -0/+77]
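A simplified sketch of the classic rint() technique such an implementation is
built on, assuming the 64-bit-mantissa (ld80) format; it omits the +-0,
exception, and bit-classification details handled by the real code and the
later commits above.  Adding and then subtracting 2^(LDBL_MANT_DIG-1) forces
the fraction bits to be rounded away in the current rounding mode, leaving the
nearest integer; lrintl()/llrintl() can then just convert the result.

    #include <math.h>

    /* 2^(LDBL_MANT_DIG-1) = 2^63 for the x86 80-bit format. */
    static const long double shift = 0x1p63L;

    static long double
    rintl_sketch(long double x)
    {
            if (fabsl(x) >= shift)  /* every finite |x| >= 2^63 is an integer */
                    return (x);
            if (x >= 0.0L)
                    return ((x + shift) - shift);   /* note: -0 becomes +0 here */
            return ((x - shift) + shift);
    }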