Subject: Re: [boost] [review] Multiprecision review (June 8th - 17th, 2012)
From: Steven Watanabe (watanabesj_at_[hidden])
Date: 2012-06-25 00:33:08
AMDG
I really haven't had time to go through this
library as thoroughly as I would have liked,
but, since it's the last day of the review:
*I vote to accept Multiprecision into Boost*
Here are my comments on the code so far.
It's not very much, since I only got
through a few files. I'll try to expand
on this as I have time, but no promises.
Revision: sandbox/big_number@79066
concepts/mp_number_architypes.hpp:
* The file name is misspelled. It should
be "archetypes."
* The #includes aren't quite right: the
file does not use trunc, but it does use
fpclassify.
* line 28: mpl::list requires #include <boost/mpl/list.hpp>
* line 79: stringstream requires #include <sstream>
depricated:
* Should be deprecated.
detail/functions/constants.hpp:
* line 12: typedef typename has_enough_bits<...> pred_type;
If you're going to go to the trouble of making
sure that you have a type with enough bits, you shouldn't
assume that unsigned has enough. IIRC, the standard
only guarantees 16 bits for unsigned. (See the sketch
after this list for one way to pick a wide-enough type.)
* line 38: if(digits < 3640)
Ugh. I'm assuming that digits means binary digits
and 3640 = floor(log_2 10^1100)
* line 51: I would appreciate a brief description
of the formula. This code isn't very readable.
* line 105: It looks like this is using the formula
e \approx \sum_{i=0}^{n}\frac{1}{i!} = (\sum_{i=0}^{n}\frac{n!}{i!})/n!
and induction using the rule:
\sum_{i=0}^{n}\frac{n!}{i!} =
(\sum_{i=0}^{n-1}\frac{(n-1)!}{i!}) * n + 1
Please explain the math or point to somewhere
that explains it. (A small illustration of this
recurrence follows this list.)
* line 147: ditto for pi
* line 251: calc_pi(result, ...digits2<...>::value + 10);
Any particular reason for +10? I'd prefer
calc_pi to do whatever adjustments are
needed to get the right number of digits.
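As an aside on the has_enough_bits comment above, here is a
minimal sketch of one way to pick an unsigned type with at
least the requested number of bits instead of assuming that
plain unsigned is wide enough. It uses std::conditional
rather than whatever MPL machinery the library prefers, so
it is only meant to show the idea:

    #include <limits>
    #include <type_traits>

    // Hypothetical helper: select the first standard unsigned
    // type with at least Bits value bits, falling back to
    // unsigned long long.
    template <int Bits>
    struct unsigned_with_bits
    {
        typedef typename std::conditional<
            Bits <= std::numeric_limits<unsigned>::digits,
            unsigned,
            typename std::conditional<
                Bits <= std::numeric_limits<unsigned long>::digits,
                unsigned long,
                unsigned long long
            >::type
        >::type type;
    };

    // unsigned_with_bits<32>::type can hold 32-bit values even
    // on a platform where unsigned itself is only 16 bits wide.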
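And for the line 105 comment: if my guess at the formula is
right, the recurrence is easy to check with built-in doubles.
This is only an illustration of the math, not the library's
generic code, which of course works on the backend type T:

    #include <cstdio>

    int main()
    {
        // s_n = \sum_{i=0}^{n} n!/i! satisfies s_n = s_{n-1} * n + 1,
        // and e \approx s_n / n!.
        double s = 1.0;     // s_0 = 0!/0! = 1
        double fact = 1.0;  // n!
        for (int n = 1; n <= 20; ++n)
        {
            s = s * n + 1.0;
            fact *= n;
        }
        std::printf("%.15f\n", s / fact);  // prints 2.718281828459045
        return 0;
    }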
detail/functions/pow.hpp:
* line 18 ff: pow_imp
I don't like the fact that this implementation
creates log2(p) temporaries on the
stack and multiplies them all together
on the way out of the recursion. I'd
prefer to only use a constant number
of temporaries.
* line 31:
I don't see how the cases > 1 buy you anything.
They don't change the number of calls to eval_multiply.
(At least, they wouldn't if the algorithm were
implemented correctly.)
* line 47:
The work of this loop is repeated at every
level of recursion. Please fix this function.
It's a well-known algorithm (exponentiation by
squaring); it shouldn't be hard to get it right.
(See the sketch after this list.)
* line 76: pow_imp(denom, t, -p, mpl::false_());
This is broken for p == INT_MIN with two's-complement
arithmetic.
* line 185: "The ldexp function..."
I think you mean the "exp" function.
* line 273: if(x.compare(ll) == 0)
This is only legal if long long is in T::int_types.
Better to use intmax_t, which is guaranteed to be in
T::int_types.
* line 287: const bool b_scale = (xx.compare(float_type(1e-4)) > 0);
Unless I'm missing something, this will always be
true because xx > 1 > 1e-4 as a result of the
test on line 240.
* line 332: if(&arg == &result)
This is totally unnecessary, since you already copy
arg into a temporary using frexp on line 348.
* line 351: if(t.compare(fp_type(2) / fp_type(3)) <= 0)
Is the rounding error in evaluating 2/3 important?
I've convinced myself that it works, but it
took a little thought. Maybe add a comment to
the effect that the exact value of the boundary
doesn't matter as long as it's about 2/3?
(A sketch of the argument follows this list.)
* line 413: "The fabs function "
s/fabs/log10/.
* line 464: eval_convert_to(&an, a);
This makes the assumption that conversion
to long long is always okay even if the
value is out of the range of long long. This
assumption makes me nervous.
* line 465: a.compare(an)
This calls compare(long long).
* line 474:
This calls eval_subtract(T, long long).
Also, this value of da is never used. It
gets overwritten on line 480 or 485.
* line 493: (eval_get_sign(x) > 0)
This expression should always be true at this point.
* line 614: *p_cosh = x;
cosh is an even function, so this is wrong when
x = -\infty
* line 625: T e_px, e_mx;
These are only used in the generic implementation.
It's better to restrict the scope of variables
to the region that they're actually used in.
* line 643: eval_divide(*p_sinh, ui_type(2));
I notice that in other places you use ldexp for
division by powers of 2. Which is better?
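To make the pow_imp comments above more concrete, here is a
sketch of the usual iterative exponentiation-by-squaring
loop. It is written against plain double rather than the
backend's eval_* interface, so it only shows the shape of
the algorithm: two temporaries, no repeated work, and the
exponent negated through an unsigned type so that INT_MIN
isn't a problem.

    // Sketch only: double stands in for the backend type T.
    double pow_int(double x, long p)
    {
        // Negate via unsigned long long so that p == LONG_MIN
        // (or INT_MIN) does not overflow.
        unsigned long long n = (p < 0)
            ? 0ULL - static_cast<unsigned long long>(p)
            : static_cast<unsigned long long>(p);

        double result = 1.0;
        double base = x;        // the only temporaries: result and base
        while (n != 0)
        {
            if (n & 1u)
                result *= base; // fold in the current exponent bit
            n >>= 1;
            if (n != 0)
                base *= base;   // square for the next bit
        }
        return (p < 0) ? 1.0 / result : result;
    }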
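And for the record, the argument I had in mind for the 2/3
boundary at line 351, assuming the reduction does what I
think it does (frexp leaves t \in [1/2, 1), and t is doubled,
with the exponent adjusted, whenever t is at or below the
boundary c):

    t \le c  =>  2t \in [1, 2c]        t > c  =>  t \in (c, 1)

so the reduced argument u = t - 1 satisfies
|u| \le \max(1 - c, 2c - 1). That bound is minimized at
c = 2/3, where it equals 1/3, and it only moves by a
rounding-error-sized amount if c is a rounded approximation
of 2/3, so the exact value of the boundary doesn't matter.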
detail/mp_number_base.hpp:
* line 423: (std::numeric_limits<T>::digits + 1) * 1000L
This limits the number of bits to LONG_MAX/1000.
I'd like there to be some way to know that as
long as I use fewer than k bits, the library
won't have any integer overflow problems. k
can depend on INT_MAX, LONG_MAX, etc., but the
implementation limits need to be clearly documented
somewhere. I really hate the usual ad hoc
situation where you just ignore the problem and
hope that all the values are small enough to avoid
overflow.
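To make that concrete, here is a hypothetical sketch (not
the library's code) of a binary-to-decimal digit-count
conversion, using 301/1000 as the usual approximation of
log10(2), arranged so that the only limit is that the bit
count itself fits in the integer type rather than the bit
count times 1000:

    // floor(d * 301 / 1000) == 301 * (d / 1000) + ((d % 1000) * 301) / 1000,
    // and the intermediates never exceed about 0.302 * d or 999 * 301,
    // so the only requirement is that d itself fits in unsigned long.
    inline unsigned long digits10_from_digits2(unsigned long d)
    {
        return 301UL * (d / 1000UL) + ((d % 1000UL) * 301UL) / 1000UL;
    }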
Other notes:
Backend Requirements:
* Do we always require that all number types
be assignable from intmax_t, uintmax_t,
and long double? Why must an integer
be assignable from long double?
* The code seems to assume that it's okay to
pass int as the exp parameter of ldexp.
The stated requirements don't guarantee that
this will work.
In Christ,
Steven Watanabe