From: Ed Brey (edbrey_at_[hidden])
Date: 2001-11-07 09:27:48
From: "Kevin Lynch" <krlynch_at_[hidden]>
> 
> > 1. Many constants are the return values of functions taking a certain parameter.  In a 
> > perfect world, these wouldn't be constants at all, but would simply be written sqrt(2) or
> > gamma(1/3.), or the like.  
> 
> > 2. To keep at least some uniformity, for each constant that is a function return value, 
> > the actual function should be available.  Sqrt and cos are done for us.  But gamma 
> > constants should not be added without a gamma function.
> 
> > 3. Given the above observations, only group 1 (as I've suggested) constants should be
> > considered for standardization.  The trend should be to move the language away from 
> > performance hints to compilers (e.g. inline), rather than toward it.  Intrinsically
> > performing high-quality compile-time square root should become a QoI issue for compilers
> > that users can count on, just like what is possible today for operator*(double,double).  
> 
> I read your post with great interest, and agreed in principle with much
> of it... however  (going out of order....  :-)
> 
> There are a number of issues here that are thorny and subtle, I think,
> that need to be considered.  While I agree with you that in a perfect
> world the math functions would provide some sort of guarantees on their
> precision, such guarantees are difficult in general to obtain, since
> different algorithms generate different precision/run time tradeoffs,
> perhaps even for different ranges of the arguments (e.g. algorithm A may
> be much faster near 1 than near 2 for equally good precision, while B
> may provide equal speed everywhere but be much more accurate near 1 than
> near 2).  It is often the case that you trade off substantial run time
> for moderate precision gains and I don't think that the standard should
> be mandating precision or run time guarantees in that light.  It would
> be much better, in my opinion, if the standard required that
> implementations document the precision/run time parameters of their math
> functions in some detail; many standard library implementations are hard
> to use in high performance computing because just such information is
> missing in the library documentation  (if you don't understand the
> library, and a 5% performance penalty is going to cost you two weeks of
> runtime, you're going to end up paying big bucks for specialized
> libraries that give you the information you need, and many of us can't
> afford that sort of expense).
These are good observations.  The details of how much precision to expect and/or require from a compiler go beyond what I know well enough to wade into.  It would be great to see compilers ship with arbitrary-precision engines for their compile-time calculations, but I don't know whether that is practical.
What is a constant versus what is computed at run time is easier to specify, however.  See below.
 
> However, if you are only talking about compile time translation of
> functions taking literal arguments, that may be a different story; it
> might not be unreasonable to expect compilers (some day) to generate
> precise values for expressions that are compile time calculable;
> sqrt(2.) is an excellent example, but what about sqrt(5./13.)?  What
> about sqrt(3./2.)?
> Since sqrt(3./2.) = sqrt(3.)/sqrt(2.), should we expect such
> simplifications?  How far should the standard go?   I like the idea in
> concept, but I don't know how far we can expect compiler writers to go
> in this direction (Even though I'd like to see them go as far as
> possible!  but then again, I'm no compiler writer  :-)  ; after all,
> providing much of this optimization would require them to implement
> computer algebra systems in their optimizers, and I don't know if
> they'll want to be doing that...
Currently C++ requires compilers to perform compile-time evaluation of certain operations.  For example, the following are legal C++ statements:
    int a[3 / 2];
    int a[int(3./2.)];
Given that a compiler must support such evaluation at compile time anyway, it is reasonable to expect it to fold such expressions wherever they appear in a calculation.  To get the same expectation of intrinsic support for sqrt, all that needs to be done is to classify its result as a constant whenever its argument is a constant.  Thus, the following would also become legal:
    int a[int(std::sqrt(2.))];
Once that is required, it will be reasonable to assume compile-time computation of sqrt for any constant argument.  So tricks like knowing that sqrt(3./2.) = sqrt(3.)/sqrt(2.) will not be needed: the compiler will evaluate 3./2. and then apply sqrt to the result, just as it evaluates 1. / (3./2.) today.
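To make the idea concrete, here is a sketch of what such support could look like.  None of this is standard: "constexpr" below stands for a hypothetical "evaluable at compile time" marker, and const_sqrt and sqrt_step are made-up helpers, not std::sqrt.  The point is only that the computation itself is simple enough to perform at translation time:

    // Hypothetical compile-time-evaluable functions (sketch only).
    constexpr double sqrt_step(double x, double guess, int n)
    {
        // A fixed number of Newton-Raphson steps; plenty for arguments of
        // modest size such as 2. or 3./2. (a real implementation would
        // scale the argument first).
        return n == 0 ? guess : sqrt_step(x, 0.5 * (guess + x / guess), n - 1);
    }

    constexpr double const_sqrt(double x)   // assumes x > 0
    {
        return sqrt_step(x, x, 40);
    }

    // The array-bound example then works, and sqrt(3./2.) needs no
    // algebraic rewriting: the argument is folded first, then the function.
    int b[int(const_sqrt(2.0)) + 1];               // bound is 2
    const double root_3_2 = const_sqrt(3.0 / 2.0);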
> The other issue I thought of relates to your classification of
> constants:  pi versus sqrt(2.), for example.  You stated that you'd like
> to see the library, if possible, maintain a "function call interface" to
> those things that look like function calls, and provide only those
> "constants" that are most "natural as constants".  The problem I see is
> that this isn't as clear a distinction as you might like:  pi = acos(-1),
> e = exp(1), i = sqrt(-1), (ok, pi may not pass your "naturalness"
> criterion, but exp(1) certainly should) etc.  The problem only gets
> worse as we consider more constants, and I'm not sure if there might be
> a better definition of the separation.  I agree, however, that there is
> in principle little "need" to standardize those constants that are a
> simple sequence of +-/* away from other constants  (if pi/2 loses
> substantial precision, then your platform is really broken, and this
> library isn't going to help you anyway).
There definitely is room for perspective here.  I don't have a strong enough mathematical background to know if there is a "true" perspective.  Personally, I see pi as fundamental and acos(-1) == pi as a coincidence, and e as fundamental and exp(1) as a shortcut for pow(e,1).  However, I can't provide any evidence as to why this perspective is better than another one; it's just based on what I've been exposed to going through school.
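As a side note on the quoted point about pi/2: deriving it costs nothing at all, because dividing a binary floating-point number by two only adjusts the exponent.  A quick sketch (the names are made up for the example, not taken from any proposed library):

    const double pi      = 3.14159265358979323846;  // more digits than a double can hold
    const double half_pi = pi / 2.0;                 // exact: no precision lost at all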
> Finally, I disagree that only those constants which are function return
> values for which the actual function is available should be considered
> (e.g. sqrt(2) would be in, but not gamma(1./3.))  Consider that many of
> the functions that are "missing" will likely be in the TR, since it will
> likely suck in the C99 library (cbrt, tgamma, erf, etc. will all be
> coming in....).
Please help me understand what's going on in C99 here.  Are you saying that it will be providing _functions_, like cbrt(double), erf(double), etc.?  If so, then this is great: these functions will be available to C++ for generic use.  And if a need is shown, some precomputed results can be made available too (e.g. cbrt_2).  What I don't want to see is the inclusion of cbrt_2 without a cbrt function.  And of course, I'd prefer even more not to have cbrt_2 at all, but only if its exclusion is practical.
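If those functions do arrive, the preference described above looks something like this (a sketch only: it assumes the C99 functions end up reachable from <cmath> or an equivalent header, and cbrt_2 is just the hypothetical constant name from the paragraph above):

    #include <cmath>   // assuming cbrt, erf, tgamma, etc. are pulled in from C99

    // Generic use: call the function with whatever argument is at hand.
    double edge_from_volume(double volume)
    {
        return std::cbrt(volume);
    }

    // A named constant, if one is ever shown to be needed, is then just the
    // function's return value at a fixed argument, not a free-standing magic number.
    const double cbrt_2 = std::cbrt(2.0);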