Subject: Re: [boost] C++ announcements coming tomorrow
From: Paul Mensonides (pmenso57_at_[hidden])
Date: 2012-11-05 09:46:36
On 11/5/2012 4:16 AM, Olaf van der Spek wrote:
> On Sun, Nov 4, 2012 at 9:21 PM, Paul Mensonides <pmenso57_at_[hidden]> wrote:
>> Yes, it does.  Taking aim at GCC instead, a huge amount of GNU code will not
>> compile without --std=gnu++11 instead of --std=c++11.  Even more of it won't
>
> If gnu++11 is used, the goal of the authors isn't portable code, is it?
> Basically you'd like to 'force' them to use c++11 by taking away the extensions?
For any extension that is just syntactic sugar or not desperately 
required (as hardware vectorization may be), yes.  Their existence is 
damaging in the long term.  Architecture-specific code is the minority.
Besides those comparatively rare cases, platform-specific code (and by 
"platform" I don't mean "compiler") should be the only C++ code that is 
non-portable.  Everything else should be portable almost by accident.  A 
particular author might be shortsighted and not care about portability, 
but portability in the general case is what has to happen for computer 
science (not just C++) to really move forward.  Platforms, compilers, 
and, in most cases, architectures need to be drop-in, interchangeable 
components, not foundations.
Even with hardware vectorization, I'm not sure how much it should be 
used at present other than in critical places.  The reason I say this is 
that I don't think we as academia/industry really know how to do 
multiprocessing (including vectorization) yet, and I suspect that it 
will end up being such that the overall way-of-coding, structuring data, 
etc. is significantly different--if not radically different.  As an 
aside, I actually think this one issue might be what kills off all 
current major (which implies imperative) languages--including C++.
> For an app it's easy to depend on another lib. But for a lib,
> depending on another lib that might not be easily available /
> installable can be problematic.
In some ways, Windows deployment is easier because you can distribute 
in-directory DLLs for many libraries that don't require their own 
installation programs and largely avoid DLL hell.  In many ways, the 
Linux model is better because it has better facilities for reuse, but 
dealing with C++ ABI issues and version availability issues can be a 
nightmare also.  Granted, you can get the same effect as Windows by 
using rpath if you really want to, but then you throw away memory 
reuse and, usually less importantly, disk reuse (just as you get with 
Windows with in-directory DLLs).
>> OTOH, Linux has its issues also such as its massive assumption of a
>> system-wide particular version of GCC with no C++ ABI encoding in the shared
>> object versioning.  The Linux package management models (i.e. apt-get, rpm,
>> emerge) are also fundamentally broken because they don't scale
>> (many-to-one-to-many vs many-to-many).
>
> They could be better but I don't think calling them fundamentally
> broken is fair.
Sorry, I didn't mean the tools themselves.  I'm referring to the single 
points of update and/or vetting of the content that those tools work 
with (at least, via official repositories).  They are fundamentally 
broken because all updates are essentially serialized through a single 
point.  That just doesn't scale despite herculean effort, and most Linux 
distros are way behind the most current releases of most software 
because of that.  Pressure for throughput at that point far outweighs 
the available throughput--the outcome is inevitable.  Currently, 
deploying on Linux via any of the package management systems is a 
nightmare unless you only need old compilers and only rely on old 
versions of other libraries.  Besides the boilerplate distro differences 
in how one specifies a package, you run smack into version availability 
issues (related to which versions have so far gone through the single 
point) and ABI issues.
For the ABI-related stuff, the .so versioning model could include an ABI 
stamp of some kind.  So, for example, I could build and install Boost 
1.52 with both --std=c++98 and --std=c++11 and have them coexist.  For 
handling the creation and processing of the (e.g.) dependency graph 
without single-sourcing it, I don't have any particularly great ideas.
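A hypothetical sketch of the stamping idea (the -cxx98/-cxx11 suffixes 
are invented names here, not an existing convention): with a dialect 
stamp in the file name, both builds of the same version coexist in one 
directory and a client links against the one matching its own ABI:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
# Two hypothetical builds of the same library version, one per dialect.
touch libboost_system.so.1.52.0-cxx98
touch libboost_system.so.1.52.0-cxx11
# Dev symlinks select the ABI at link time; the stamped names keep the
# two runtime objects from colliding.
ln -s libboost_system.so.1.52.0-cxx98 libboost_system-cxx98.so
ln -s libboost_system.so.1.52.0-cxx11 libboost_system-cxx11.so
ls -1
```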
Regards,
Paul Mensonides