From: Rainer Deyke (rainerd_at_[hidden])
Date: 2008-08-29 16:53:28
troy d. straszheim wrote:
> Rainer Deyke wrote:
>> I don't think performance should be the overriding concern, especially 
>> since byte-shuffling is very fast.  
> 
> But it isn't fast.
It is when compared to the overhead of IO (disk or socket, possibly even 
memory).
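To make the cost concrete, here is a minimal sketch of the kind of byte shuffle under discussion: reversing the bytes of a 64-bit value with a few shifts and masks (the helper name is illustrative, not any Boost API). Per element this is a handful of register operations, cheap next to disk or socket IO.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative helper: reverse the byte order of a 64-bit value.
// Three swap stages: 32-bit halves, 16-bit quarters, then adjacent bytes.
inline std::uint64_t swap_bytes64(std::uint64_t v) {
    v = (v << 32) | (v >> 32);
    v = ((v & 0x0000FFFF0000FFFFull) << 16) | ((v >> 16) & 0x0000FFFF0000FFFFull);
    v = ((v & 0x00FF00FF00FF00FFull) << 8)  | ((v >> 8)  & 0x00FF00FF00FF00FFull);
    return v;
}
```

A `double` can be swapped the same way after `memcpy`-ing it into a `std::uint64_t`.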
>  If the necessity of bitshuffling makes it impossible to
> serialize, say a vector<double> via the optimized array handling, you
> could easily be talking about a factor of 10 in speed.
I think here you are talking about the overhead of a single write 
operation versus multiple write operations on the underlying stream, 
correct?
It's true that the standard stream operations can be slow, but that is a 
separate problem from the actual byte shuffling and should be solved 
separately.  Maybe this problem could be avoided by using a 
std::vector<char> instead of a stream object for the actual 
serialization and then dumping it all at once.
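As a sketch of what I mean (names are illustrative only, not the Boost.Serialization interface): accumulate the serialized bytes in a `std::vector<char>` and hand the whole buffer to the underlying stream in one write, so the per-element stream overhead disappears regardless of any byte shuffling.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sink: buffers serialized bytes in memory, then dumps
// them to the real stream in a single write() call.
class buffer_sink {
public:
    void write(const void* data, std::size_t n) {
        const char* p = static_cast<const char*>(data);
        buf_.insert(buf_.end(), p, p + n);
    }

    // One underlying-stream operation for the whole archive.
    template <class Stream>
    void flush_to(Stream& os) { os.write(buf_.data(), buf_.size()); }

    std::size_t size() const { return buf_.size(); }

private:
    std::vector<char> buf_;
};
```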
(It is not reasonable to just dump in-memory objects to a stream in any 
portable format, binary or text.)
>> The problem with option 3 is that it introduces a potential source of 
>> bugs that only manifests when moving 
>> between platforms with different endianness.  I'd prefer option 1, 
>> precisely because it requires shuffling on the most common platforms 
>> so any bugs in the shuffling code are sure to be caught early.
> 
> Actually this is very easy to test for, even if you don't have machines 
> of the other endianness available.  (the md5sum of the generated 
> archive must match for all platforms, and these sums can be checked in 
> to svn)
I thought option 3 was to write little-endian archives on little-endian 
machines and big-endian archives on big-endian machines?  If so, the 
generated archives would /not/ be the same.  Hence the potential source 
of bugs.
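The point can be shown in a few lines (a sketch, with an illustrative helper name): the same in-memory value maps to different byte sequences depending on the host, so native-endian archives cannot have one platform-independent checksum.

```cpp
#include <cstdint>
#include <cstring>

// Illustrative probe: true on little-endian hosts, false on big-endian.
// Under option 3, the bytes written to the archive for the same value
// would differ depending on this result.
bool host_is_little_endian() {
    const std::uint16_t probe = 0x0102;
    unsigned char bytes[2];
    std::memcpy(bytes, &probe, sizeof bytes);
    return bytes[0] == 0x02;
}
```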
-- Rainer Deyke - rainerd_at_[hidden]