From: Esteve Fernandez (esteve_at_[hidden])
Date: 2008-06-09 15:30:38
On Monday 09 June 2008 19:32:20, Robert Ramey wrote:
> My question has been and still remains:
>
> Suppose that one has serialization specified for all his classes. He can
> now serialize to any archive class. Now he says - oh, someone wants
> YAML - use that - damn. Using this one archive requires that I go back
> and change the serialization of all my classes to support this one
> additional archive. This imposes special requirements on serializable
> types for a particular archive class, thus coupling two concepts where
> great effort has been expended to keep them orthogonal.
But doesn't the XML archive force you to change your serialization as well? I
mean, you need to serialize every member with BOOST_SERIALIZATION_NVP. If I
had serialized my objects with the text or binary archives, I would have to
change their serialization if I wanted to use the XML archive.
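For example (a minimal sketch; the class and member names here are made up
for illustration), a serialize() function written with the XML archive in
mind wraps every member in an NVP, which the text and binary archives simply
ignore:

    #include <string>
    #include <boost/serialization/access.hpp>
    #include <boost/serialization/nvp.hpp>

    class A {
        friend class boost::serialization::access;
        std::string a_string;
        int an_int;

        template<class Archive>
        void serialize(Archive & ar, const unsigned int /* version */)
        {
            // The XML archives need an element name for each member,
            // hence the NVP wrappers; text and binary archives ignore
            // the names and just stream the values.
            ar & BOOST_SERIALIZATION_NVP(a_string);
            ar & BOOST_SERIALIZATION_NVP(an_int);
        }
    };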
> I would guess that if YAML needs anything, it would be some
> unique class identifier. perhaps you might consider the class
> ID - maybe enclosed in quotes.
Sorry if I didn't explain myself properly. Maybe it's a bit easier with an
example; here's an object serialized to YAML with PyYAML:
!!python/object:namespace.A
a_string: some string
an_int: 0
an_object: !!python/object:namespace.B
a_float: 3.14
I know that Boost.Serialization doesn't try to deserialize just any kind of
document, even if it's well-formed. "Boost.Serialization only (de)serializes
Boost.Serialization documents" has become my mantra :-)
In the worst case I can replace those !!python/object:namespace.{A,B} tags
with !!boost/object:class_id_{0,1}, but I would prefer to expose class names
if I can.
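For what it's worth, Boost.Serialization can already attach an external string
identifier to a class with BOOST_CLASS_EXPORT_GUID. That macro is normally
only meaningful for polymorphic classes serialized through base-class
pointers, so treat this purely as a sketch (with made-up names mirroring the
example above) of where a name string could come from, not as something the
library promises a new archive can reuse:

    #include <boost/archive/xml_oarchive.hpp>   // an archive header must
                                                 // precede export.hpp
    #include <boost/serialization/export.hpp>
    #include <boost/serialization/nvp.hpp>

    namespace ns {
        struct B {
            float a_float;
            virtual ~B() {}   // export is aimed at polymorphic types

            template<class Archive>
            void serialize(Archive & ar, const unsigned int /* version */)
            {
                ar & BOOST_SERIALIZATION_NVP(a_float);
            }
        };
    }

    // Registers "ns.B" as the exported name of ns::B; a YAML archive
    // could, hypothetically, reuse it as the tag "!!boost/object:ns.B".
    BOOST_CLASS_EXPORT_GUID(ns::B, "ns.B")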
> Natually, discussion of all the above issues has to be
> part of the documentation.
>
> Good Luck.
Thanks for taking the time to explain all this. It was very constructive,
really.
Cheers.