From: Glen Fernandes (glen.fernandes_at_[hidden])
Date: 2019-09-23 11:05:36


On Mon, Sep 23, 2019 at 6:44 AM Glen Fernandes wrote:
> On Mon, Sep 23, 2019 at 5:19 AM Dominique Devienne via Boost

> > Also, in client/server communications, it's less often a few huge JSON
> > documents, but rather lots of small documents, so the constant "startup"
> > time of the parser matters too. In that same vein, a pull parser that
> > allows building the native data structures directly, rather than the
> > DOM-like approach of fully converting the document to a built-in JSON
> > object and then converting that to the native structure, avoids the
> > temporary document, which is especially useful for large documents.
> >
>
> It looks like nlohmann/json now has this kind of API too:
> 1. https://tinyurl.com/nl-json-parse
> 2. https://tinyurl.com/nl-json-parse-callback
>

That is the use case that I find more interesting: decoupling the data
representation in your program from the serialization format.

e.g. If in memory your data structure is something like:
    vector<map<string, pair<int, double> > >

This could be expressed in JSON by something like:
    [ { "key": [1, 0.01], "def": [9, 0.37], ... },
      { "xyz": [5, 1.25], "abc": [2, 4.68], ... },
      ... ]

You load it from some JSON file. You don't want to store/use it in
your program as a SomeLibrary::JsonArray. Past the serialization
boundary it should be your own data structures. Of course you could
always convert it from SomeLibrary::JsonArray to your own structures,
but that's overhead you don't need if there are hundreds of megabytes
worth of content in that data.
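To make the idea concrete, here is a minimal hand-rolled sketch (MiniReader
is a hypothetical name, not from nlohmann/json or any other library) of a
recursive-descent reader specialized for exactly that one shape. It fills
the native vector<map<string, pair<int, double>>> directly as it scans the
text, with no intermediate DOM. It is deliberately not a general JSON
parser: no escape sequences, no other value types.

```cpp
// Sketch only: reads [ { "key": [int, double], ... }, ... ] straight
// into the native structure, skipping the temporary DOM document.
#include <cctype>
#include <cstddef>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

using Native = std::vector<std::map<std::string, std::pair<int, double>>>;

class MiniReader {
public:
    explicit MiniReader(const std::string& text) : s_(text) {}

    Native parse() {
        Native out;
        expect('[');
        if (peek() != ']') {
            do { out.push_back(parse_object()); } while (consume_if(','));
        }
        expect(']');
        return out;
    }

private:
    const std::string& s_;
    std::size_t i_ = 0;

    void skip_ws() {
        while (i_ < s_.size() &&
               std::isspace(static_cast<unsigned char>(s_[i_]))) ++i_;
    }
    // Look at the next non-whitespace character without consuming it.
    char peek() { skip_ws(); return i_ < s_.size() ? s_[i_] : '\0'; }
    bool consume_if(char c) { if (peek() == c) { ++i_; return true; } return false; }
    void expect(char c) {
        if (!consume_if(c)) throw std::runtime_error(std::string("expected ") + c);
    }

    std::map<std::string, std::pair<int, double>> parse_object() {
        std::map<std::string, std::pair<int, double>> obj;
        expect('{');
        if (peek() != '}') {
            do {
                std::string key = parse_string();
                expect(':');
                expect('[');  // the [int, double] pair is a two-element array
                int first = static_cast<int>(parse_number());
                expect(',');
                double second = parse_number();
                expect(']');
                obj.emplace(std::move(key), std::make_pair(first, second));
            } while (consume_if(','));
        }
        expect('}');
        return obj;
    }

    std::string parse_string() {
        expect('"');
        std::string out;
        while (i_ < s_.size() && s_[i_] != '"') out += s_[i_++];  // no escapes
        expect('"');
        return out;
    }

    double parse_number() {
        skip_ws();
        std::size_t end = 0;
        double v = std::stod(s_.substr(i_), &end);  // substr copy: fine for a sketch
        i_ += end;
        return v;
    }
};
```

A real pull/SAX parser (like the callback interfaces linked above) gives you
the same property through events instead of a specialized grammar, but the
payoff is the same: the native structure is the only one ever allocated.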

Glen