Subject: Re: [Boost-users] boost:interprocess sizeof(list<Type>::Node)
From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2009-03-06 11:25:46
waterbed wrote:
> Hello,
> 
> My problem is really basic:
> I have N objects of type Type to store in a boost::interprocess::list<Type>.
> The question is easy: how many bytes should I provide to the
> managed_mapped_file so that everything fits correctly?
> 
> 
> So, I supposed I had to store (let's assume bi = boost::interprocess):
>   + The list allocator : sizeof( bi::allocator<Type,
> bi::managed_shared_memory::segment_manager> )
>   + The struct of the list itself : sizeof( bi::list<Type,
> bi::allocator<Type, ...> > )
>   + The N nodes of the list containing each one of my objects : sizeof(
> bi::list<Type, ...>::Node)
> 
> First issue: bi::list<Type, ...>::Node is private. Fine, I made it
> public.
> Second issue: sizeof(bi::list<Type, ...>::Node) was not enough; I
> couldn't actually add N objects, only (N - x) of them.
> Third issue: so I tried to determine the value of x, but x is not
> constant; it changes according to N...
> 
> Thus I think I made a mistake, but where? Or I forgot something, but what?
When you allocate memory, the allocator reserves more space than the size 
of the node itself:
-> Alignment: memory allocators hand out raw memory, so they usually 
return data aligned to a value that allows any data type to be 
constructed (usually 8 bytes).
-> Metadata: memory allocators need some metadata to be able to 
deallocate an allocated block and merge free blocks. This is usually 4-8 
bytes per allocation (an example of metadata: the size of the memory 
block in bytes).
-> Minimum allocation size: the allocator usually can't create free 
blocks smaller than a certain size, because free blocks are stored in a 
data structure of their own. Depending on the data structure used to 
store free nodes, this can be 8-16 bytes. Interprocess stores free nodes 
by size in red-black trees (12 bytes).
-> Fragmentation: depending on the allocation pattern, you might have 
free memory holes that are too small to hold a node.
And don't forget that managed_shared_memory also needs some metadata to 
build its name-value indexes.
When allocating contiguous nodes (imagine you allocate the whole list 
with no other allocations between node insertions), the default 
interprocess memory allocator should have an overhead of 4 bytes per 
allocation on 32-bit systems and an alignment of 8 bytes (on Windows; on 
Linux-gcc the maximum alignment seems to be 4 bytes). The same as usual 
allocators like dlmalloc. I think it's pretty efficient ;-) Of course, 
you should add space for the name-object map and the other goodies 
provided by managed segments.
That might produce a waste of 8 bytes per node. If you want to reduce 
the waste, use a pool allocator, but take into account that pool 
allocators allocate arrays of nodes and then return pointers into them, 
so anticipating how much memory the pool will need (and when it will 
allocate the next array) is even harder.
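To see why pool sizing is harder, here is a minimal sketch of the step-wise growth; the chunk size of 64 nodes is an illustrative assumption, not the pool allocator's real parameter:

```cpp
#include <cstddef>

// A pool grabs whole arrays ("chunks") of nodes at once, so memory
// consumption grows in steps rather than per node: inserting one more
// element can suddenly cost a full chunk.
std::size_t pool_memory_used(std::size_t nodes_allocated,
                             std::size_t node_size,
                             std::size_t nodes_per_chunk = 64)
{
    // Number of chunks needed, rounded up.
    std::size_t chunks =
        (nodes_allocated + nodes_per_chunk - 1) / nodes_per_chunk;
    return chunks * nodes_per_chunk * node_size;
}
```

With 24-byte nodes, nodes 1 through 64 all fit in one 1536-byte chunk, but node 65 doubles the footprint.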
Try adding the size of two or three pointers per node and I think that 
should be pretty accurate.
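That rule of thumb can be sketched as a sizing helper. The three-pointer overhead and the fixed margin for the segment's own bookkeeping are assumptions to be tuned, not exact figures:

```cpp
#include <cstddef>

// Rough segment-size estimate: per stored object, budget the payload
// plus roughly three pointers of list/allocator overhead, then add a
// fixed margin for the managed segment's own metadata (name-object
// index, headers). Both constants are hedged guesses.
std::size_t estimate_segment_size(std::size_t n_objects,
                                  std::size_t object_size)
{
    const std::size_t per_node_overhead = 3 * sizeof(void*);
    const std::size_t segment_overhead  = 4096;  // index, headers, slack
    return n_objects * (object_size + per_node_overhead) + segment_overhead;
}
```

Pass the result as the size argument when creating the managed_mapped_file, and round up generously if insertions are interleaved with other allocations.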
Best,
Ion