From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2008-04-12 16:48:01
Zeljko Vrba wrote:
> On Fri, Apr 11, 2008 at 10:10:28PM +0200, Ion Gaztañaga wrote:
>> it's possible without the collaboration and processes. It's in the to-do
>> list, but I don't see how I could implement it.
>>
> POSIX shared memory segments can be grown: enlarge the underlying file with
> ftruncate(), then mmap() the new chunk with MAP_FIXED flag at the end of
> existing mapping; this may naturally fail, in which case the application is
> out of luck.  (It may help if the initial mapping's starting address is
> chosen smartly in a platform-specific way.)
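For reference, the growth step you describe might look roughly like 
this (a sketch only: grow_mapping() and its parameters are made up, 
and since MAP_FIXED on many systems silently replaces whatever is 
already mapped at the target address, a real implementation would 
first have to verify that the range is free, e.g. by probing with a 
non-MAP_FIXED mmap()):

  #include <sys/mman.h>
  #include <unistd.h>

  /* Grow the shm object behind fd from old_size to new_size and map
   * the new tail contiguously after the existing mapping at base. */
  int grow_mapping(int fd, void *base, size_t old_size, size_t new_size)
  {
      /* 1) enlarge the underlying shared memory object */
      if (ftruncate(fd, (off_t)new_size) == -1)
          return -1;

      /* 2) map the new chunk right at the end of the existing mapping */
      void *want = (char *)base + old_size;
      void *got  = mmap(want, new_size - old_size,
                        PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_FIXED, fd, (off_t)old_size);
      return got == want ? 0 : -1;
  }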
> 
> As for growing it atomically, two things are important to note:
> 
>   1) there's always the first process that wants to enlarge it 
>   2) the other processes first must get some knowledge about segment
>      growth -- this does not happen magically, but is somehow transmitted
>      by the process that has grown the segment
Ok. One process takes the lock, wants to insert new elements, so it 
grows the segment and links more elements into a shared memory list. 
Then it unlocks the mutex. The newly allocated elements can be linked 
at the front of the list. Other processes then lock the mutex, 
traverse the list, and crash, because the new nodes lie outside their 
mappings. How can we stop all the processes and notify them that the 
mapping should be grown? I can guess that we could catch SIGSEGV, 
check in some shared segment information whether new mappings have 
been added, map those regions in the faulting process, and retry the 
access. But this is easier said than done.
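Just to make the idea concrete, a rough sketch could be something like 
the following (all the names here are invented, and mmap() is not 
async-signal-safe, so this only illustrates the idea, it is not a 
correct implementation):

  #define _POSIX_C_SOURCE 200809L
  #include <signal.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Hypothetical per-process view of the segment. */
  static void  *seg_base;        /* agreed base address                 */
  static size_t seg_mapped;      /* bytes this process has mapped       */
  static int    seg_fd;          /* descriptor of the POSIX shm object  */
  /* Current size, published in a shared header by the growing process. */
  static volatile size_t *seg_real_size;

  static void on_segv(int sig, siginfo_t *si, void *ctx)
  {
      (void)sig; (void)ctx;
      char  *fault = (char *)si->si_addr;
      char  *base  = (char *)seg_base;
      size_t real  = *seg_real_size;

      /* Fault inside the grown-but-not-yet-mapped tail?  Map it and
       * let the kernel restart the faulting instruction. */
      if (fault >= base + seg_mapped && fault < base + real) {
          void *want = base + seg_mapped;
          if (mmap(want, real - seg_mapped, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FIXED, seg_fd,
                   (off_t)seg_mapped) == want) {
              seg_mapped = real;
              return;            /* the faulting access is retried */
          }
      }
      abort();                   /* a genuine crash: give up */
  }

  static void install_handler(void)
  {
      struct sigaction sa = {0};
      sa.sa_sigaction = on_segv;
      sa.sa_flags = SA_SIGINFO;
      sigemptyset(&sa.sa_mask);
      sigaction(SIGSEGV, &sa, NULL);
  }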
> 2) relies on the assumption that a correct program can't know about address
> X until it has malloc()'d it.  With multiple processes this assumption is
> extended so that a process can't know about address X until _some_ process
> has malloc()'d it _and_ communicated it to others.
With malloc all threads have that memory atomically mapped in their 
address space, because they share the address space. Once one thread 
succeeds in mapping it, all threads have succeeded. Doing this with 
shared memory is a lot more difficult.
> So, for N processes, have in the initial mapping:
> [...]
I think I can find some weaknesses in this scheme, but I'll try to 
think a bit more about it.
Regards,
Ion