Subject: Re: [boost] [thread] On shared_mutex
From: Jeffrey Lee Hellrung, Jr. (jhellrung_at_[hidden])
Date: 2010-11-28 21:27:41
On 11/28/2010 4:33 PM, Howard Hinnant wrote:
> On Nov 28, 2010, at 7:07 PM, Jeffrey Lee Hellrung, Jr. wrote:
>
>> Next topic:
>>
>> I'm now confused with the implementation of try_lock_until...let's start with the first line:
>>
>> std::unique_lock<L0>  u0(l0, t)
>>
>> So...L0 is itself a lock (specifically an ExclusiveLock, I would guess, although the second implementation of average is using a "Lock"...typo?),
>
> Yes, typo.  Thanks for catching that!  Each of those Locks should be ExclusiveLock.  My intent was to reuse A's typedefs from a few paragraphs above. I've updated:
>
> http://home.roadrunner.com/~hinnant/mutexes/locking.html#Upgrade
>
>> so I don't think this makes sense.  Should be something like l0.try_lock_until(t) ???
>
> Actually no.  This is one of the things that makes generic locking algorithms so cool.  L0 is what we now call Lockable (thanks to Anthony), or maybe more correctly TimedLockable.  L0 could be a mutex, or could be a lock.  This algorithm doesn't know or care.  Either way it creates the lock:  std::unique_lock<L0>.  And the constructor called by:
>
>     std::unique_lock<L0>  u0(l0, t);
>
> will internally call l0.try_lock_until(t).  Both std::unique_lock and std::timed_mutex support this syntax, as do ting::shared_lock, ting::shared_mutex, etc.
>
> The reason to wrap L0 up in std::unique_lock<L0>  is exception safety.  If l1.try_lock_until(t) either returns false or throws, then u0.~std::unique_lock<L0>() unlocks l0 on the way out.  That way you get either all or none of the L's locked on normal or exceptional return of try_lock_until.
Got it.
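Just to check my understanding, here is a minimal sketch of what I think 
the generic two-lock algorithm looks like (assuming L0 and L1 are 
TimedLockable and t is a chrono time_point; the names and structure are 
my own, not lifted from your page):

    #include <chrono>
    #include <mutex>

    // Sketch: L0 and L1 model TimedLockable (either a timed mutex or a
    // lock type); Clock/Duration come from the caller's deadline.
    template <class L0, class L1, class Clock, class Duration>
    bool
    try_lock_until(std::chrono::time_point<Clock, Duration> t, L0& l0, L1& l1)
    {
        // The (lockable, time_point) constructor internally calls
        // l0.try_lock_until(t).
        std::unique_lock<L0> u0(l0, t);
        if (u0.owns_lock())
        {
            // If this fails or throws, u0's destructor unlocks l0 on the
            // way out, so the caller sees all-or-nothing locking.
            if (l1.try_lock_until(t))
            {
                u0.release();   // success: leave both locked for the caller
                return true;
            }
        }
        return false;
    }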
Please excuse the basic concurrent programming questions; I just want to 
make sure I understand the exposition correctly...
It looks like this implementation of try_lock_until could "fail" under 
contention, i.e., neither contending thread would perform the averaging 
computation.  Suppose thread 1 calls a1.average(a2) and, simultaneously, 
thread 2 calls a2.average(a1).  It could happen that thread 1 locks 
a1.mut_ in the first line of try_lock_until, thread 2 similarly locks 
a2.mut_, and each thread then waits to acquire the other's mutex.  Both 
time out before either releases the mutex it holds, so neither thread 
enters the averaging computation.  Is this accurate?  If so, it seems 
undesirable, though not nearly as bad as a deadlock scenario.  Is it 
possible to guarantee this won't happen?
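For concreteness, the interleaving I have in mind is roughly the 
following (a contrived sketch; this A and the 10ms timeout are 
hypothetical stand-ins for the class and deadline on your page):

    #include <chrono>
    #include <mutex>
    #include <thread>

    // Hypothetical stand-in for the A class being discussed.
    struct A
    {
        std::timed_mutex mut_;
        double data_ = 0;

        void average(A& a)
        {
            auto t = std::chrono::steady_clock::now()
                   + std::chrono::milliseconds(10);
            std::unique_lock<std::timed_mutex> u0(mut_, t);  // lock our own mutex
            if (u0.owns_lock() && a.mut_.try_lock_until(t))  // then try the other
            {
                std::lock_guard<std::timed_mutex> u1(a.mut_, std::adopt_lock);
                data_ = a.data_ = (data_ + a.data_) / 2;
            }
            // else: under the interleaving above, both threads can time out
            // here and neither performs the averaging.
        }
    };

    int main()
    {
        A a1, a2;
        std::thread t1([&]{ a1.average(a2); });  // first tries a1.mut_, then a2.mut_
        std::thread t2([&]{ a2.average(a1); });  // first tries a2.mut_, then a1.mut_
        t1.join();
        t2.join();
    }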
Up to this point, the proposed changes to the Boost.Thread 
synchronization concepts and types look reasonable, but I've never done 
anything too advanced with mutexes...
- Jeff