From: bill_kempf (williamkempf_at_[hidden])
Date: 2002-03-14 11:14:23
--- In boost_at_y..., "danl_miller" <danl_miller_at_b...> wrote:
> --- In boost_at_y..., "bill_kempf" <williamkempf_at_h...> wrote:
> > How do you disagree with anything I said?  Nothing in your 
> > description differs from anything I've said, nor does it actually 
> > mention any need for any specific mutex scheduling.  In a thread
> > pool it doesn't matter what thread acquires the mutex, and thus
> > acquires the "job".  All that matters is that *some* thread
> > acquires the mutex and thus the "job".  It doesn't even matter if
> > a thread (or set of threads) is starved in this case, because the
> > jobs are still being executed in a timely manner.  A thread can
> > remain idle indefinitely and not cause any problems with the
> > operation of the thread pool.
> > 
> > Bill Kempf
> 
>   Bill, you claimed in the following
> 
> > > --- In boost_at_y..., "bill_kempf" <williamkempf_at_h...> wrote:
> > > > LIFO must be applied to the queued jobs, but there need not be
> > > > a LIFO guarantee on any synchronization primitives to do this.
> > > > So I think this is a different topic from what was being
> > > > discussed.
> 
> that a synchronization primitive which unpends its pending threads
> in LIFO-order was unrelated to scheduling jobs in a thread-pool which is
> what Darryl & I were delving into in the postings included in their
> entirety below.
One of us is not understanding the other, so let me try again.
The threads *do not* need to be unpended in LIFO order.  Doing so 
actually produces a *less* efficient implementation, since a very 
strict ordering policy must be obeyed, and it provides *no* benefit 
to the user.  What needs to be done in some order is the *job*.  A 
strong semaphore is not needed to accomplish this.  A weak mutex and 
condition variable are all that's required.  That a pending thread may 
starve in this case is irrelevant and of no concern.
In other words, the scheduling policy of the threads and/or of the 
synchronization primitives used is irrelevant to the implementation 
of an efficient thread pool.
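To make this concrete, here's a rough sketch of the kind of pool I
have in mind (illustrative only, not code from Boost.Threads; I've
used standard-library style names -- read std::mutex and
std::condition_variable as boost::mutex and boost::condition, and
std::thread as boost::thread).  One weak mutex and one condition
variable guard a FIFO deque of jobs.  Nothing specifies *which*
pending worker wakes for a given job, and nothing needs to:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// FIFO queue of jobs guarded by one ordinary ("weak") mutex and one
// condition variable.  Which worker wakes for a given job is
// unspecified; the ordering of the *jobs* is preserved by the deque.
class thread_pool
{
public:
    explicit thread_pool(std::size_t n)
    {
        for (std::size_t i = 0; i != n; ++i)
            workers_.emplace_back([this] { work(); });
    }

    ~thread_pool()
    {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            done_ = true;
        }
        cv_.notify_all();
        for (std::thread& t : workers_)
            t.join();
    }

    void submit(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            jobs_.push_back(std::move(job));   // jobs keep FIFO order
        }
        cv_.notify_one();   // wake *some* pending worker -- any will do
    }

private:
    void work()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(mtx_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (jobs_.empty())
                    return;                    // shutting down, queue drained
                job = std::move(jobs_.front());
                jobs_.pop_front();
            }
            job();                             // run the job outside the lock
        }
    }

    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> jobs_;
    bool done_ = false;
    std::vector<std::thread> workers_;
};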
> That different-topic claim is false.  I provided a factual
> counter-example which dispels your false claim.  The counter-example
> showed precisely how a thread-pool whose threads pend on a single
> message-queue is in fact a thread-pool which
> 
>   1) is a fully-functioning thread-pool which schedules its threads;
> 
>   2) makes direct & intimate use of an interthread message-queue
>   which in turn is based directly & intimately on a
>   thread-synchronization primitive (i.e., semaphore, or if you
>   prefer a reinvented semaphore implemented by condition-variables,
>   for those people who do not consider semaphore in the set of
>   thread-synchronization primitives, which was the line of reasoning
>   which led to semaphore's unfortunate removal);
> 
>   3) permits a LIFO ordering/scheduling-priority or FIFO
>   ordering/scheduling-priority or an undefined
>   ordering/scheduling-priority
> 
>   4) makes use of a thread-synchronization primitive as the one &
>   only way that the LIFO vs FIFO vs undefined
>   ordering/scheduling-priority is provided in this thread-pool.
I followed you until (4).  The synchronization primitive is *not* 
what determines the ordering of the jobs... it only determines the 
ordering of the threads, which is irrelevant to the use of a thread 
pool.  It doesn't matter which thread runs a job; it only matters that 
the jobs are dispatched to *some* thread in a LIFO/FIFO (usually 
FIFO, actually) order.
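(Even the "reinvented semaphore implemented by condition-variables"
from your point (2) has this property.  A minimal sketch -- again my
own illustration, same naming caveats as above -- makes it plain that
neither the semaphore nor the primitives beneath it say which of
several pending acquirers gets the next count:

#include <condition_variable>
#include <mutex>

// Counting semaphore built from a mutex and a condition variable.
// When several threads are blocked in acquire(), nothing here defines
// which one is released next -- the wake-up order is unspecified.
class counting_semaphore
{
public:
    explicit counting_semaphore(unsigned initial) : count_(initial) {}

    void acquire()
    {
        std::unique_lock<std::mutex> lk(mtx_);
        cv_.wait(lk, [this] { return count_ > 0; });
        --count_;
    }

    void release()
    {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            ++count_;
        }
        cv_.notify_one();   // wakes an arbitrary waiter
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    unsigned count_;
};

That lack of a guarantee is exactly what I mean by "weak", and it
costs the pool nothing.)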
 
> Because I have explained a decades-old thread-pool design which is in
> common practice in embedded real-time systems, I have disproved your
> false claim that the ordering-of-unpending/scheduling-priority of
> thread-synchronization primitives is a different topic from
> LIFO-ordering of jobs in thread-pools by showing how they can be
> effectively used together.
The fact that they can be used together does not equate to them being 
the same topic.
> [I offer vacuous agreement with your claim that there are
> alternatives to a LIFO guarantee on thread-synchronization
> primitives.  In that artificially-narrow
> (inferior-)alternatives-do-exist sense, I agree that we do not
> absolutely "need" such guarantees on thread-synchronization
> primitives because a god-controller executing in user-space can be
> written which could perform the scheduling of threads in the
> thread-pool more complexly & less efficiently than the
> threads-pend-on-a-single-message-queue lightweight technique of
> teaching the kernel/operating-system to schedule a thread from the
> set of message-queue-pending-threads for us.  But using a different
> sense of the word "need", I vehemently disagree.  We need the
> highly-efficient time-honored RTOS-style interthread message-queue
> concept if for no other reason than to implement an efficient
> lightweight thread-pool where all threads pend on a single
> message-queue which doles out the work.  In turn we need the
> semaphore concept to implement the producer/consumer pattern in
> message-queue.  Also we need the semaphore concept to implement the
> RTOS-style pool-allocation pattern in an efficient manner in all
> pools of shared resources-other-than-threads, which obviously are
> allocated by some sort of pool-controlling unit(s) of software.
> (Why "other than threads"?  Because thread-pool scheduling is a
> truly unique case in that the scheduling of threads from the set of
> threads pending on that single message-queue can be performed by the
> beautiful intrinsic interaction between the semaphore
> thread-synchronization primitive teaching the
> operating-system/kernel which thread from the set of pending threads
> to allocate&unpend as one unified operation without the assistance
> of an external god-controller executing in yet another thread.)
> Speaking of needs, note that we do not need the message-queue (and
> thus its internal semaphore) to have LIFO order unless paging in LRU
> threads in an operating-environment equipped with VM (or with some
> other penalty to threads which have not executed recently).]
The main thing I disagree with in what you just said has to do with 
message queues.  With a message queue the scheduling policy for the 
*threads* may well be something that's important to control.  
However, a thread pool uses a very specialized message queue, and the 
scheduling of the threads for this case is irrelevant.  It doesn't 
matter if there are 50 threads pending and only one thread ever runs 
the jobs, so long as the jobs never sit in the queue while there's an 
available thread.  The *only* thing that needs scheduling is the 
jobs, which *will* be scheduled by the mere fact that they are placed 
in a queue.
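Or, in terms of the pool sketch earlier in this message (again purely
illustrative): start far more workers than there is work, and the
jobs still come off the front of the queue in submission order;
whether one worker or all fifty end up running them is invisible to
the user.

#include <cstdio>

int main()
{
    thread_pool pool(50);                  // 50 threads pending on the queue
    for (int i = 1; i <= 3; ++i)
        pool.submit([i] {
            // Jobs are dequeued in the order 1, 2, 3; which worker
            // runs each one is unspecified and doesn't matter.
            std::printf("running job %d\n", i);
        });
}   // the destructor drains the remaining jobs and joins the workers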
Bill Kempf