Subject: Re: [boost] [fiber] Suggestions regarding asio
From: Nat Goodspeed (nat_at_[hidden])
Date: 2016-09-24 22:26:07
On Thu, Sep 22, 2016 at 11:46 PM, Tatsuyuki Ishi
<ishitatsuyuki_at_[hidden]> wrote:
> > When considering using multiple threads with an io_service passed to a
> > custom Boost.Fiber scheduler, I keep bumping into the problem that a given
> > fiber scheduler instance must always run on its original thread -- but I
> > know of no way to give an Asio handler "thread affinity." How do we get
> > processor cycles to the scheduler on the thread whose fibers it is managing?
> Since we only poll on one master thread, we can easily get affinity.
>
> Maybe pseudocode is easier:
> * We have two classes: one for main polling, the others waiting for jobs.
> * The first one, polls from asio using the same way as the example (poll,
> or run_one).
> while(true)
> {
> if(!run_one()) stop_other_threads(); // No more work in asio; give up waiting
> poll(); // flush the queue
> // Now we should have some fibers in the queue.
> yield(); // continue execution
> }
I think I see. The while loop above is in the lambda posted by
boost::fibers::asio::round_robin::service's constructor?
Would it still be correct to recast it as follows?
while (run_one())
{
poll(); // flush the queue
// Now we should have some fibers in the queue.
yield(); // continue execution
}
stop_other_threads(); // No more work in asio, give up waiting
We might want to post() a trivial lambda to the io_service first,
because at the time that lambda is entered, the consuming application
might or might not already have posted work to the io_service.
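For concreteness, here's how I'd sketch that recast loop as a std-only
analogue. Note that work_queue below is a made-up stand-in for the
io_service (just run_one()/poll(), no real I/O), and service_loop stands
in for the lambda the scheduler would post -- none of this is Boost code:

```cpp
#include <cstddef>
#include <deque>
#include <functional>

// Hypothetical stand-in for the io_service (NOT Boost code): run_one()
// runs one posted handler, returning false when there is no more work;
// poll() drains whatever handlers remain without blocking.
class work_queue {
public:
    void post(std::function<void()> h) { handlers_.push_back(std::move(h)); }
    bool run_one() {
        if (handlers_.empty()) return false;
        auto h = std::move(handlers_.front());
        handlers_.pop_front();
        h();
        return true;
    }
    std::size_t poll() {
        std::size_t n = 0;
        while (run_one()) ++n;
        return n;
    }
private:
    std::deque<std::function<void()>> handlers_;
};

// The recast dispatch loop: service the queue until run_one() reports
// no more work, flushing remaining handlers on each pass. Returns the
// number of passes made through the loop body.
int service_loop(work_queue& q) {
    int passes = 0;
    while (q.run_one()) {
        q.poll();   // flush the queue
        ++passes;   // here the real scheduler would yield() to ready fibers
    }
    // stop_other_threads() would go here: no more work in asio
    return passes;
}
```

With handlers already posted, the loop makes at least one pass and then
falls out cleanly once run_one() finds the queue empty.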
> * The other threads use a condvar to wait on the ready queue.
> awakened(){queue.push_back(f); cv.notify_[one,all](); iosvc.post([]{});}
> pick_next(){while(!jobs_available){cv.wait();} return job;}
Okay: you're suggesting to share the ready queue between multiple
threads. One of them directly interacts with the io_service in
question; the others only run ready fibers.
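A std-only sketch of that shared ready queue, to make sure I've
understood your awakened()/pick_next() pairing -- the class name is my
invention, an int stands in for a fiber context, and the real awakened()
would also iosvc.post([]{}) to wake the polling thread, as you showed:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Hypothetical shared ready queue (NOT the Boost.Fiber API): awakened()
// publishes a ready "fiber" and notifies a waiter; pick_next() blocks on
// the condition variable until a job or a shutdown request arrives.
class shared_ready_queue {
public:
    void awakened(int job) {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            queue_.push_back(job);
        }
        cv_.notify_one();
        // real code would also iosvc.post([]{}) here to wake asio
    }
    // Returns false only once stop() has been called AND the queue has
    // drained, so no ready fiber is ever dropped at shutdown.
    bool pick_next(int& job) {
        std::unique_lock<std::mutex> lk(mtx_);
        cv_.wait(lk, [this]{ return !queue_.empty() || done_; });
        if (queue_.empty()) return false;
        job = queue_.front();
        queue_.pop_front();
        return true;
    }
    void stop() {   // the stop_other_threads() analogue
        {
            std::lock_guard<std::mutex> lk(mtx_);
            done_ = true;
        }
        cv_.notify_all();
    }
private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<int> queue_;
    bool done_ = false;
};
```

Worker threads would simply loop `while (q.pick_next(j)) run(j);` and
exit when stop() is called from the polling thread.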
I'm nervous about waiting on the condition_variable in pick_next()
instead of suspend_until() because I'm concerned about losing the
fiber manager's knowledge of when it might next need to run a ready
fiber -- due to either sleep or timeout. In fact, as I think about it,
we'd probably need to share among participating threads an asio timer
and a current earliest time_point. Each participating thread's
suspend_until() would check whether the passed time_point is earlier
than the current earliest time_point, and if so reset the shared
timer. (I think for that we could get away with a direct call into the
io_service from the worker thread.)
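Here's roughly the bookkeeping I have in mind, again std-only and
hypothetical (shared_timer and propose_deadline are made-up names; the
real code would re-arm a shared asio deadline timer whenever
propose_deadline() reports a new earliest time_point):

```cpp
#include <chrono>
#include <mutex>

// Hypothetical shared "earliest wake-up" record: each thread's
// suspend_until() offers its deadline; if it is earlier than the
// current earliest, the shared (asio) timer would need re-arming.
class shared_timer {
    using clock = std::chrono::steady_clock;
public:
    // Returns true when the caller's deadline became the new earliest,
    // i.e. the caller should reset the shared asio timer.
    bool propose_deadline(clock::time_point tp) {
        std::lock_guard<std::mutex> lk(mtx_);
        if (!armed_ || tp < earliest_) {
            earliest_ = tp;
            armed_ = true;
            return true;    // shared timer must be re-armed for tp
        }
        return false;       // existing timer already fires soon enough
    }
    clock::time_point earliest() const {
        std::lock_guard<std::mutex> lk(mtx_);
        return earliest_;
    }
private:
    mutable std::mutex mtx_;
    clock::time_point earliest_{};
    bool armed_ = false;
};
```

A later deadline never disturbs the timer; only an earlier one forces a
reset, which is the property that keeps the direct call into the
io_service from the worker thread cheap in the common case.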
I haven't quite convinced myself yet that that would suffice to wake
up asio often enough.
It sounds interesting, and probably a useful tactic to add to the
examples directory. I don't have time right now to play with the
implementation. If you get a chance to make it work before you hear
back from me, please post your code.
Thank you very much for your suggestion!