Subject: Re: [boost] [Fibers] Performance
From: Gavin Lambert (gavinl_at_[hidden])
Date: 2014-01-20 18:44:03
On 20/01/2014 20:07, Quoth Oliver Kowalke:
> with coroutines you can't use the one-fiber-per-client because you are
> missing the synchronization classes.
You can if you don't require synchronisation.  Something that's just 
serving up read-only data (eg. a basic in-memory web server) or handing 
off complex requests to a worker thread via a non-blocking queue would 
be an example of that.  Every fiber is completely independent of every 
other -- they don't care what the others are up to.
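For the hand-off case, the queue between the fibers and the worker 
thread only needs try-style operations -- a fiber that finds the queue 
empty just returns to the scheduler instead of blocking.  A minimal 
sketch in plain std C++ (the class and its names are my illustration, 
not anything from the proposed library; std::optional is post-2014, an 
older version would use a bool/out-param pair):

```cpp
#include <mutex>
#include <optional>
#include <queue>

// A mutex-guarded queue with non-blocking push/pop.  Neither side ever
// waits: try_pop() simply reports "nothing there" on an empty queue.
template <typename T>
class handoff_queue {
public:
    void try_push(T item) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push(std::move(item));
    }
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return std::nullopt;
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
private:
    std::mutex m_;
    std::queue<T> q_;
};
```

A client fiber pushes a request descriptor and goes back to its I/O 
loop; the worker thread polls (or is woken by other means) and pops.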
The thing is though that the main advantage of thread-per-client is 
handling multiple requests simultaneously.  And you lose that advantage 
with fiber-per-client unless you sprinkle your processing code with 
fiber interruption points (either manually or via calls to the sync 
classes you're proposing) -- and even then I think that only provides 
much benefit for long-running-connection protocols (like IRC or telnet), 
not request-response protocols (like HTTP), and only when individual 
processing time is very short.
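Those interruption points amount to periodically yielding inside 
CPU-bound loops, roughly like this (sketch only: in the proposed 
library the call would be something like this_fiber::yield(); the 
yield_hook indirection is mine so the snippet stands alone):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Stand-in for the fiber library's yield call; injected so the sketch
// is self-contained and testable without a fiber runtime.
std::function<void()> yield_hook = [] {};

// CPU-bound request processing with interruption points sprinkled in:
// every `stride` items, hand control back so sibling fibers on the
// same thread can make progress.
long process_request(const std::vector<int>& data, std::size_t stride = 1024) {
    long sum = 0;
    for (std::size_t i = 0; i < data.size(); ++i) {
        sum += data[i];
        if ((i + 1) % stride == 0)
            yield_hook();  // interruption point
    }
    return sum;
}
```

The cost is that latency for everyone degrades as the number of 
simultaneously-processing fibers on the thread grows, which is why this 
only pays off when each individual request is cheap.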
For a system that has longer processing times but still wants to handle 
multiple requests (where processing time is CPU bound, rather than 
waiting on other fibers), the best design would be a limited-size 
threadpool that can run any of the per-client fibers.  And AFAIK your 
proposed library has no support for this scenario.
I'm not saying it necessarily *needs* this, but if you're going to talk 
about fibers-as-useful-to-ASIO I think this case is going to come up 
sooner rather than later, so it may be worthy of consideration in the 
library design.
Another scenario that doesn't require fiber migration, but does require 
cross-thread-fiber-synch, is:
   - one thread running N client fibers
   - M worker threads each running one fiber
If a client fiber wants to make a blocking call (eg. database/file I/O) 
it could post a request to a worker, which would do the blocking call 
and then post back once it was done.  This would allow the client fibers 
to keep running but the system would still bottleneck once it had M 
simultaneous blocking calls.  (A thread-per-client system wouldn't 
bottleneck there, but it loses performance if there are too many 
non-blocked threads.)
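To make the M-worker side of that concrete, here is a minimal version 
in plain std C++ (the pool and all its names are my illustration, not 
anything in the proposed library; "post the result back to the client 
fiber's thread" is reduced to whatever the job itself does when it 
finishes):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// M worker threads draining a shared queue of blocking jobs.  Client
// fibers enqueue work and keep running; throughput caps at M
// simultaneous blocking calls, as described above.
class blocking_call_pool {
public:
    explicit blocking_call_pool(std::size_t m) {
        for (std::size_t i = 0; i < m; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~blocking_call_pool() {
        {
            std::lock_guard<std::mutex> lk(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
    // Called from a client fiber: hand the blocking call to a worker.
    // In the real design the job would end by posting its result back
    // to the client fiber's thread so that fiber can resume.
    void post(std::function<void()> blocking_call) {
        {
            std::lock_guard<std::mutex> lk(m_);
            jobs_.push(std::move(blocking_call));
        }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (jobs_.empty()) return;  // done_ set and queue drained
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // the blocking database/file I/O happens here
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    bool done_ = false;
    std::vector<std::thread> workers_;
};
```

Once all M workers are inside job(), further requests just queue up -- 
that's the bottleneck described above.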
Neither design seems entirely satisfactory.  (An obvious solution is to 
never use blocking calls, but that's not always possible.)