Subject: Re: [boost] [next gen future-promise] What to call the monadic return type?
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2015-05-26 21:41:30
On 26 May 2015 at 19:29, Hartmut Kaiser wrote:
> Optimizing away one allocation to create a promise/future pair (or just a
> future for make_ready_future) will have no measurable impact in the context
> of any I/O, be it wait free or asynchronous or both. 
No one here is claiming that better futures have any effect on i/o
performance except you. You are reading only the parts of the thread
you want to in order to confirm what you already believe (apparently
that I have some master plan of "taking over" Boost). You drew the
link here between i/o and futures; none of us claimed it. The thread
earlier was clearly about two entirely separate topics. You conflated
them to make your own personal point.
> In general, all I'm hearing on this thread is 'it could be helpful', 'it
> should be faster', 'it can be important', or 'makes a big difference', etc.
> I was hoping that we as a Boost community can do better! 
Constant cherry picking of thread topics just to naysay and put down
any discussion of alternative idioms and designs is neither positive
nor helpful.
> Nobody so far has shown the impact of this optimization technique on a real
> world applications (measurements). Or at least, measurement results from
> artificial benchmarks under heavy concurrency conditions (using decent
> multi-threaded allocators like jemalloc or tcmalloc). I'd venture to say
> that there will be no measurable speedup (unless proven otherwise). 
Again, nobody claimed that. I was quite clear that what I primarily
want is opcode count reduction as part of unit testing. Its main
purpose for me is a per-commit CI test that I am writing perfectly
optimal code, not just mostly optimal code. A happy consequence is a
potentially runtime-cost-optimal monadic transport, which is what I
came here to bikeshed a name for and to gauge interest in developing.
Feedback on both has been positive and useful, so I will proceed.
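To make concrete the kind of monadic transport I mean, here is a
purely illustrative sketch of the general shape under discussion. It
is not the actual proposed API: the name monad<T> and its members are
hypothetical placeholders, and it uses C++17 std::variant only for
brevity.

#include <system_error>
#include <utility>
#include <variant>

// Hypothetical placeholder for the monadic transport under
// discussion: carries either nothing, a value, or an error, entirely
// by value and with no allocation.
template <class T>
class monad {
  std::variant<std::monostate, T, std::error_code> state_;

public:
  monad() = default;
  monad(T v) : state_(std::move(v)) {}
  monad(std::error_code ec) : state_(ec) {}

  bool has_value() const noexcept { return state_.index() == 1; }

  // Returns the value or throws the stored error, so callers cannot
  // silently ignore failure.
  T &get() {
    if (auto *ec = std::get_if<std::error_code>(&state_))
      throw std::system_error(*ec);
    return std::get<T>(state_);
  }

  // Monadic bind: apply f if a value is present, otherwise propagate
  // the empty or errored state unchanged. f must return another monad.
  template <class F>
  auto then(F &&f) -> decltype(f(std::declval<T &>())) {
    using R = decltype(f(std::declval<T &>()));
    if (auto *ec = std::get_if<std::error_code>(&state_)) return R(*ec);
    if (!has_value()) return R();
    return f(std::get<T>(state_));
  }
};

Because the whole state lives in the object itself, a call chain over
such a type is exactly the kind of thing whose generated opcode count
can be pinned down in a per-commit test.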
I have also been very clear that this new design solves my major
problems, not *the* major problems with existing futures. That is
what I designed it to do. I believe it also solves the same problems
that futures face in ASIO. Once finished and deployed, if others find
it solves their problems too, then it has a great chance of becoming
a next gen Boost future. If it doesn't, then it won't.
I have been working on this replacement future design since October,
with multiple presentations of my code experiments here to gain
feedback. Others have presented their code experiments here too. We
have all reviewed each other's design ideas and code, and evolved our
own designs and code in response. If this exchange of code
experiments between people who all have similar problems with
existing futures isn't exactly what Boost is all about, then I don't
know what negative and cynical vision of Boost you have. Multiple
people here have problems with futures, and multiple people are
experimenting with improvements. This is something that should be
welcomed, not constantly put down with negativity.
I would expect you'll see me present benchmarks here in due course
once the implementation is drop-in replaceable. I am expecting about
a 5% performance improvement in AFIO as a drop-in replacement, and a
20% improvement once I replace AFIO's continuations infrastructure
with .then() and remove the central spinlocked unordered_map. That
should help further close the gap between AFIO and ASIO, which
currently stands at between 15% and 32%. That gain is what I am
developing these futures for, after all - to solve my problems, and
maybe as a happy consequence solve other people's problems too.
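For anyone unfamiliar with the continuations change I mean above,
here is a purely illustrative sketch of the difference. The type and
member names are hypothetical, not AFIO's actual internals, and the
.then() shown is a deliberately naive stand-in.

#include <cstddef>
#include <functional>
#include <future>
#include <mutex>
#include <unordered_map>
#include <utility>

// Old style: every completion looks up its continuation in one
// central, lock-protected map, so unrelated operations contend on
// the same lock.
struct central_registry {
  std::mutex lock;
  std::unordered_map<std::size_t, std::function<void()>> continuations;

  void complete(std::size_t id) {
    std::function<void()> f;
    {
      std::lock_guard<std::mutex> g(lock);
      auto it = continuations.find(id);
      if (it == continuations.end()) return;
      f = std::move(it->second);
      continuations.erase(it);
    }
    f();  // run the continuation outside the lock
  }
};

// New style: the continuation travels with the future itself, so
// completing one operation never touches shared state belonging to
// unrelated operations.
template <class T>
struct future_with_then {
  std::future<T> f;

  template <class F>
  auto then(F &&cont) {
    // Deferred execution is a stand-in only; a real .then() would
    // chain asynchronously rather than defer to the waiter.
    return std::async(std::launch::deferred,
                      [fut = std::move(f),
                       cont = std::forward<F>(cont)]() mutable {
                        return cont(fut.get());
                      });
  }
};

The 20% figure above is about removing the first pattern's central
lock and map lookups from every continuation, not about the toy
.then() itself.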
Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/