I just read:
and noticed it's kind of old, and most of the answers concern pre-C++11 code. These days we have lambdas, which can even deduce their return type, so lazy evaluation seems to boil down to just passing them around: instead of
auto x = foo();
you write
auto unevaluated_x = []() { return foo(); };
and then evaluate when/where you need to:
auto x = unevaluated_x();
Seems like there's nothing more to it. However, one of the answers there suggests using futures with asynchronous launching. Can someone lay out whether (and why) futures are significant for lazy evaluation, in C++ or more abstractly? It seems as though futures may very well be evaluated eagerly, just on another thread, perhaps at a lower priority than whatever created them; and anyway, that should be implementation-dependent, right?
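For concreteness, here is a minimal sketch of what I mean about launch policies (foo is just a placeholder, as above):

#include <future>
#include <iostream>

int foo() {
    std::cout << "evaluating foo()\n";
    return 42;
}

int main() {
    // Deferred: foo() does not run until .get() or .wait() is called,
    // and then it runs synchronously on the thread that calls it.
    auto lazy = std::async(std::launch::deferred, foo);

    // Async: foo() may start running eagerly on another thread right away.
    auto eager = std::async(std::launch::async, foo);

    std::cout << "before get()\n";
    std::cout << lazy.get() << '\n';   // evaluation happens here
    std::cout << eager.get() << '\n';  // just waits for the already-started work
}

With the default policy (std::launch::async | std::launch::deferred) the implementation picks one, which is the implementation-dependent part I'm asking about.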
Also, are there other modern C++ constructs which are useful to keep in mind in the context of lazy evaluation?
In a multithreaded application where simultaneous requests are made for data that takes a lot of effort to prepare, thread-safe memoization can be used so that callers not only avoid redoing work that has already been done but also avoid starting their own copy of work that is already in progress.
Using a future (or futures) to deliver the data is the easy part: C++ already has that implemented. The tricks are (a) finding a way to ensure that multiple threads asking for the same data create objects that will be treated as equivalent (or identical) keys in a map (that part is up to the user), and (b) using a concurrent map keyed on those objects with a future as the mapped value.

At most one of any simultaneously executed insert, emplace or try_emplace calls attempted with the same key would be able to insert a key-value pair, and all of them would return an iterator to the same key-value pair (which could have already been in the map for some time). Using a std::unordered_map guarded by a mutex would work, but it doesn't scale very well. Java already has concurrent maps with excellent performance in these situations; C++ needs the same, preferably in the Standard Library, as soon as possible.
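As a rough, minimal sketch only (the mutex-guarded std::unordered_map mentioned above, not a scalable concurrent map; compute() and the std::string key are stand-ins for the user's own expensive work and key type):

#include <future>
#include <mutex>
#include <string>
#include <unordered_map>

// Stand-in for the expensive preparation of the data.
std::string compute(const std::string& key) {
    return "value-for-" + key;
}

class MemoCache {
    std::mutex mtx_;
    std::unordered_map<std::string, std::shared_future<std::string>> map_;
public:
    std::shared_future<std::string> get_or_start(const std::string& key) {
        std::packaged_task<std::string()> task([key] { return compute(key); });
        std::shared_future<std::string> fut = task.get_future().share();
        {
            std::lock_guard<std::mutex> lock(mtx_);
            // try_emplace inserts only if the key is absent; every loser of
            // the race gets an iterator to the winner's already-stored future.
            auto [it, inserted] = map_.try_emplace(key, fut);
            if (!inserted)
                return it->second;  // work already finished or in progress
        }
        task();  // this thread won the race: do the work exactly once
        return fut;
    }
};

A caller just does memo.get_or_start(key).get(): whichever thread wins the race computes the value, and everyone else blocks on the same shared_future until the value (or an exception) is ready. A genuinely concurrent map would replace the mutex-plus-unordered_map part, but the future-based shape stays the same.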