std::launder intentionally obfuscates the origin of a pointer for the abstract machine / the compiler, so that source and result may have different lifetimes and types. When used in e.g. (static) vector situations, where a semi-large storage holds a number of objects, laundering a "dry" pointer into that storage produces a clean but "wet" pointer (wet as in laundered) of the appropriate type:
// appropriately sized and aligned data member
std::byte* const dry = data + some_index;
T* const wet = std::launder(reinterpret_cast<T*>(dry));
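For concreteness, a minimal sketch of the kind of storage I have in mind (the class name static_vector, the capacity N and the member names are only placeholders for this question):

// hypothetical fixed-capacity storage; names are placeholders for illustration
#include <cstddef>
#include <new>

template <typename T, std::size_t N>
class static_vector {
    // appropriately sized and aligned raw storage
    alignas(T) std::byte data[N * sizeof(T)];
    std::size_t count = 0;

public:
    void push_back(const T& value) {
        // construct an element inside the raw storage
        ::new (static_cast<void*>(data + count * sizeof(T))) T(value);
        ++count;
    }

    T& operator[](std::size_t some_index) {
        std::byte* const dry = data + some_index * sizeof(T);
        // "wash" the pointer: the compiler may not assume anything about its origin
        T* const wet = std::launder(reinterpret_cast<T*>(dry));
        return *wet;
    }

    // destructors, bounds checks, const overloads etc. omitted
};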
The idea is that the compiler cannot "see" past std::launder, even though no implementation hides behind a separate compilation unit. The question is whether this may still cost legitimate optimisation opportunities.
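As an aside, the textbook case where this fence is actually needed (not my use case, just for illustration) is replacing an object that has a const member; without laundering, the compiler is entitled to assume the const member never changed:

#include <new>

struct X { const int n; };

int textbook_example() {
    X x{1};
    ::new (&x) X{2};             // ends the lifetime of x, constructs a new X in its place
    // return x.n;               // UB in C++17: x does not refer to the new object (const member)
    return std::launder(&x)->n;  // OK: yields 2; the fence forbids caching the old value 1
}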
Returning to the optimisation question, consider for example:
T object;
// ...
T* ptr = &object;
ptr->member = 42;
return ptr->member;
Presumably, the addresses of object and member do not need to be recomputed when reading the member back, because the value likely still sits in a register. But if every access through ptr were laundered, for example because it was obtained from some operator[](size_type) that has to launder the storage pointer, this might no longer be true:
storage[i].member = 42;
return storage[i].member;
// storage's operator[]() launders internally
- Is my understanding of optimisation fences correct?
- Does this mean re-loading of (sub-)objects at the assembly level has to occur because of launder?
- If not, why not?
- Is there a way around this other than keeping a reference to storage[i] (see the sketch after this list)?
- Or should I rather think of std::launder as a function that does nothing but magically makes UB code non-UB?
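For the "keeping a reference" workaround, I imagine something along these lines (assuming storage is the laundering container sketched above):

T& elem = storage[i];   // operator[] launders exactly once
elem.member = 42;
return elem.member;     // ideally no second launder and no reload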
Implications: If this is correct, it would mean that vector-like structures (i.e. what you use when you need the most performance) would take a performance hit going from C++14 to C++17 because of the introduction of std::launder. std::vector<T> would fall behind a simple T*.