Is bit_cast-ing arrays of one type to another needed to avoid UB?
For example, I have a function
void func(std::vector<int32_t>& dest, std::vector<std::byte>& src, const long stride){
    const auto ptr = reinterpret_cast<std::byte *>(dest.data());
    for (std::size_t i = 0; i < src.size(); ++i) {
        const auto t = ptr + 4 * i;
        t[0] = src[i];
        t[1] = src[i + stride];
        t[2] = src[i + 2 * stride];
        t[3] = src[i + 3 * stride];
    }
}
Do I need to use bit_cast instead?
void func2(std::vector<int32_t>& dest, std::vector<std::byte>& src, const long stride){
    for (std::size_t i = 0; i < src.size(); ++i) {
        alignas(std::int32_t) std::array<std::byte, 4> t;
        t[0] = src[i];
        t[1] = src[i + stride];
        t[2] = src[i + 2 * stride];
        t[3] = src[i + 3 * stride];
        dest[i] = std::bit_cast<std::int32_t>(t);
    }
}
Or use memcpy?
void func3(std::vector<int32_t>& dest, std::vector<std::byte>& src, const long stride){
    for (std::size_t i = 0; i < src.size(); ++i) {
        alignas(std::int32_t) std::byte t[4];
        t[0] = src[i];
        t[1] = src[i + stride];
        t[2] = src[i + 2 * stride];
        t[3] = src[i + 3 * stride];
        std::memcpy(&dest[i], t, sizeof t);
    }
}
From my tests, the bit_cast and memcpy versions seem to have some overhead, and the generated asm differs, although we would expect it to be the same for scalar types:
https://godbolt.org/z/Y1W585EWY
I don't know if there is UB in there, but if you can use an unsigned type, you can convert the four per-byte assignments into a single shift-and-OR expression per element.
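Something like this minimal sketch (my reconstruction, not the original snippet; it assumes a little-endian target, an unsigned std::uint32_t destination, unsigned bytes in src instead of std::byte, and a non-negative stride):

#include <cstddef>
#include <cstdint>
#include <vector>

// Compose each 32-bit value from four bytes with shifts and ORs instead of
// storing byte-by-byte. Byte k lands in bits [8k, 8k+8), matching the
// little-endian layout of the byte-store versions above.
void func_unsigned(std::vector<std::uint32_t>& dest,
                   const std::vector<std::uint8_t>& src, const long stride) {
    for (std::size_t i = 0; i < src.size(); ++i) {
        // Note: as pointed out at the end of this answer, the caller must
        // keep i + 3 * stride inside src to avoid reading out of bounds.
        dest[i] = static_cast<std::uint32_t>(src[i])
                | (static_cast<std::uint32_t>(src[i + stride])     << 8)
                | (static_cast<std::uint32_t>(src[i + 2 * stride]) << 16)
                | (static_cast<std::uint32_t>(src[i + 3 * stride]) << 24);
    }
}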
If you need a speedup, you can vectorize the operation: load the four byte streams a tile at a time and then join them the same way, but in vectorized form.
Maybe you don't even need explicit intrinsics. Just try simple loops working on arrays (vectors) of a SIMD-width number of elements, but keep in mind that encapsulation generally adds code bloat, so you may need to do it on bare plain arrays on the stack (see the sketch below).
Some compilers have a default minimum number of loop iterations before they vectorize, so you should test different tile widths or apply a compiler flag that lets the compiler vectorize even small loops.
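As an illustration of that idea, here is a hypothetical tiled version (not the linked library's code): plain inner loops over small stack arrays that the compiler can auto-vectorize. TILE is just a tuning knob (16 matches one ZMM register of 32-bit lanes on AVX512), and bounds handling is left to the caller.

#include <cstddef>
#include <cstdint>

constexpr std::size_t TILE = 16;

void func_tiled(std::uint32_t* dest, const std::uint8_t* src,
                const long stride, const std::size_t n) {
    for (std::size_t base = 0; base + TILE <= n; base += TILE) {
        std::uint32_t b0[TILE], b1[TILE], b2[TILE], b3[TILE];
        // Gather the four byte streams for this tile (contiguous loads).
        for (std::size_t k = 0; k < TILE; ++k) {
            b0[k] = src[base + k];
            b1[k] = src[base + k + stride];
            b2[k] = src[base + k + 2 * stride];
            b3[k] = src[base + k + 3 * stride];
        }
        // Join them with shifts and ORs, same as the scalar version.
        for (std::size_t k = 0; k < TILE; ++k)
            dest[base + k] = b0[k] | (b1[k] << 8) | (b2[k] << 16) | (b3[k] << 24);
    }
    // A scalar tail loop for the remaining n % TILE elements would go here.
}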
Here is a sample solution from a toy auto-vectorized SIMD library: https://godbolt.org/z/qMWsbsrG8
The timings in its output include all the stack-array allocation + kernel-launch overheads.
For 10000 operations (https://godbolt.org/z/Mz1K75Kj1) it takes 2.4 nanoseconds per operation.
Here is the 1000-operation version, but using only 16 work-items (a single ZMM register on AVX512): https://godbolt.org/z/r9GTfffG8
This is 1.25 nanoseconds per operation (at least on the godbolt.org server). On an FX8150, with a narrower SIMD width, it takes ~6.9 nanoseconds per operation. If you write a version without the encapsulation, it should generate less code bloat and be faster.
Lastly, use multiple iterations for benchmarking: https://godbolt.org/z/dn9vj9seP
This is 0.17 nanoseconds per operation. 23 GB/s is not bad for a client-shared server's RAM plus a simple loop. Explicitly using AVX intrinsics with no encapsulation should get you the maximum bandwidth of the L1/L2/L3 caches or RAM (depending on dataset size). But beware: if you do this on a real production server shared with other clients, your neighbors will feel the turbo underclock during the AVX512-accelerated computations (unless integer work doesn't count as a heavy load for the AVX512 pipelines).
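For reference, a rough sketch of what an explicit-intrinsics version could look like (my own illustration, assuming AVX2 rather than AVX512, a little-endian target, n a multiple of 8, and enough readable bytes past i + 3 * stride; func_avx2 is just an illustrative name):

#include <immintrin.h>
#include <cstddef>
#include <cstdint>

void func_avx2(std::int32_t* dest, const unsigned char* src,
               const long stride, const std::size_t n) {
    for (std::size_t i = 0; i < n; i += 8) {
        // Load 8 consecutive bytes from each of the four byte streams and
        // zero-extend each byte into a 32-bit lane.
        __m256i b0 = _mm256_cvtepu8_epi32(_mm_loadl_epi64(
            reinterpret_cast<const __m128i*>(src + i)));
        __m256i b1 = _mm256_cvtepu8_epi32(_mm_loadl_epi64(
            reinterpret_cast<const __m128i*>(src + i + stride)));
        __m256i b2 = _mm256_cvtepu8_epi32(_mm_loadl_epi64(
            reinterpret_cast<const __m128i*>(src + i + 2 * stride)));
        __m256i b3 = _mm256_cvtepu8_epi32(_mm_loadl_epi64(
            reinterpret_cast<const __m128i*>(src + i + 3 * stride)));
        // Combine: byte k goes into bits [8k, 8k+8) of each 32-bit result,
        // matching the little-endian byte-store versions.
        __m256i v = _mm256_or_si256(
            _mm256_or_si256(b0, _mm256_slli_epi32(b1, 8)),
            _mm256_or_si256(_mm256_slli_epi32(b2, 16),
                            _mm256_slli_epi32(b3, 24)));
        _mm256_storeu_si256(reinterpret_cast<__m256i*>(dest + i), v);
    }
}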
Compilers behave differently here: clang generates fairly compact code for this, while gcc produces much more code bloat. I don't know why.
(You also read out of bounds in the src vector: the loop runs up to the last element and then adds stride, 2 * stride and 3 * stride to the index.)