The Fenwick tree data structure requires a member function `read`. For an index `idx`, `read` must compute several indices `idx[0]`, `idx[1]`, `idx[2]`, ..., `idx[last]`. Each `idx[i]` is the value that results when you set the `i` least significant non-zero bits of `idx` to 0.
For example, if `idx = 13 = ...00001101`, then `idx[0] = 13`, `idx[1] = 12 = ...00001100`, and `idx[2] = 8 = ...00001000`.
If we assume, as most online sources on Fenwick trees do, that signed integers are represented in two's complement, `idx[i]` is easy to compute. In that case, `idx[i+1] = idx[i] - (idx[i] & -idx[i]);`. Justification for this line can be found here: Fenwick trees.
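For concreteness, here is a minimal sketch of how `read` typically uses this decomposition, assuming a 1-indexed Fenwick tree of prefix sums stored in a `std::vector<int>` named `tree` (that layout and every name other than `read` are illustrative assumptions, not taken from the question):

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of a Fenwick tree prefix-sum query, assuming signed indices
// and a two's-complement representation (the case discussed above).
struct Fenwick {
    std::vector<int> tree;  // tree[0] is unused; valid indices are 1..n

    explicit Fenwick(std::size_t n) : tree(n + 1, 0) {}

    // Returns the sum of elements 1..idx by visiting idx[0], idx[1], ...,
    // each obtained by clearing the lowest set bit of the previous index.
    int read(int idx) const {
        int sum = 0;
        while (idx > 0) {
            sum += tree[idx];
            idx -= idx & -idx;  // idx[i] -> idx[i+1]
        }
        return sum;
    }
};
```

With this layout, `read(13)` visits indices 13, 12, and 8, matching the example above.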
Prior to C++20, the result of `idx[i] & -idx[i]` was implementation-defined, because the representation of negative signed integers was implementation-defined, but it appears C++20 now requires two's complement. That leads to the following questions:
1. Is it correct to say that, prior to C++20, `idx[i+1] = idx[i] - (idx[i] & -idx[i]);` is "incorrect" code?
2. With C++20, is `idx[i+1] = idx[i] - (idx[i] & -idx[i]);` now correct?
3. If the answer to (2) is "no", what is an efficient way to compute `idx[]`? If `idx` is unsigned, can I just do `idx[i+1] = idx[i] - (((~idx[i]) + 1) & idx[i])`?
In fact, if `idx` is unsigned you can just do `idx[i+1] = idx[i] - (idx[i] & -idx[i]);`, so you can get the best of both worlds. Contrary to popular myth, negating unsigned integers is well-defined, and moreover it does exactly what you need it to do here.
This has always been safe, and it is also convenient: there is no need to rewrite the negation as `(~idx[i]) + 1`.
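To make that concrete, here is a small self-contained check (the specific values are illustrative, not from the question). Negation of an unsigned operand is performed modulo 2^N, so `-idx` has the same bit pattern as `~idx + 1`, and `idx & -idx` isolates the lowest set bit without relying on any signed representation:

```cpp
#include <cassert>

int main() {
    unsigned idx = 13;  // ...00001101

    // Unsigned negation is reduced modulo 2^N, so -idx and ~idx + 1 are the
    // same value, and both forms of the "lowest set bit" expression agree.
    assert((idx & -idx) == (idx & (~idx + 1u)));
    assert((idx & -idx) == 1u);          // lowest set bit of 13 is 1

    assert(idx - (idx & -idx) == 12u);   // idx[1] = 12 = ...00001100
    assert(12u - (12u & -12u) == 8u);    // idx[2] = 8  = ...00001000
}
```

So the same `idx & -idx` expression works for signed indices (under C++20) and unsigned indices alike.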
From the C++ specification: