This is my code (Godbolt link):
```cpp
#include <atomic>
#include <cstdlib>
#include <iostream>
#include <thread>
#include <vector>

int main(int, char **) {
  volatile bool resource = true;
  std::atomic_bool stop{false};
  std::atomic_int working{0};

  std::thread controller([&]() {
    // Inform all workers that the resource is going to be destroyed.
    stop.store(true, std::memory_order_relaxed); // #1
    // Wait for all existing workers to finish.
    while (working.load(std::memory_order_relaxed)) { // #2
      std::this_thread::yield();
    }
    // Destroy the resource.
    resource = false; // #3
  });

  std::vector<std::thread> workers;
  for (int i = 0; i < 64; ++i) {
    workers.emplace_back([&]() {
      working.fetch_add(1, std::memory_order_relaxed); // #4
      if (stop.load(std::memory_order_relaxed)) { // #5
        working.fetch_sub(1, std::memory_order_relaxed); // #6
        return;
      }
      // Access the resource.
      if (!resource) { // #7
        std::cerr << "Data race detected: resource is not available."
                  << std::endl;
        abort();
      }
      working.fetch_sub(1, std::memory_order_relaxed); // #8
    });
  }

  for (auto &worker : workers) worker.join();
  controller.join();
  std::cout << "no data race detected." << std::endl;
  return 0;
}
```
The case is like this:

- Multiple worker threads access a common resource (read-only).
- One controller thread will destroy the common resource, but only after informing the workers and waiting for the existing workers to finish.

This describes a common scenario: workers may be spawned at any time, and a controller (resource manager) may decide to destroy the resource at any time. To avoid a data race, the controller needs to wait a short while for the current workers to finish and must prevent any new workers from starting.

I used several C++ atomics to get this to work, but I have no confidence in the memory ordering, even though it seems to work well on my x86 server.

- In the code above, I just use relaxed ordering, which is apparently not enough, but I don't know which ordering is right here.
- By the way, are there any websites or tools to test for memory reordering issues across different platforms?
Briefly: #1, #2, #4, and #5 need to be `seq_cst`. This is the only ordering which prevents StoreLoad reordering (moving a load before a store).

We can see that a data race could potentially occur if #2 were reordered before #1. In that case, it could happen that `working.load()` returns 0 (all existing workers have finished), but then another worker starts and immediately checks the stop flag (#5), getting the value `false`. It would then proceed to access the resource, racing with the controller thread, which is now about to destroy it.

Likewise, reordering #4 and #5 also results in a potential data race. Then the worker could see the stop flag as `false`, but not yet have incremented `working` to indicate that it has started. If the controller runs at this point, it would proceed to destroy the resource.
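Both of those reorderings are instances of the classic store-buffering (Dekker) litmus test: each thread stores to one variable and then loads the other. A minimal standalone sketch (the names `x`, `y`, `r1`, `r2` are just for illustration) shows the shape of the problem; with `relaxed` or even acquire/release orderings, both loads may observe the old value 0, whereas with `seq_cst` on all four operations that outcome is forbidden:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Store-buffering (Dekker) litmus test.
std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void thread_a() {                          // shaped like the controller: #1 then #2
  x.store(1, std::memory_order_seq_cst);
  r1 = y.load(std::memory_order_seq_cst);
}

void thread_b() {                          // shaped like a worker: #4 then #5
  y.store(1, std::memory_order_seq_cst);
  r2 = x.load(std::memory_order_seq_cst);
}

int main() {
  std::thread a(thread_a), b(thread_b);
  a.join();
  b.join();
  // With seq_cst everywhere, at least one thread must observe the other's store,
  // so r1 == 0 && r2 == 0 cannot happen. With weaker orderings it can.
  assert(r1 == 1 || r2 == 1);
}
```

If you weaken all four operations to `relaxed` and run the two threads repeatedly, the `r1 == 0 && r2 == 0` outcome can be observed even on x86, since store-load reordering is the one reordering x86 hardware performs.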
Also, #8 needs to be `release`, since it is essential that it not be reordered before #7. We would similarly argue that #2 needs to be `acquire`, but in fact it already has to be `seq_cst` as shown above, which includes all the properties of `acquire`.

#6 can stay as `relaxed`. Any thread which reaches #6 is not going to access the resource at all, and so it cannot be involved in a data race.

If I get some time later, I will add a formal proof that these orderings would eliminate the data race.
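For concreteness, here is the program from the question with those orderings applied to the marked operations (everything else unchanged); treat it as a sketch of the recommendation above rather than a formally verified version:

```cpp
#include <atomic>
#include <cstdlib>
#include <iostream>
#include <thread>
#include <vector>

int main(int, char **) {
  volatile bool resource = true;
  std::atomic_bool stop{false};
  std::atomic_int working{0};

  std::thread controller([&]() {
    stop.store(true, std::memory_order_seq_cst);        // #1: seq_cst
    while (working.load(std::memory_order_seq_cst)) {   // #2: seq_cst (subsumes acquire)
      std::this_thread::yield();
    }
    resource = false;                                   // #3: destroy the resource
  });

  std::vector<std::thread> workers;
  for (int i = 0; i < 64; ++i) {
    workers.emplace_back([&]() {
      working.fetch_add(1, std::memory_order_seq_cst);  // #4: seq_cst
      if (stop.load(std::memory_order_seq_cst)) {       // #5: seq_cst
        working.fetch_sub(1, std::memory_order_relaxed); // #6: relaxed is enough here
        return;
      }
      if (!resource) {                                  // #7: access the resource
        std::cerr << "Data race detected: resource is not available." << std::endl;
        abort();
      }
      working.fetch_sub(1, std::memory_order_release);  // #8: must not move before #7
    });
  }

  for (auto &worker : workers) worker.join();
  controller.join();
  std::cout << "no data race detected." << std::endl;
  return 0;
}
```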