I have had success using multithreading in Julia for simulating a graph with evolving values and would like to implement a similar scheme in Python.
Because the value at a node at time t may depend on the value at any other node at time t-1, the workers must be in agreement at the end of every time step, but they can update elements independently within a time step, given shared information from the previous step. I am not sure what Julia is doing under the hood, but multithreading is very effective for this purpose. I have implemented a hacky method to achieve the same thing in Python (below), but it is unsurprisingly much slower to repeatedly create and destroy processes at every time step, and reusing them does not seem to be allowed. Before I dive into the nitty-gritty of multithreading in Python, perhaps someone could advise on the viability of this approach. Does it make sense to use Python's multiprocessing (or another library) to parallelize graph updates in this way?
import multiprocessing

T = 1000
for t in range(T):
    # create one process per node for this time step
    thrds = []
    for n, node in enumerate(net.nodes):
        thrds.append(
            multiprocessing.Process(
                target=update_node,
                args=(n, node, t),
            )
        )
    # start all updates for step t, then wait for every one to finish
    for thrd in thrds:
        thrd.start()
    for thrd in thrds:
        thrd.join()
    del thrds
The simplest way to achieve that would be to use a multiprocessing.Barrier object to keep all of the workers in lock-step with each other. You specify a number of parties, N, when you construct a Barrier object, and then each worker can call barrier.wait(). The first N-1 callers will be blocked until the Nth caller calls wait(); then all of the wait() calls return, the barrier is automatically reset, and it is ready to be used again. Try running this to see what I mean:
It works equally well with ctx = mp.get_context('fork'), so you should be able to use it on pretty much any host OS.