How to simulate an order queue in Repast across multiple ranks (processor cores)?


I am just beginning with Repast for Python (repast4py). I want to simulate a small order-handling process with multiple steps.

Let's say there are 1000 orders with different placement timestamps. There are 3 steps after an order is received: picking (10-15 mins), packing (8-12 mins), and shipping (5-10 mins). Each step has a dedicated number of workers, let's say 10 for picking, 5 for packing, and 2 for shipping.

All the workers are independent and can work in parallel. Once a worker is done with the assigned activity for an order, he can move on to the next order and process it.
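For concreteness, the setup described above could be modeled along these lines (the `Order` class, the names, and the 8-hour placement window are illustrative assumptions, not part of any repast4py API):

```python
import random
from dataclasses import dataclass

# Parameters taken from the description above; the placement window
# (0-480 minutes, i.e. an 8-hour day) is an assumption for illustration.
STEP_DURATIONS = {"picking": (10, 15), "packing": (8, 12), "shipping": (5, 10)}
WORKERS_PER_STEP = {"picking": 10, "packing": 5, "shipping": 2}

@dataclass
class Order:
    order_id: int
    placed_at: float            # placement timestamp, in minutes
    current_step: str = "picking"

def step_duration(step: str) -> float:
    """Draw a random service time (minutes) for one processing step."""
    lo, hi = STEP_DURATIONS[step]
    return random.uniform(lo, hi)

orders = [Order(i, placed_at=random.uniform(0, 480)) for i in range(1000)]
```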

How can I create a queue variable that is accessible to all the processes in repast4py?

I can't find any logistics-based examples for repast4py. I have explored other simulation libraries like SimPy, but they are not scalable for large problems.

In the Random Walk example in repast4py documentation, we run the program using

mpirun -n 4 python rndwalk.py random_walk.yaml 

This will run the program on multiple ranks, but they all share a SharedGrid to interact. Is there something similar for creating shared queues for each step of the process, like an order queue, picking queue, packing queue etc., that can be accessed by all workers?

1 Answer

Answered by Nick Collier

Without knowing more of the details, I think you'll need to select a particular rank (e.g., rank 0) to manage the queues and synchronize them across processes. Rank 0 could create a queue for each rank from the full queue and use mpi4py to share those with itself and the other ranks. At some appropriate interval the full queue could be updated from the rank queues and new rank queues created. See the mpi4py documentation for how to send and recv Python objects between ranks. For example,

https://mpi4py.readthedocs.io/en/stable/tutorial.html#collective-communication

Broadcast, scatter, gather, etc., are MPI collective communication concepts. This is a good introduction to them: https://mpitutorial.com/tutorials/mpi-broadcast-and-collective-communication/, although the examples are in C.

Lastly, repast4py runs just fine on a single process (mpirun -n 1), in which case there's no need to share queues. So, if your simulation runs fast enough on a single process, you'd avoid the issue entirely.
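On a single process, the whole multi-stage queue can even be approximated with the standard library alone. The sketch below (an assumption of mine, not repast4py code) processes orders in placement order and models each stage's worker pool as a heap of "next free" times; it ignores the possibility that a later order overtakes an earlier one mid-pipeline, so treat it as a rough baseline:

```python
import heapq
import random

# Stage name, worker count, (min, max) service minutes -- from the question.
STAGES = [("picking", 10, (10, 15)),
          ("packing", 5, (8, 12)),
          ("shipping", 2, (5, 10))]

def simulate(n_orders, seed=1):
    rng = random.Random(seed)
    # Each pool is a heap of times at which its workers next become free.
    pools = {name: [0.0] * workers for name, workers, _ in STAGES}
    placements = sorted(rng.uniform(0, 480) for _ in range(n_orders))
    finish_times = []
    for placed in placements:
        t = placed
        for name, _, (lo, hi) in STAGES:
            pool = pools[name]
            free = heapq.heappop(pool)       # earliest-available worker
            start = max(t, free)             # wait if all workers are busy
            t = start + rng.uniform(lo, hi)  # service time for this step
            heapq.heappush(pool, t)
        finish_times.append(t)
    return placements, finish_times

placed, finished = simulate(1000)
```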