Changing TensorFlow operation device placement during runtime?


As far as I can see, TensorFlow is designed to have fully static device placement across a single tf.Session.run(). Is there a known, sensible location to insert code that changes an operation's device placement on the fly?

I'm aware of the static placement methods at the Python level, but I'm looking for something at the C++ level so that I can do something akin to load balancing.
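For concreteness, this is the kind of static placement being contrasted against: devices are pinned when the graph is built and do not change during a run. A minimal sketch using the C++ client API's Scope::WithDevice, which mirrors the Python-level tf.device context manager; the device strings and the toy MatMul graph are placeholders, not part of the question.

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;

  Scope root = Scope::NewRootScope();

  // Pin the constant to the CPU and the matmul to the first GPU; this
  // placement is baked into the graph before the session ever runs it.
  auto cpu = root.WithDevice("/device:CPU:0");
  auto gpu = root.WithDevice("/device:GPU:0");
  auto a = ops::Const(cpu, {{1.f, 2.f}, {3.f, 4.f}});
  auto b = ops::MatMul(gpu, a, a);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({b}, &outputs));
  return 0;
}
```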

As an example, let's say I want TensorFlow to schedule operations to the CPU and GPU in an alternating fashion (hardly ideal, I know). How might I do this at runtime, so that as operation dependencies are resolved and more operations are scheduled, the device in an operation's execution environment is updated to a different device? Would this best be done by using the DeviceMgr to change the execution device for the environment of a given operation in ExecutorState::Process(TaggedNode tagged_node, int64 scheduled_usec), right before the operation is launched (line 1651 of executor.cc)? Or am I misunderstanding when an operation is scheduled for execution through XLA, and what the latest point is at which I can still change its device placement?
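To make the intended alternating policy concrete, here is a minimal, self-contained sketch of a round-robin device chooser in plain C++. It is illustrative only: the Device struct and the device names stand in for TensorFlow's internal Device* objects (e.g. the ones a DeviceMgr can enumerate), and where such a policy could actually be consulted inside ExecutorState::Process() is exactly what this question is asking.

```cpp
// Illustrative round-robin "placer"; not TensorFlow internals.
#include <atomic>
#include <iostream>
#include <string>
#include <vector>

struct Device {       // stand-in for a TensorFlow Device*
  std::string name;   // e.g. "/device:CPU:0" or "/device:GPU:0"
};

class RoundRobinPlacer {
 public:
  explicit RoundRobinPlacer(std::vector<Device*> devices)
      : devices_(std::move(devices)) {}

  // Returns the next device in a simple alternating order. The executor
  // processes ready nodes on multiple threads, so the counter is atomic.
  Device* Next() {
    const size_t i = counter_.fetch_add(1, std::memory_order_relaxed);
    return devices_[i % devices_.size()];
  }

 private:
  std::vector<Device*> devices_;
  std::atomic<size_t> counter_{0};
};

int main() {
  Device cpu{"/device:CPU:0"}, gpu{"/device:GPU:0"};
  RoundRobinPlacer placer({&cpu, &gpu});

  // Pretend these are ops being dequeued as their dependencies resolve; each
  // would have its execution environment pointed at the chosen device just
  // before launch (the hook point the question is looking for).
  for (const std::string& op : {"MatMul", "Add", "Relu", "Softmax"}) {
    std::cout << op << " -> " << placer.Next()->name << "\n";
  }
  return 0;
}
```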
