Our model takes sequentially connected data as input (consecutive samples depend on one another). When using TensorFlow multi-GPU parallel processing, how can we configure the dataset so that the data fed to each particular GPU stays contiguous?
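For concreteness, the kind of input we mean looks roughly like this (a hypothetical sketch, not our real pipeline; `signal` and the window size are placeholders):

```python
import tensorflow as tf

# Hypothetical example of "connected" data: sliding windows over one
# continuous signal. Window N+1 overlaps window N, so the windows one
# GPU processes must stay in their original, unbroken order.
signal = tf.range(100, dtype=tf.float32)
windows = tf.data.Dataset.from_tensor_slices(signal).window(
    8, shift=1, drop_remainder=True)
# Flatten each window (itself a small Dataset) into a single tensor of 8 values.
dataset = windows.flat_map(lambda w: w.batch(8))
```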
We have tried transforming the dataset itself in a variety of ways. We also tried `tf.distribute.MirroredStrategy`, but it did not work for us, because there is no way to specify which data gets assigned to which GPU. In other words, it does not seem to guarantee the continuity of the data each GPU receives.
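Here is a minimal sketch of our `MirroredStrategy` attempt, reduced to an integer range so the per-GPU split is easy to see (the `step` function only prints which replica received which values; it is not our actual training step):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

GLOBAL_BATCH_SIZE = 8
dataset = tf.data.Dataset.range(32).batch(GLOBAL_BATCH_SIZE)

# MirroredStrategy splits every global batch across the replicas: with 2 GPUs,
# the batch [0..7] becomes [0..3] on GPU 0 and [4..7] on GPU 1. GPU 0 then
# sees [0..3], [8..11], [16..19], ... -- an interleaved, non-contiguous stream.
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def step(x):
    replica_id = tf.distribute.get_replica_context().replica_id_in_sync_group
    tf.print("replica", replica_id, "received", x)

for batch in dist_dataset:
    strategy.run(step, args=(batch,))
```

As the printed output shows, each GPU receives a fragment of every global batch rather than its own continuous slice of the dataset, which is exactly the behavior we want to avoid.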