There is a similar question available, but its answer is not relevant to my case.
This code wraps the model for multiple GPUs, but how do I transfer the data to the GPUs?
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model, device_ids=[0, 1])
My question is: what is the replacement for
X_batch, y_batch = X_batch.to(device), y_batch.to(device)
What should device be set to in the DataParallel case?
You don't need to transfer your data manually!
The nn.DataParallel wrapper will do that for you, since its purpose is to distribute the data equally across the devices provided at initialization.

In the following snippet, I have a straightforward setup showing how a data-parallel wrapper initialized with 'cuda:0' transfers the provided CPU input to the desired device (i.e. 'cuda:0') and returns the output on the same device:
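(The model, tensor shapes, and device ids below are placeholder choices for a self-contained sketch, not fixed by the original setup.)

import torch
import torch.nn as nn

# Placeholder model; any nn.Module wrapped the same way behaves identically.
model = nn.Linear(10, 2)

if torch.cuda.is_available():
    # The parameters must already live on device_ids[0] before wrapping.
    model = model.to('cuda:0')
    model = nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))

    # The batch is created on the CPU; DataParallel's scatter step moves the
    # chunks to the GPUs listed in device_ids, so no manual .to(device) is needed.
    X_batch = torch.randn(8, 10)
    output = model(X_batch)

    print(output.device)  # cuda:0 -- the results are gathered on device_ids[0]

Note that only the tensors passed through the wrapper are scattered; a target such as y_batch, if the loss is computed outside the wrapped module, still has to be moved to 'cuda:0' (the output device).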