PyTorch: sending inputs/targets to device


Currently using the pycocotools 2.0 library.

My train_loader is:

train_loader, test_loader = get_train_test_loader('dataset', batch_size=16, num_workers=4)

However, the training code:

        for i, data in enumerate(train_loader, 0):
            images, targets = data
            images = images.to(device)
            targets = targets.to(device)

results in an error. The variables data, images, and targets are all of class tuple:

Traceback (most recent call last):
  File "train.py", line 40, in <module>
    images = images.to(device)
AttributeError: 'tuple' object has no attribute 'to'

How can I properly send these to the CUDA device?

Edit:

Calling images[0].to(device) works fine. How can I send the rest?
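Since a Python tuple has no .to method, a common workaround (a sketch, assuming every element of the tuple is itself a tensor) is to move each element to the device individually:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical batch: a tuple of image tensors, as a custom collate_fn
# (e.g. one used for detection-style datasets) might produce
images = (torch.zeros(3, 32, 32), torch.zeros(3, 32, 32))

# A tuple has no .to(), so move each tensor inside it individually
images = tuple(img.to(device) for img in images)
```

If the targets are dictionaries (as with COCO-style detection targets), the same idea applies per value, e.g. {k: v.to(device) for k, v in t.items()} for each target t.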

Best answer:

You should unpack as many items in the for loop as your dataset returns at each iteration. Here is an example to illustrate the point:

Consider the following dataset:

from torch.utils.data import Dataset

class CustomDataset(Dataset):
    def __getitem__(self, index):
        ...
        return a, b, c

Notice that it returns 3 items at each iteration.

Now let us make a dataloader out of this:

from torch.utils.data import DataLoader
train_dataset = CustomDataset()
train_loader = DataLoader(train_dataset, batch_size=50, shuffle=True)

Now, when we iterate over train_loader, we should unpack three items in the for loop:

for i, (a_tensor, b_tensor, c_tensor) in enumerate(train_loader):
   ...

Inside the for loop, a_tensor, b_tensor, and c_tensor will be tensors whose first dimension is 50 (the batch_size), except possibly in the last batch if the dataset size is not divisible by 50.
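Putting the pieces together, here is a minimal runnable sketch (with made-up shapes and a dataset length of 100) confirming that each unpacked variable is batched along the first dimension:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __len__(self):
        return 100

    def __getitem__(self, index):
        # Three items per sample, e.g. an image, a label, and a weight
        a = torch.zeros(3, 8, 8)
        b = torch.tensor(index % 10)
        c = torch.tensor(1.0)
        return a, b, c

train_loader = DataLoader(CustomDataset(), batch_size=50, shuffle=True)

for i, (a_tensor, b_tensor, c_tensor) in enumerate(train_loader):
    # The default collate function stacks samples along a new first dimension
    print(a_tensor.shape, b_tensor.shape, c_tensor.shape)
    # torch.Size([50, 3, 8, 8]) torch.Size([50]) torch.Size([50])
    break
```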

So, based on the example you have given, it seems that whatever dataset class your get_train_test_loader function uses has a problem. It is generally better to instantiate the dataset separately and then create the DataLoader from it, rather than use a blanket function like the one you have.
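To illustrate that advice (TensorDataset is only a stand-in for your real dataset), instantiating the dataset yourself lets you inspect a single sample and see exactly how many items the loop must unpack:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for your dataset; building it explicitly lets you
# inspect one sample before constructing the DataLoader
dataset = TensorDataset(torch.zeros(100, 3, 8, 8),
                        torch.zeros(100, dtype=torch.long))

sample = dataset[0]
print(type(sample), len(sample))  # <class 'tuple'> 2 -> unpack two names

loader = DataLoader(dataset, batch_size=16, shuffle=True)
for images, labels in loader:
    # Each unpacked item is a tensor, so .to(device) now works
    images = images.to(device)
    labels = labels.to(device)
    break
```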