How to efficiently make a mini-batch of images in PyTorch?

I am trying to run a forward pass through a pre-trained ResNet model in PyTorch, but I am having trouble creating the 4-d tensor of mini-batches. Can someone please tell me the proper way to do that?

EDIT: I changed the code and it works now. However, I still think there should be a more efficient way of doing this.

Here's my code:

import glob
import json
import pickle
import shutil

import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image
from torch.autograd import Variable
from tqdm import tqdm

batch_size = 128
im_size = 299

# ImageNet normalization statistics expected by the pre-trained model
normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225]
)
# resize, center-crop and convert each PIL image to a normalized tensor
# (transforms.Scale was renamed transforms.Resize in newer torchvision)
preprocess = transforms.Compose([
    transforms.Scale(im_size),
    transforms.CenterCrop(im_size),
    transforms.ToTensor(),
    normalize
])


model = models.resnet50(pretrained=True)
model.eval()  # inference only; puts batch-norm layers in evaluation mode

# make_batch is a user-defined helper that splits the list of image paths
# (imgs) into chunks of size batch_size
batches = make_batch(imgs, batch_size)

dtype = torch.FloatTensor
# preallocate one (batch_size, 3, im_size, im_size) tensor and reuse it for every mini-batch
tmp = Variable(torch.randn(batch_size, 3, im_size, im_size).type(dtype), requires_grad=False)


for batch in tqdm(batches):
    try:
        # open the images in this batch, then copy each preprocessed image
        # into the preallocated batch tensor before the forward pass
        data = [Image.open(img) for img in batch]
        for idx, item in enumerate(data):
            tmp[idx] = preprocess(item)
        batch_result = model(tmp)
    except Exception as x:
        print(x)

1 Answer

You can load a dataset from an image folder with dataset = torchvision.datasets.ImageFolder(...). After that, wrap it in torch.utils.data.DataLoader(dataset, batch_size=batchSize) to specify the mini-batch size (and options such as shuffling and parallel loading) for further processing.
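For reference, here is a minimal sketch of that approach, written against the older torchvision/PyTorch API used in the question (Variable, transforms.Scale). The images/ directory is a placeholder: ImageFolder expects one sub-folder per class inside it.

import torch
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
from torch.autograd import Variable

im_size = 299
batch_size = 128

# same preprocessing pipeline as in the question
preprocess = transforms.Compose([
    transforms.Scale(im_size),
    transforms.CenterCrop(im_size),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "images/" is a placeholder; ImageFolder expects images/<class_name>/<file>.jpg
dataset = datasets.ImageFolder("images/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                     shuffle=False, num_workers=4)

model = models.resnet50(pretrained=True)
model.eval()

for inputs, labels in loader:
    # inputs is already a (batch_size, 3, im_size, im_size) tensor
    batch_result = model(Variable(inputs, volatile=True))

The DataLoader takes care of batching, and with num_workers > 0 it loads and preprocesses the images in background worker processes, which is generally more efficient than filling a preallocated tensor by hand as in the question.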