Dimension error when building a QNN with PyTorch and PennyLane


I'm attempting to build a Quantum Neural Network (QNN) using PyTorch and PennyLane, but I run into a dimension error as soon as data passes through the quantum layer.

My PyTorch and PennyLane environment is set up and working; the error only appears when the quantum layer defined with PennyLane is evaluated. I suspect a mismatch between the shape of my input data and the input shape the quantum layer expects.

My Code:

import torch
from torch import nn
from torch.optim import Adam
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
import pennylane as qml

# Get MNIST training data and batch it
train = datasets.MNIST(root="data", download=True, train=True, transform=ToTensor())
dataset = DataLoader(train, batch_size=32)
n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights_0, weight_1):
    print(inputs)  # debug: inspect what the layer actually receives
    qml.RX(inputs[0], wires=0)
    qml.RX(inputs[1], wires=1)
    qml.Rot(*weights_0, wires=0)
    qml.RY(weight_1, wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))
weight_shapes = {"weights_0": 3, "weight_1": 1}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
print(qlayer)
class ImageClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            qlayer,
            nn.Conv2d(1, 32, (3, 3)),
            nn.ReLU(),
            nn.Conv2d(32, 64, (3, 3)),
            nn.ReLU(),
            nn.Conv2d(64, 64, (3, 3)),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (28 - 6) * (28 - 6), 10)
        )

    def forward(self, x):
        result = self.model(x)
        return result
# Instance of the neural network, loss, optimizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
clf = ImageClassifier().to(device)
opt = Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training flow 
if __name__ == "__main__":
    for epoch in range(1):  # train for 1 epoch
        for batch in dataset:
            X, y = batch
            X, y = X.to(device), y.to(device)
            yhat = clf(X)
            loss = loss_fn(yhat, y)

            # Apply backprop 
            opt.zero_grad()
            loss.backward()
            opt.step()

        print(f"Epoch:{epoch} loss is {loss.item()}")

The error I get:

RuntimeError                              Traceback (most recent call last)
<ipython-input-84-a98a57a9f607> in <cell line: 9>()
     12             X, y = batch
     13             X, y = X.to('cpu'), y.to(device)
---> 14             yhat = clf(X)
     15             loss = loss_fn(yhat, y)
     16 

10 frames
/usr/local/lib/python3.10/dist-packages/pennylane/qnn/torch.py in <listcomp>(.0)
    427 
    428         if len(x.shape) > 1:
--> 429             res = [torch.reshape(r, (x.shape[0], -1)) for r in res]
    430 
    431         return torch.hstack(res).type(x.dtype)

RuntimeError: shape '[896, -1]' is invalid for input of size 28

1 Answer

Answer from varrix:

The mismatch comes from where qlayer sits in your nn.Sequential: it is the first module, so it receives the raw image batch of shape (32, 1, 28, 28), while your qnode only consumes two input features (one RX angle per qubit). PennyLane's TorchLayer flattens everything except the last axis, so the batch becomes a (896, 28) tensor (32 × 1 × 28 = 896 rows of 28 values each); the circuit's outputs then can't be reshaped back into 896 samples, which is exactly the RuntimeError: shape '[896, -1]' is invalid for input of size 28.

Adjust the dimensions before the quantum layer: run the convolutional stack first, add a linear layer that compresses its output down to 2 features per sample (the input size the quantum layer expects), and only then apply qlayer, mapping its two expectation values to your 10 classes afterwards.
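
A minimal sketch of that rearrangement (my own untested suggestion, not a drop-in fix: the layer sizes are one possible choice, and I've changed the circuit to index the feature axis with inputs[..., i] so it still behaves when PennyLane broadcasts a whole (batch, 2) tensor through it):

import torch
from torch import nn
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights_0, weight_1):
    # index the last axis so this works both for a single (2,) sample
    # and for a broadcast (batch, 2) input
    qml.RX(inputs[..., 0], wires=0)
    qml.RX(inputs[..., 1], wires=1)
    qml.Rot(*weights_0, wires=0)
    qml.RY(weight_1, wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

qlayer = qml.qnn.TorchLayer(qnode, {"weights_0": 3, "weight_1": 1})

class ImageClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(1, 32, (3, 3)),
            nn.ReLU(),
            nn.Conv2d(32, 64, (3, 3)),
            nn.ReLU(),
            nn.Conv2d(64, 64, (3, 3)),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 22 * 22, 2),  # compress to one angle per qubit
            qlayer,                      # quantum layer now sees (batch, 2)
            nn.Linear(2, 10),            # two expectation values -> 10 logits
        )

    def forward(self, x):
        return self.model(x)

With three 3×3 convolutions the 28×28 images shrink to 22×22, hence the 64 * 22 * 22 input size of the compressing linear layer; the rest of your training loop should work unchanged.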