I'm new to PyTorch and want to find the accuracy of each epoch. I know that accuracy is the number of correct predictions divided by the total number of samples, but I don't know how to integrate this into my code:
for epoch in range(1):
    epoch_losses = []
    model.train()
    for step, (inputs, labels) in enumerate(train_dataloader):
        optimizer.zero_grad()
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        epoch_losses.append(loss.item())
    if epoch % 1 == 0:  # For every epoch
        print(f">>> Epoch {epoch+1} train loss: ", np.mean(epoch_losses))
        epoch_losses = []
        model.eval()
        epoch_losses = []
        for step, (inputs, labels) in enumerate(test_dataloader):
            # Unpack the tuple
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            epoch_losses.append(loss.item())
        print(f">>> Epoch {epoch+1} test loss: ", np.mean(epoch_losses))
To calculate the accuracy of your Vision Transformer model, you need to keep track of the number of correct predictions during both training and testing epochs. Then, you can divide the total number of correct predictions by the total number of samples to get the accuracy.
Here's how you can modify your code to calculate accuracy during each epoch:
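A minimal sketch of that modification, reusing your variable names (model, criterion, optimizer, device, train_dataloader, test_dataloader) and the counters described below; the torch.no_grad() context during evaluation is an addition, not part of your original snippet:

import numpy as np
import torch

for epoch in range(1):
    # ---- training ----
    model.train()
    epoch_losses = []
    correct_predictions = 0
    total_samples = 0
    for step, (inputs, labels) in enumerate(train_dataloader):
        optimizer.zero_grad()
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        epoch_losses.append(loss.item())

        # Accuracy bookkeeping: the class with the highest score is the prediction
        _, predicted = torch.max(outputs, 1)
        correct_predictions += torch.sum(predicted == labels).item()
        total_samples += labels.size(0)

    train_accuracy = correct_predictions / total_samples
    print(f">>> Epoch {epoch+1} train loss: ", np.mean(epoch_losses))
    print(f">>> Epoch {epoch+1} train accuracy: ", train_accuracy)

    # ---- evaluation ----
    model.eval()
    epoch_losses = []
    correct_predictions = 0
    total_samples = 0
    with torch.no_grad():  # no gradients needed during evaluation
        for step, (inputs, labels) in enumerate(test_dataloader):
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            epoch_losses.append(loss.item())

            _, predicted = torch.max(outputs, 1)
            correct_predictions += torch.sum(predicted == labels).item()
            total_samples += labels.size(0)

    test_accuracy = correct_predictions / total_samples
    print(f">>> Epoch {epoch+1} test loss: ", np.mean(epoch_losses))
    print(f">>> Epoch {epoch+1} test accuracy: ", test_accuracy)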
In this code:
- correct_predictions keeps track of the number of correct predictions.
- total_samples keeps track of the total number of samples.
- torch.max(outputs, 1) returns the index of the maximum value along dimension 1 (the class dimension), i.e. the predicted class for each sample.
- torch.sum(predicted == labels).item() counts the number of correct predictions in a batch.
Finally, you compute the accuracy by dividing the total number of correct predictions by the total number of samples.
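As a small illustration of those two calls, here is a hypothetical batch of three samples over four classes (the outputs and labels tensors below are made-up values, not from your model):

import torch

# hypothetical model outputs (logits) for 3 samples and 4 classes
outputs = torch.tensor([[0.1, 2.0, 0.3, 0.5],
                        [1.5, 0.2, 0.1, 0.4],
                        [0.0, 0.1, 0.2, 3.0]])
labels = torch.tensor([1, 0, 2])

_, predicted = torch.max(outputs, 1)             # predicted classes: tensor([1, 0, 3])
correct = torch.sum(predicted == labels).item()  # 2 correct predictions
accuracy = correct / labels.size(0)              # 2 / 3 ≈ 0.67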