I am wondering if there is a way of calculating the following sums over unequal-sized chunks of a tensor in PyTorch:
import torch
import numpy as np
x = torch.rand(1000,100)
y = np.unique(np.random.choice(1000, 10))
Here I have a tensor x of size (1000, 100), and I want to calculate the sum of chunks along the first axis. The chunks are split along the first axis, and y indicates the end row of each chunk; they are in general of unequal size. For example, I can do this with the following for loop:
cum_pos_lik = torch.FloatTensor(y.size, 100)  # one output row per chunk
y = np.append(0, y)                           # prepend 0 so chunk i-1 spans rows y[i-1]:y[i]
for i in range(1, y.size):
    cum_pos_lik[i - 1, :] = x[y[i - 1]:y[i], :].sum(0)
But I need this to be faster for my application. Clearly the sums over the chunks can be computed in parallel, so I am wondering if there is a simple way of doing this in PyTorch. A way of vectorizing it in NumPy would also be nice.
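For reference, the closest I have come to a vectorized version is the cumulative-sum sketch below (using the original y, before the 0 is prepended, and assuming every boundary in y is at least 1 so that y - 1 is a valid row index), but I don't know whether this is the idiomatic or fastest approach:
csum = torch.cumsum(x, 0)                        # csum[k] = x[:k+1].sum(0)
idx = torch.from_numpy(y).long() - 1             # last row of each chunk
ends = csum.index_select(0, idx)                 # running sum up to the end of each chunk
cum_pos_lik = torch.cat([ends[:1], ends[1:] - ends[:-1]], 0)
On the NumPy side, something like np.add.reduceat(x.numpy()[:y[-1]], np.append(0, y[:-1]), axis=0) seems to compute the same chunk sums, though the cumsum trick above trades a little floating-point accuracy (each sum is a difference of running totals) for the vectorization.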
Thank you!