I want to run this permutations function on GPU.
function* permute(permutation) {
  var length = permutation.length;
  var c = Array(length).fill(0);
  var i = 1;
  var k;
  var p;
  yield permutation.slice();
  while (i < length) {
    if (c[i] < i) {
      k = i % 2 && c[i];
      p = permutation[i];
      permutation[i] = permutation[k];
      permutation[k] = p;
      ++c[i];
      i = 1;
      yield permutation.slice();
    } else {
      c[i] = 0;
      ++i;
    }
  }
}
To run this on the GPU I tried to use https://gpu.rocks/ but I do not understand their example of how to set up the threads. How should I write this permutations function so I can run it in the browser on the GPU?
GPU.js supports only a few operations, because the kernel code is translated to shaders. In my tests you could not use it to produce outputs longer than 100k elements.
The supported operations are very limited; you cannot assign to array elements, for instance, and this is due to the nature of the underlying computation unit. You also have to be aware of branch divergence.
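For orientation, here is a minimal sketch of the thread model GPU.js imposes: each thread computes exactly one output element, indexed by `this.thread.x`. The `runOnCpu` helper below is my own (not part of gpu.js); it simulates the same one-output-per-thread pattern in plain JS so the snippet is self-contained, with the real gpu.js wiring shown in a comment.

```javascript
// A kernel body in gpu.js style: it may only RETURN one number per thread,
// selected by this.thread.x -- it cannot write into an output array.
function kernelBody() {
  return this.thread.x * 2;
}

// With gpu.js this would be wired up roughly as:
//   const gpu = new GPU();
//   const kernel = gpu.createKernel(kernelBody).setOutput([4]);
//   kernel(); // one thread per output element

// Hypothetical CPU simulation of the same thread model, for illustration:
function runOnCpu(body, size) {
  const out = new Float32Array(size);
  for (let x = 0; x < size; x++) {
    // Each "thread" sees only its own index via this.thread.x.
    out[x] = body.call({ thread: { x: x } });
  }
  return out;
}
```

This is why the permutation generator above cannot be ported directly: it mutates an array in place and yields intermediate states, while a kernel thread can only compute and return a single value from its indices.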
Two steps were required to implement your function in a way that it could be used in a kernel. First, a function that, given an index j, can produce the j-th permutation. Second, a function that, given a permutation index and a position within that permutation, returns the element at that position. Notice that this function must be called once for each element in each permutation.
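The two steps can be sketched in plain JavaScript as follows. This is my own illustration (the function names are not from gpu.js): step 1 maps an index to a permutation via the factorial number system, so permutations come out in lexicographic order rather than Heap's-algorithm order; step 2 shows the single-element lookup each thread would perform. Note that an actual gpu.js kernel could not use `splice` or array mutation, so this logic would have to be rewritten with arithmetic only.

```javascript
// Step 1: produce the k-th permutation of [0..n-1] (lexicographic order)
// using the factorial number system. Hypothetical helper, CPU-side sketch.
function nthPermutation(n, k) {
  const elems = [];
  for (let i = 0; i < n; i++) elems.push(i);
  const result = [];
  let f = 1;
  for (let i = 2; i < n; i++) f *= i; // f = (n-1)!
  for (let i = n - 1; i >= 1; i--) {
    const idx = Math.floor(k / f); // which remaining element comes next
    k %= f;
    result.push(elems.splice(idx, 1)[0]);
    f /= i;
  }
  result.push(elems[0]); // last remaining element
  return result;
}

// Step 2: given permutation index k and position pos, return that one
// element -- this is the value a single GPU thread would compute.
function permutationElement(n, k, pos) {
  return nthPermutation(n, k)[pos];
}
```

The kernel output would then be an n! x n grid, where the thread at (k, pos) returns `permutationElement(n, k, pos)`.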