This is kind of a specific problem. I recently tested out gpu.js, a library that is supposed to accelerate computations by using WebGL to parallelize them. I made a quick test:
var gpu = new GPU();
function product(v, u) {
  return gpu.createKernel(function(X, Y) {
    return X[this.thread.x] * Y[this.thread.x];
  }).dimensions([v.length])(v, u);
}
var before = new Date().getTime();
console.log(product(numeric.random([100000]), numeric.random([100000])).length);
console.log('Parallel Time: ', (new Date().getTime()) - before);

before = new Date().getTime();
var v = numeric.random([100000]);
var u = numeric.random([100000]);
for (var i = 0; i < v.length; i++) {
  v[i] = v[i] * u[i];
}
console.log(v.length);
console.log('Procedural Time: ', (new Date().getTime()) - before);
And got the following output:
script.js:11 100000
script.js:12 Parallel Time: 340
script.js:20 100000
script.js:21 Procedural Time: 15
The parallel time is over an order of magnitude slower. Is there any reason why this would be the case? I tried this on a few machines with different GPUs. I have tried a few similar operations as well. Am I doing something wrong or is it a problem with the library? Is there some way I can improve this?
When dealing with the GPU, you have to be aware of overhead.
Calls to gpu.createKernel are likely to be very expensive, since the library has to parse your JavaScript code, generate the corresponding GLSL, and send it to WebGL to be compiled and linked. At the very least, you'll want to make that call once and store the resulting kernel in a variable that you reuse every time you call product.

It's also worth being aware that there is a non-zero amount of work required to move data to and from the GPU, so you'll see more gains with more complicated calculations.
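For example, your benchmark could be restructured so the kernel is built once and reused. Here is a minimal sketch using the same gpu.js API as your question; productKernel is just an illustrative name, and since dimensions is fixed at creation time, this assumes the inputs keep the length of 100000 used in your test:

var gpu = new GPU();

// Pay the parse/compile/link cost once, outside the timed section.
// dimensions is set at creation time, so this kernel assumes inputs
// of length 100000, matching the benchmark above.
var productKernel = gpu.createKernel(function(X, Y) {
  return X[this.thread.x] * Y[this.thread.x];
}).dimensions([100000]);

function product(v, u) {
  // Each call now only pays for transferring v and u to the GPU and
  // reading the result back, not for recompiling the kernel.
  return productKernel(v, u);
}

With compilation moved out of the timed region, the remaining per-call cost is mostly data transfer, which for a single element-wise multiply can still dominate the actual arithmetic.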