This is kind of a specific problem. I recently tested out gpu.js. This library is supposed to accelerate computations by using WebGL to parallelize them. I made a quick test:
var gpu = new GPU();

// GPU version: element-wise product of two vectors via a gpu.js kernel
function product(v, u) {
    return gpu.createKernel(function(X, Y) {
        return X[this.thread.x] * Y[this.thread.x];
    }).dimensions([v.length])(v, u);
}

var before = new Date().getTime();
console.log(product(numeric.random([100000]), numeric.random([100000])).length);
console.log('Parallel Time: ', (new Date().getTime()) - before);

// CPU version: plain loop over vectors of the same size
before = new Date().getTime();
var v = numeric.random([100000]);
var u = numeric.random([100000]);
for (var i = 0; i < v.length; i++) {
    v[i] = v[i] * u[i];
}
console.log(v.length);
console.log('Procedural Time: ', (new Date().getTime()) - before);
And got the following output:
script.js:11 100000
script.js:12 Parallel Time: 340
script.js:20 100000
script.js:21 Procedural Time: 15
The parallel time is over an order of magnitude slower. Is there any reason why this would be the case? I tried this on a few machines with different GPUs, and with a few similar operations as well. Am I doing something wrong, or is it a problem with the library? Is there some way I can improve this?
Use performance.now() (or console.time/console.timeEnd) for the measurements instead of Date. Not sure if that's the core of the issue, but I've been having problems with Date too.
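As a minimal sketch (assuming GPU.js, numeric.js and the product() function from the question are already loaded), the same benchmark timed with performance.now() could look like this:

// Same measurement, but with performance.now(), which has
// sub-millisecond resolution, instead of Date.getTime().
var v = numeric.random([100000]);
var u = numeric.random([100000]);

var start = performance.now();
console.log(product(v, u).length);
console.log('Parallel Time: ', performance.now() - start, 'ms');

start = performance.now();
for (var i = 0; i < v.length; i++) {
    v[i] = v[i] * u[i];
}
console.log(v.length);
console.log('Procedural Time: ', performance.now() - start, 'ms');

Also note that product() as written builds a new kernel on every call, so that setup cost lands inside the GPU timing regardless of which timer is used.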