Where do I put performance.now() calls?

I have written a small script to compare function implementations by execution time. It runs a function, for example, 1e6 times and sums the execution time of each call. I am puzzled as to where exactly I should place the performance.now() calls. The first option is to put them inside the for loop, right before and after the function call, and then sum up the results after each iteration. The second option is to put the performance.now() calls right before and after the loop itself. The first approach feels more accurate, but on my machine it consistently gives a final result that is ~30% greater than the second one. Could someone explain why? Where is the correct place for performance.now()?

The first option: performance.now() calls inside the loop. It gives around 1300 ms for computing the factorial of 150 1e6 times on my machine:

    function factorial(x) {
      return x === 1 ? 1 : x * factorial(x - 1);
    }

    function measure(func, arg, times) {
      let timeSpent = 0;
      for (let i = times; i--;) {
        // Time each call individually and accumulate the differences.
        let t1 = performance.now();
        func(arg);
        let t2 = performance.now();
        timeSpent += t2 - t1;
      }
      console.log(`Tested function ${func.name} has been executed ${times} times with argument ${arg} and it has taken %d ms. in total.`, timeSpent);
    }

    measure(factorial, 150, 1e6);

The second option: performance.now() calls outside the loop. It gives around 1000 ms for computing the factorial of 150 1e6 times on my machine:

    function factorial(x) {
      return x === 1 ? 1 : x * factorial(x - 1);
    }

    function measure(func, arg, times) {
      // Time the whole loop once and report a single difference.
      let t1 = performance.now();
      for (let i = times; i--;) func(arg);
      let t2 = performance.now();
      console.log(`Tested function ${func.name} has been executed ${times} times with argument ${arg} and it has taken %d ms. in total.`, t2 - t1);
    }

    measure(factorial, 150, 1e6);

1 Answer

When I do performance measurements, I consider a few things:

  • what is the accuracy of the time service I am using, and is it returning 'wall clock time' or 'CPU time'?

  • what degree of accuracy do I need to have from the measurement (1%, .1%, 10%?)

  • how much time do I want to spend doing the performance run?

  • do I need 'warm-up time' before I do the measurement? For example, if I am measuring how long a transaction takes to run and I am using a JITted language like Java or JavaScript, then I may want to throw away the timings for the first N runs (a sketch of this follows the list). On the other hand, if I am measuring a 'batch' program that just runs once, then I don't need to do that.
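
For instance, here is a rough sketch of the warm-up idea applied to the factorial example from the question; the measureWithWarmup helper and the choice of 5 warm-up runs are arbitrary additions of mine, not anything performance.now() requires:

    function measureWithWarmup(func, arg, times, warmupRuns = 5) {
      // Run the workload a few times first so the JIT can optimize the hot path;
      // these timings are thrown away.
      for (let w = 0; w < warmupRuns; w++) {
        for (let i = times; i--;) func(arg);
      }

      // Only this final, warmed-up run is reported.
      let t1 = performance.now();
      for (let i = times; i--;) func(arg);
      let t2 = performance.now();
      console.log(`${func.name}(${arg}) x ${times}: ${(t2 - t1).toFixed(1)} ms after warm-up`);
    }

    measureWithWarmup(factorial, 150, 1e6);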

Typically, time services have an accuracy somewhere between hundreds of microseconds and a handful of milliseconds. So you want to ensure that you are doing enough loops around whatever you are measuring that the accuracy of your time function isn't causing you grief. If I had a timer that was accurate to (say) 100us, I would make sure I ran enough loops to take at least 1000us (1ms), and probably 10ms.
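
If you want a feel for the granularity of performance.now() on your own browser and machine, one rough way is to watch for the smallest nonzero step the clock reports. This is only a sketch, not an exact probe, and the 1000-sample count is an arbitrary choice:

    function estimateTimerGranularity(samples = 1000) {
      let smallest = Infinity;
      for (let i = 0; i < samples; i++) {
        let start = performance.now();
        let now = start;
        // Spin until the clock visibly advances, then record the step size.
        while (now === start) now = performance.now();
        smallest = Math.min(smallest, now - start);
      }
      return smallest; // in milliseconds
    }

    console.log(`Smallest observed tick: ${estimateTimerGranularity()} ms`);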

There are also lots of things that cause jitter when doing performance measurements; jitter is the inevitable variability you get in practice. So I would also repeat your performance runs several times and look at the mean and standard deviation of the runs to see how consistent your results are (this can also help expose things like JIT compilation, since it shows up as drastic outliers in your results).
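
As a rough sketch of that idea, you could wrap the second variant from the question in a few repeated runs and report the mean and standard deviation; the repeatMeasure helper and the run count of 10 are arbitrary choices of mine:

    function repeatMeasure(func, arg, times, runs = 10) {
      const results = [];
      for (let r = 0; r < runs; r++) {
        // Same approach as the second option: one timing pair around the whole loop.
        let t1 = performance.now();
        for (let i = times; i--;) func(arg);
        let t2 = performance.now();
        results.push(t2 - t1);
      }

      const mean = results.reduce((a, b) => a + b, 0) / runs;
      const variance = results.reduce((a, b) => a + (b - mean) ** 2, 0) / runs;
      const stdDev = Math.sqrt(variance);

      console.log(`${func.name}: mean ${mean.toFixed(1)} ms, std dev ${stdDev.toFixed(1)} ms over ${runs} runs`);
      return { mean, stdDev, results };
    }

    repeatMeasure(factorial, 150, 1e6);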

To actually answer your question, I would go with your second option. You want to ensure that enough work is being done between the two calls that the accuracy of your time service isn't an issue. That is also most likely why your first option reads ~30% higher: calling performance.now() twice per iteration adds its own call overhead and rounding error to each of the 1e6 measured intervals, and those errors accumulate in the sum.
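
If you want to see how much the inner-loop timing itself contributes, you can run the same per-call loop with no work at all between the two calls; whatever it accumulates is pure measurement overhead and rounding. This is only a sketch, and the numbers will vary a lot by browser and machine:

    function measureTimerOverhead(times) {
      let timeSpent = 0;
      for (let i = times; i--;) {
        let t1 = performance.now();
        // Deliberately no work here: any nonzero sum is timer overhead/rounding.
        let t2 = performance.now();
        timeSpent += t2 - t1;
      }
      console.log(`performance.now() overhead over ${times} iterations: ${timeSpent.toFixed(1)} ms`);
    }

    measureTimerOverhead(1e6);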