Correct balance between number of iterations and number of forks in JMH


I am exploring OpenJDK JMH for benchmarking my code. As I understand it, JMH forks multiple JVMs by default in order to defend the test from previously collected "profiles", which is explained very well in this sample code.

However, my question is: what impact will it have on the results if I execute with each of the following two approaches?

1) 1 fork, 100 iterations
2) 10 forks, 10 iterations each

And which approach will give more accurate results?
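For reference, the two setups could be written with JMH annotations roughly as below (a minimal sketch; the class name, the placeholder workload, and the warmup settings are made up):

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5)
    @Fork(1)                        // approach 1: a single fork...
    @Measurement(iterations = 100)  // ...with 100 measurement iterations
    // Approach 2 would instead use:
    // @Fork(10)
    // @Measurement(iterations = 10)
    public class ForkVsIterationBenchmark {

        @Benchmark
        public long placeholderWorkload() {
            // Stand-in workload; replace with the code under test.
            long sum = 0;
            for (int i = 0; i < 1_000; i++) {
                sum += i;
            }
            return sum;
        }
    }

The same knobs are also exposed on the JMH command line as -f (forks) and -i (measurement iterations).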

2 Answers

Accepted answer:

It depends. Multiple forks are needed to estimate run-to-run variance; see JMHSample_13_RunToRun. Therefore, a single fork is definitely worse. Then, if you ask which is better, a 10x100 run or a 100x10 run, that again depends on which is the bigger concern: run-to-run variance or in-run variance.
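For illustration, here is a sketch in the spirit of JMHSample_13_RunToRun (not the actual sample): a value chosen once per fork stands in for real sources of run-to-run variance such as JIT or memory-layout luck, and only running several forks can expose it.

    import java.util.Random;
    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MILLISECONDS)
    @State(Scope.Thread)
    public class RunToRunVariance {

        // Decided once per fork (default @Setup level is Trial), so all iterations
        // inside one fork see the same value, while different forks see different ones.
        long sleepTimeMs;

        @Setup
        public void setup() {
            sleepTimeMs = new Random().nextInt(50);
        }

        // With a single fork the score looks perfectly stable; only multiple forks
        // reveal how much the result moves from run to run.
        @Benchmark
        @Fork(10)
        public void tenForks() throws InterruptedException {
            TimeUnit.MILLISECONDS.sleep(sleepTimeMs);
        }

        @Benchmark
        @Fork(1)
        public void oneFork() throws InterruptedException {
            TimeUnit.MILLISECONDS.sleep(sleepTimeMs);
        }
    }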

Answer:

It depends on how much the results vary per fork vs. per iteration, which is workload-specific.

If you want a rigorous statistical approach to this tradeoff, check out "Rigorous Benchmarking in Reasonable Time" (Kalibera and Jones). Equation 3 gives the optimal repetition counts per level (in your case, the number of forks to run and the number of iterations per fork) from the observed variances between forks and between iterations, together with the cost of adding one more repetition at each level.
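As a rough illustration only (the formula below is a paraphrase of the two-level case and the numbers are made up; consult the paper for the exact variance estimators used in Equation 3): the suggested number of iterations per fork grows with the per-iteration variance and with the cost of spinning up a fork, and shrinks as the between-fork variance grows.

    /**
     * Back-of-the-envelope helper for the two-level case (forks and iterations).
     * Paraphrase of the Kalibera-Jones result; the paper's variance estimators
     * are more careful than the raw sample variances assumed here.
     */
    public final class OptimalIterations {

        /**
         * @param iterationVariance variance observed between iterations within a fork
         * @param forkVariance      variance observed between per-fork means
         * @param iterationCostSec  cost of one extra measurement iteration, in seconds
         * @param forkCostSec       cost of one extra fork (JVM startup + warmup), in seconds
         * @return suggested number of measurement iterations per fork
         */
        static double iterationsPerFork(double iterationVariance,
                                        double forkVariance,
                                        double iterationCostSec,
                                        double forkCostSec) {
            // More iterations per fork pay off when iterations are noisy and forks
            // are expensive; more forks pay off when runs differ a lot from each other.
            return Math.sqrt((forkCostSec * iterationVariance)
                    / (iterationCostSec * forkVariance));
        }

        public static void main(String[] args) {
            // Hypothetical numbers: noisy iterations, moderately stable forks,
            // 1 s per iteration, 30 s to start and warm up a fork.
            double r = iterationsPerFork(4.0, 1.0, 1.0, 30.0);
            System.out.printf("~%.0f iterations per fork, spend the rest of the budget on forks%n", r);
        }
    }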