For a program that is 70% parallelizable, what will be the speedup relative to a uniprocessor from running it on a 4-, 8-, and 16-way multiprocessor, assuming perfect load balancing?
So I am having a hard time solving this problem. I know the equation is:
Improved execution time = (affected execution time / improvement factor) + unaffected execution time
But I am not sure what values I should be plugging in, why, or how I should rewrite the equation to solve the problem.
Thanks!
Maybe writing it out helps:
So if the sequential time is T and the fraction Q is parallelizable, then the total time it takes when parallelized to n cores is (1 − Q) * T + Q * T / n. The speedup relative to one core is then T divided by that, i.e. 1 / ((1 − Q) + Q / n).
(The Wikipedia article on Amdahl's law uses Q = 1 − B, where B is the serial fraction.)
Interesting cases are Q = 0, when there is no speed-up at all and the algorithm takes time T on any number of cores, and Q = 1, when the algorithm is perfectly parallel and takes time T / n on n cores.
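Plugging in Q = 0.7, here is a minimal Python sketch of the calculation (the helper name amdahl_speedup is just illustrative):

```python
def amdahl_speedup(q, n):
    # Amdahl's law: speedup = 1 / ((1 - q) + q / n),
    # where q is the parallelizable fraction and n is the core count.
    return 1.0 / ((1.0 - q) + q / n)

for n in (4, 8, 16):
    print(f"{n:2d} cores: speedup = {amdahl_speedup(0.7, n):.2f}")
```

This prints speedups of about 2.11, 2.58, and 2.91, so even with 16 cores the 30% sequential portion caps the speedup below 1 / 0.3 ≈ 3.33.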