In Cormen's own words - "The difference is that with the deterministic algorithm, a particular input can elicit that worst-case behavior. With the randomized algorithm, however, no input can always elicit the worst-case behavior."
How does adding a randomized pivot change anything? The algorithm is still going to perform badly on some particular input, and if we consider every input equally likely, the expected behavior is no better than that of standard quicksort; the only difference is that we don't know in advance which particular input will cause the worst-case time complexity. So why is the randomized version considered better?
Consider the following version of quicksort, where we always pick the last element as the pivot. Now consider, for example, an array whose last element is its smallest:

2 8 7 6 5 4 3 1
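The variant described above might look like this in Python — a minimal sketch with a Lomuto-style partition; the function names and the demo array are my own, not from the original post:

```python
def partition(arr, low, high):
    """Partition arr[low..high] around its last element; return the pivot's final index."""
    pivot = arr[high]
    i = low - 1                      # boundary of the "<= pivot" region
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # move pivot into place
    return i + 1

def quicksort(arr, low=0, high=None):
    """Standard quicksort: the pivot is always the last element of the sub-array."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quicksort(arr, low, p - 1)   # elements <= pivot
        quicksort(arr, p + 1, high)  # elements > pivot

data = [2, 8, 7, 6, 5, 4, 3, 1]
quicksort(data)
print(data)   # [1, 2, 3, 4, 5, 6, 7, 8]
```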
When this array is sorted using our version of quicksort, the partition step will pick the smallest element, the last one, as its pivot. In the first iteration, it will change the array like this:

1 8 7 6 5 4 3 2

Now, it will recurse on the sub-arrays. The part to the left of the pivot is empty, so the only sub-array is

s1 = 8 7 6 5 4 3 2

In s1 it will again pick the smallest element, 2, as its pivot, and only 8 and 2 will interchange positions. Every partition therefore splits off just the pivot, leaving one sub-array of size n-1. So if we formulate a recurrence relation for the complexity, it will be

T(n) = T(n-1) + O(n)

which corresponds to O(n^2).
So, for this array, the standard version will always take O(n^2) time.
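To see the quadratic behavior concretely, here is a small instrumented sketch (my own construction, not from the original post) that counts element comparisons. An already-sorted array forces the same one-sided n-1/0 splits, so the count comes out to exactly n(n-1)/2:

```python
import sys

def quicksort_count(arr):
    """Sort arr in place with last-element-pivot quicksort; return the comparison count."""
    count = 0
    def part(low, high):
        nonlocal count
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            count += 1               # one element comparison per loop step
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1
    def qs(low, high):
        if low < high:
            p = part(low, high)
            qs(low, p - 1)
            qs(p + 1, high)
    qs(0, len(arr) - 1)
    return count

sys.setrecursionlimit(10000)         # recursion depth is n on these inputs
for n in (100, 200, 400):
    print(n, quicksort_count(list(range(n))))
# prints 4950, 19900, 79800 -- exactly n*(n-1)/2, i.e. quadratic growth
```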
In the randomized version, we first exchange the last element with a randomly chosen element of the array and then select it as the pivot. So, for the given array, the pivot position is random on every call, and the split will most probably land somewhere near the middle. The expected recurrence is then

T(n) = 2T(n/2) + O(n)

which solves to O(n log n).
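A minimal sketch of that randomized version, under the same assumptions as before (names are mine, not from the original post):

```python
import random

def randomized_quicksort(arr, low=0, high=None):
    """Quicksort where the pivot is a uniformly random element of the sub-array."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        # Randomize the pivot choice: swap a random element into the last slot,
        # then partition exactly as the deterministic version does.
        r = random.randint(low, high)
        arr[r], arr[high] = arr[high], arr[r]
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        p = i + 1
        randomized_quicksort(arr, low, p - 1)
        randomized_quicksort(arr, p + 1, high)

data = [2, 8, 7, 6, 5, 4, 3, 1]
randomized_quicksort(data)
print(data)   # [1, 2, 3, 4, 5, 6, 7, 8]
```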
That's why we consider randomized quicksort better than standard quicksort: no fixed input can reliably trigger the worst case, because the probability of repeatedly getting bad splits in randomized quicksort is very low on any input.
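As a rough empirical check of that claim — my own sketch, not from the original post — the instrumented randomized version below averages about 2·n·ln(n) comparisons on a sorted array of 1000 elements, far below the n(n-1)/2 = 499,500 comparisons the deterministic last-element-pivot version needs on the same input:

```python
import math
import random
import sys

def randomized_count(arr):
    """Sort arr in place with a random pivot; return the comparison count."""
    count = 0
    def qs(low, high):
        nonlocal count
        if low < high:
            r = random.randint(low, high)          # random pivot choice
            arr[r], arr[high] = arr[high], arr[r]
            pivot = arr[high]
            i = low - 1
            for j in range(low, high):
                count += 1
                if arr[j] <= pivot:
                    i += 1
                    arr[i], arr[j] = arr[j], arr[i]
            arr[i + 1], arr[high] = arr[high], arr[i + 1]
            qs(low, i)
            qs(i + 2, high)
    qs(0, len(arr) - 1)
    return count

sys.setrecursionlimit(10000)
n = 1000
trials = [randomized_count(list(range(n))) for _ in range(20)]
avg = sum(trials) / len(trials)
print(f"n={n}: avg comparisons {avg:.0f}, "
      f"deterministic worst case {n*(n-1)//2}, "
      f"2 n ln n = {2*n*math.log(n):.0f}")
```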