Difficulty in comprehending asymptotic notation


As far as I know and research,

Big-O notation describes the worst case of an algorithm's time complexity.

Big-Omega notation describes the best case of an algorithm's time complexity.

Big-Theta notation describes the average case of an algorithm's time complexity.

Source

However, I recently saw a discussion in which some people said that

O for worst case is a "forgivable" misconception. Θ for best is plain wrong. Ω for best is also forgivable. But they remain misconceptions.

The big-O notation can represent any complexity. In fact it can describe the asymptotic behavior of any function.

I remember learning the former in my university course as well. I'm still a student, so if I have these wrong, could you please explain?


There are 2 best solutions below

Jörg W Mittag

Bachmann-Landau notation has absolutely nothing whatsoever to do with the computational complexity of algorithms. This should already be obvious from the fact that the idea of a computing machine that executes an algorithm didn't really exist in 1894, when Bachmann introduced this notation (Landau adopted and popularized it later).

Bachmann-Landau notation describes the growth rates of functions by grouping them together in sets of functions that grow at roughly the same rate. Bachmann-Landau notation does not say anything about what those functions mean. They are just functions. In fact, they don't have to mean anything at all.

All they mean is this:

  • f ∈ ο(g): f grows slower than g
  • f ∈ Ο(g): f does not grow significantly faster than g
  • f ∈ Θ(g): f grows as fast as g
  • f ∈ Ω(g): f does not grow significantly slower than g
  • f ∈ ω(g): f grows faster than g

It does not say anything about what f or g are, or what they mean.

Note that there are actually two conflicting, incompatible definitions of Ω; the one given here (the Knuth convention) is the one that is more useful for computational complexity theory. Also note that these are only very broad intuitions; when in doubt, look at the definitions.
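For reference, these intuitions correspond (in that same convention, for eventually positive functions) to the following standard definitions, written here in LaTeX:

    f \in O(g)      \iff \exists c > 0\ \exists n_0\ \forall n \ge n_0 : f(n) \le c \cdot g(n)
    f \in \Omega(g) \iff \exists c > 0\ \exists n_0\ \forall n \ge n_0 : f(n) \ge c \cdot g(n)
    f \in \Theta(g) \iff f \in O(g) \text{ and } f \in \Omega(g)
    f \in o(g)      \iff \forall c > 0\ \exists n_0\ \forall n \ge n_0 : f(n) \le c \cdot g(n)
    f \in \omega(g) \iff \forall c > 0\ \exists n_0\ \forall n \ge n_0 : f(n) \ge c \cdot g(n)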

If you want, you can use Bachmann-Landau notation to describe the growth rate of a population of rabbits as a function of time, or the growth rate of a human's belly as a function of beers.

Or, you can use it to describe the best-case step complexity, worst-case step complexity, average-case step complexity, expected-case step complexity, amortized step complexity, best-case time complexity, worst-case time complexity, average-case time complexity, expected-case time complexity, amortized time complexity, best-case space complexity, worst-case space complexity, average-case space complexity, expected-case space complexity, or amortized space complexity of an algorithm.

AudioBubble

These assertions are at best inaccurate and at worst wrong. Here is the truth.

  • The running time of an algorithm is not a function of N alone (it also depends on the particular data set), so you cannot discuss its asymptotic complexity directly. All you can say is that it lies between the best and worst cases (a small sketch follows this list).

  • The worst case, best case and average case running times are functions of N, though the average case depends on the probabilistic distribution of the data, so is not uniquely defined.
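To make the first point concrete, here is a small sketch (my own illustration, not part of the original answer) that counts the comparisons made by a linear search: two inputs of the same size N can cost very different amounts.

    def linear_search_comparisons(items, target):
        """Return the number of comparisons a linear search performs."""
        comparisons = 0
        for x in items:
            comparisons += 1
            if x == target:
                break
        return comparisons

    N = 1000
    data = list(range(N))
    print(linear_search_comparisons(data, 0))      # best case: 1 comparison
    print(linear_search_comparisons(data, N - 1))  # worst case: N comparisons
    # For a fixed N the cost ranges from Θ(1) (best case) to Θ(N) (worst case);
    # "the" running time is not a single function of N.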

Then the asymptotic notation is such that

  • O(f(N)) denotes an upper bound, which can be tight or not;

  • Ω(f(N)) denotes a lower bound, which can be tight or not;

  • Θ(f(N)) denotes a bilateral bound, i.e. the conjunction of O(f(N)) and Ω(f(N)); it is perforce tight (see the small example right after this list).
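For example, take f(N) = 3N + 5; then (writing the relations in LaTeX):

    3N + 5 \in O(N),\quad 3N + 5 \in O(N^2),\quad 3N + 5 \in O(2^N) \quad \text{(all true; only the first is tight)}
    3N + 5 \in \Omega(N),\quad 3N + 5 \in \Omega(1) \quad \text{(both true; only the first is tight)}
    3N + 5 \in \Theta(N) \quad \text{(the bilateral bound, necessarily tight)}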

So,

  • All of the worst-case, best-case and average-case complexities have a Θ bound, as they are functions; in practice this bound can be too difficult to establish and we content ourselves with looser O or Ω bounds instead.

  • It is absolutely not true that the Θ bound is reserved for the average case.

Examples:

The worst case of quicksort is Θ(N²) and its best and average cases are Θ(N log N). So, by abuse of language, the running time of quicksort is O(N²) and Ω(N log N).
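To see this concretely, here is a small sketch (my own, using a deliberately naive first-element pivot so the worst case is easy to trigger) that counts partition comparisons: an already-sorted input costs about N²/2 comparisons, while a random permutation of the same size stays close to N log N.

    import random

    def quicksort_comparisons(items):
        """Sort a copy of items with a naive quicksort; return the number of partition comparisons."""
        count = 0

        def qs(lst):
            nonlocal count
            if len(lst) <= 1:
                return lst
            pivot, rest = lst[0], lst[1:]
            count += len(rest)  # one partition comparison per remaining element
            left = [x for x in rest if x < pivot]
            right = [x for x in rest if x >= pivot]
            return qs(left) + [pivot] + qs(right)

        qs(list(items))
        return count

    N = 500
    print(quicksort_comparisons(random.sample(range(N), N)))  # roughly N log N comparisons
    print(quicksort_comparisons(list(range(N))))              # about N*(N-1)/2 comparisons (worst case)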

The worst case, best case and average case of selection sort are all three Θ(N²). (Insertion sort, by contrast, is worst-case and average-case Θ(N²), but its best case, an already-sorted input, is Θ(N).)

Any algorithm that needs to look at all of its input is best-case, average-case and worst-case Ω(N) (and without more information we cannot give any upper bound).

Multiplying two N×N matrices is trivially doable in time O(N³), and cleverer algorithms do better (the best known exponents are below 2.38). But we still don't know the precise Θ bound of this operation: Ω(N²) is an obvious lower bound, but it is not known whether it is tight. (Worst, best and average cases have the same complexity here.)


There is some confusion stemming from the fact that the best/average/worst-case running times are well defined, while the general-case running time lies in a range (it is in fact a random variable). In addition, algorithm analysis is often tedious, if not intractable, so we use simplifications that can lead to loose bounds.