What functions are included in Big-O Notation?


I am learning about Big-O Notation and working on an assignment I am stuck on. Basically, I have been given different functions and have to write the Big-O for each of them. I think my confusion lies in which functions can be included in Big-O. I understand the hierarchy as follows: O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n) < O(n!).

I also understand why constants and smaller terms are left out, since we are just looking for a bound. My question is what happens when a function is not written in these terms. For example (this is not my exact question, but similar), 3^n is not a constant multiple of 2^n. Is the Big-O then O(3^n), or still O(2^n)? My thinking is O(3^n), as 3^n grows faster than 2^n and Big-O is an upper bound. But I haven't seen Big-O expressed with a base other than the 2 or n listed above. Is this correct thinking?


3 Answers

BEST ANSWER

For your specific question, O(3^n) is different from O(2^n). One way to see this is to look at the ratio of the values as n --> infinity. In this case, the ratio is:

(1.5)^n

And this grows without bound.

Similarly, n^3 is different from n^2 because the ratio is:

n

And this grows without bound as n --> infinity.

On the other hand, 3*n and 2*n are the same. Their ratio is:

1.5

This does not grow as n --> infinity.
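A quick numeric sketch of the three ratios from this answer, showing which ones grow without bound and which stay constant:

```python
# Ratio 1: 3^n / 2^n = 1.5^n -- grows without bound.
def ratio_exponentials(n):
    return 3**n / 2**n

# Ratio 2: n^3 / n^2 = n -- grows without bound.
def ratio_polynomials(n):
    return n**3 / n**2

# Ratio 3: (3n) / (2n) = 1.5 -- constant for every n.
def ratio_linears(n):
    return (3 * n) / (2 * n)

for n in (10, 100, 1000):
    print(n, ratio_exponentials(n), ratio_polynomials(n), ratio_linears(n))
```

The first two ratios keep increasing as n does, so 3^n and n^3 are in strictly larger classes than 2^n and n^2; the third ratio is the same 1.5 for every n, so 3n and 2n belong to the same class O(n).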

It is very important to understand that not all functions are used for big-O. Basically, the "argument" in big-O represents a class of functions with the same asymptotic behavior as n --> infinity. The simplest member of the class is usually chosen for the notation.

Remember that big-O is an upper bound on complexity. When you are actually analyzing an algorithm, you might use more complex functions and include constants. In fact, these can be very important, and explain why an algorithm such as bubble sort may be optimal for small data sets even though it is not optimal for large ones.
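To make the constants concrete, here's a small sketch: a bubble sort instrumented to count comparisons. Without an early-exit optimization it always makes exactly n(n-1)/2 comparisons, which is Θ(n^2) yet tiny in absolute terms for small n:

```python
def bubble_sort(items):
    """Return a sorted copy of items and the number of comparisons made."""
    a = list(items)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        # After pass i, the last i+1 elements are in their final positions.
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

result, count = bubble_sort([5, 2, 9, 1, 7])
print(result, count)  # 5 elements -> 5*4/2 = 10 comparisons
```

For 5 elements that is only 10 comparisons, with essentially no overhead; the hidden constants in an O(n log n) sort can make it slower at this scale, even though it wins decisively as n grows.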


What functions are included in Big-O Notation?

All of them*.


However, some functions are more commonly used than others, like the O(log n) you mentioned. The reason is the nature of the problems we attempt to solve with most algorithms (sorting, for example), which makes it convenient for us to use some functions as upper bounds more frequently than others.


PS: More specifically, it's a list of classes of functions, considered as n asymptotically approaches infinity. For more, read about the order of functions.


Formally, f(x) = O(g(x)) if and only if there exist constants c > 0 and n > 0 such that:

c*g(x) >= f(x) for all x greater than or equal to n.

With this definition in mind, you can plug any function into the O and you will get a set of functions satisfying the property.
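For instance, here is the definition applied to a concrete pair of functions (the witnesses c = 6 and n = 3 are hypothetical choices for this illustration, not the only ones that work):

```python
# Check that f(x) = 5x + 3 is O(x) using the witnesses c = 6, n = 3:
# 6*x >= 5*x + 3 is equivalent to x >= 3, so it holds past the threshold.
def f(x):
    return 5 * x + 3

def g(x):
    return x

c, n = 6, 3
assert all(c * g(x) >= f(x) for x in range(n, 10_000))
print("5x + 3 is O(x) with witnesses c = 6, n = 3")
```

Any c > 5 works here with a suitable threshold; the definition only asks that some pair of witnesses exists.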

Another way to think about it is to define O as a function from functions to sets of functions. That is to say O(g) = { f : c*g(x) >= f(x) for all x > n} for constants c and n as described above. Using this definition one would say f is in O(g) which IMHO makes much more sense, but we are stuck with the notation using the equals sign for historical reasons.

Now, looking at the definition we can understand the hierarchy you mention. O(n^2) comes after O(n) because n = O(n^2) is true but n^2 = O(n) is false (note again I'm not using != as that = sign in the O notation is not a real equals but an arbitrary syntactic symbol you might as well replace with a dwarven rune).
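That asymmetry can be sketched numerically (the witnesses below are hypothetical choices): c = 1 with threshold 1 works for n = O(n^2), while for n^2 = O(n) every proposed constant c eventually fails, because the ratio n^2 / n = n grows without bound.

```python
# n = O(n^2): the witnesses c = 1, threshold = 1 suffice.
assert all(x <= 1 * x * x for x in range(1, 1000))

# n^2 = O(n) fails: whatever constant c you propose,
# x^2 > c*x as soon as x exceeds c.
for c in (10, 1000, 10**6):
    x = c + 1
    assert x * x > c * x
print("n is O(n^2), but n^2 is not O(n)")
```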

Another thing: if you want to cheat on your assignment, just answer O(n!) for everything and you'll be technically correct (the best kind of correct). Usually, though, when people ask "what order is this function?" they are asking for a tight bound, that is, the slowest-growing function that still acts as a bound. You might want to look up the definitions of the other related notations (Big-Θ and Big-Ω) too.