Why is array element referencing a constant time operation?


Let me briefly explain how I think array element referencing works.

Start address of the array + (data type size * index of the element to be fetched) = Address of the required array element

Basically, the above operation is executed whenever b[i] is written (where b is an array and i is the index of the desired element). This operation is supposed to take constant time. However, I think otherwise. Consider the following examples:

Example 1

579015713945 (start address) + 16 (datatype size) * 2789280 (index of the required element) = Address of the required element.

Example 2

579015713945 (start address) + 16 (datatype size) * 1 (index of the required element) = Address of the required element.

Example 3

100 (start address) + 8 (datatype size) * 1 (index of the required element) = Address of the required element.
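
For illustration, here is a minimal C sketch (the array b, its element type, and the index i are all hypothetical) showing that the address the compiler produces for b[i] is exactly the start address plus element size times index:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    long b[8];   /* hypothetical array; each element occupies sizeof(long) bytes */
    int i = 5;   /* hypothetical index */

    /* Address arithmetic done by hand: start address + element size * index. */
    uintptr_t computed = (uintptr_t)(void *)b + sizeof(long) * (uintptr_t)i;

    /* The compiler computes the same value for &b[i]. */
    printf("&b[i]    = %p\n", (void *)&b[i]);
    printf("computed = %p\n", (void *)computed);
    return 0;
}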

Won't a machine be able to compute examples 2 and 3 faster than example 1? If the amount of time needed varies, how can the operation be called constant-time?

I computed the values of examples 1 and 3 on Colab, and both took 0 seconds to compute (haha). The particular numbers do not matter here; I could have made them much bigger to widen the execution-time difference between example 1 and example 3. Then example 1 would probably have taken 1 second and example 3 would have taken 0.1 seconds. At any rate, I think I have been able to convey my general question.


There are 2 answers below.

Chris Dodd (best answer)

ALUs in CPUs are fixed size and operate on a fixed number of bits -- on a 64-bit machine, the ALUs are 64 bits wide and will do an add or multiply[1] of 64-bit values in constant time. Thus, as long as your addresses and indexes are less than 2^64 (which they must be to be valid addresses), the time will be constant.


[1] On some machines, multiplies of some corner-case values (such as multiplying by 0) may take less time, but this is usually ignored when talking about complexity or big-O time. The distinction may be important in other domains (such as cryptography), so you need to be careful about exactly what "constant" means in your domain. It varies.
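
To see this empirically, here is a rough timing sketch (POSIX C, assuming clock_gettime is available; the iteration count is arbitrary) that repeats the same 64-bit base + size * index computation using the small numbers of example 3 and the large numbers of example 1. On a 64-bit machine, both loops should take essentially the same time:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Time many repetitions of base + size * (index + k); the varying k keeps
   the multiply from being hoisted out of the loop, so every iteration does
   one 64-bit multiply and two 64-bit adds. */
static double time_address_calc(uint64_t base, uint64_t size, uint64_t index) {
    volatile uint64_t sink = 0;   /* volatile so the loop isn't optimized away */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t k = 0; k < 100000000ULL; k++)
        sink = base + size * (index + k);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sink;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    /* Numbers taken from example 3 and example 1 in the question. */
    printf("small operands: %.3f s\n", time_address_calc(100ULL, 8ULL, 1ULL));
    printf("large operands: %.3f s\n", time_address_calc(579015713945ULL, 16ULL, 2789280ULL));
    return 0;
}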

Francesco Derme

O(1) time just means that you can give an upper bound on how long the program will take to run that isn't affected by any of the input parameters.

Compare that with linear time, where the upper bound on the time taken can be expressed as m*n + k for some constants m and k, where n is the size of the input.

Note that just because a program runs in O(1), it doesn't mean it will take the same amount of time for every input.
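
To make that concrete, here is a tiny hypothetical C example (not from the linked answer): the function's running time depends on its input, yet it is bounded above by a constant, so it is still O(1):

#include <stdio.h>

/* O(1) does not mean identical time for every input: the loop below runs
   n times for small n and at most 1000 times for large n, so the running
   time varies with the input but is bounded by a constant. */
static int capped_sum(int n) {
    int limit = n < 1000 ? n : 1000;   /* never more than 1000 iterations */
    int total = 0;
    for (int i = 0; i < limit; i++)
        total += i;
    return total;
}

int main(void) {
    printf("%d\n", capped_sum(5));        /* 5 iterations */
    printf("%d\n", capped_sum(1000000));  /* still only 1000 iterations */
    return 0;
}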

This is the original and more complete answer to a similar question

Edit thanks to Chris Dodd: generally speaking, "constant time" is used to mean the same thing as "O(1) time", but in some contexts constant time means truly and repeatedly identical time for all possible inputs. If that's what you meant, then this answer doesn't apply.