Recently, I've started learning data structures with Python.
list_ = [6, 4, 3, 2, 1, 7]
expected_sum_result = 9

def two_pair_sum_using_complement(array, expected_sum):
    unique_numbers = set()
    for num in array:
        if num in unique_numbers:
            return True
        unique_numbers.add(expected_sum - num)
        print(unique_numbers)
    return False

print(
    two_pair_sum_using_complement(array=list_, expected_sum=expected_sum_result))
In the above code, you could simply say the big-O is O(n), but here is my doubt: I've used the `in` operator (`if num in unique_numbers`), which can have O(n) complexity. I referred to this site https://wiki.python.org/moin/TimeComplexity and I've attached a screenshot.
So, may I conclude that since I have a for loop that is O(n), with an `in` operator that is O(n) inside that loop, the overall complexity is O(n^2)?
I've tried some websites and AI tools, but I still can't understand this clearly.
A Python set is typically implemented with a hash table.
This is a data structure in which lookup requires O(n) in the worst case, but only O(1) in the average case.
The basic idea of this data structure is that we have an array of buckets and a hash function that determines which value goes to which bucket.
The average case is encountered when the values are properly distributed between the buckets. See more info in the link above.
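To make the bucket idea concrete, here is a minimal toy sketch of a hash set using separate chaining. (Note this is only an illustration: CPython's real `set` uses open addressing and resizes its table, but the key point is the same: a lookup hashes the value, jumps to one bucket, and scans only that bucket, not the whole collection.)

```python
class TinyHashSet:
    """Toy hash set with a fixed number of buckets (illustration only)."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, value):
        # The hash function decides which bucket a value belongs to.
        return self.buckets[hash(value) % len(self.buckets)]

    def add(self, value):
        bucket = self._bucket(value)
        if value not in bucket:  # scan only this one bucket
            bucket.append(value)

    def __contains__(self, value):
        # Lookup touches a single bucket: O(1) on average when values
        # are well distributed, O(n) only if everything collides.
        return value in self._bucket(value)

s = TinyHashSet()
for n in [6, 4, 3, 2, 1, 7]:
    s.add(n)
print(4 in s)  # True
print(5 in s)  # False
```

The worst case O(n) happens only when all values hash to the same bucket, so that one bucket degenerates into a plain list scan.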
This matches the information in the page that you referred to:
The bottom line:
Since you have a linear loop and a set lookup per iteration, the overall complexity of your code is O(n^2) in the worst case, but O(n) in the average one.
Note that the analysis above ignores the `print` statement, which is not really part of the algorithm.