This isn't so much a problem as a genuine question, coming from a place of ignorance about how Python works under the hood.
I saw a bit about magic methods and decided to try writing some code with them to see exactly how they were supposed to work. Through trial and error, deliberately doing things wrong, I found that this code actually works:
class Vector:
    def __init__(self):
        self.n1 = 1

    def __sub__(self, n):
        return self.n1 + n

x = Vector()
print(x - 3)
Thing is, the code returns the value 4, which means it is adding the two numbers instead of either telling me it can't handle this because I used the wrong sign, or giving me a negative number as a result.
At the same time, if I write the code like this:
class Vector:
    def __init__(self):
        self.n1 = 1

    def __sub__(self, n):
        return self.n1 + n

x = Vector()
print(x + 3)
... it stops working right away and tells me I'm doing something wrong (well, obviously, I'm trying to do addition with only a subtraction method defined! It's not supposed to work!)
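For reference, the error raised here is a TypeError; in CPython 3 the message reads along these lines (exact wording can vary between versions):

TypeError: unsupported operand type(s) for +: 'Vector' and 'int'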
I was wondering why exactly Python even allowed my first slip-up to begin with. What is happening behind the scenes for it to actually run?
Python doesn't care what you do inside a function; that's the whole point of letting you define the operators yourself. The first time, you defined __sub__, which is the method Python calls when you write x - y, and you chose to perform addition inside it, so x - 3 runs your body and returns 1 + 3. The second time, you still only defined __sub__ and never defined __add__, so Python has nothing to call when you write x + y, and it raises the TypeError you saw.
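To make that concrete, here is a minimal sketch based on the question's Vector class; the __add__ method is the one addition of mine, showing what Python actually needs before x + y can work:

class Vector:
    def __init__(self):
        self.n1 = 1

    def __sub__(self, n):
        # Python only checks that __sub__ exists when it sees x - n;
        # the body is free to do anything, including addition.
        return self.n1 + n

    def __add__(self, n):
        # Defining __add__ is what makes x + n legal for this class.
        return self.n1 + n

x = Vector()
print(x - 3)  # 4: x - 3 calls __sub__, whose body happens to add
print(x + 3)  # 4: works now, because __add__ is defined

Delete __add__ again and the last line goes back to raising the TypeError above, because Python finds no __add__ on Vector (and no __radd__ on int that accepts a Vector) to dispatch to.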