What tests can I perform to ensure I have overridden __setattr__ correctly?


I have been studying Python for a little while now, and I've come to understand that overriding __setattr__ correctly can be troublesome (to say the least!).

What are some effective ways to ensure/prove to myself the override has been done correctly? I'm specifically concerned about ensuring the override remains consistent with the descriptor protocol and MRO.

(Tagged as Python 3.x since that's what I am using, but the question is certainly applicable to other versions as well.)

Example code in which the "override" exhibits default behavior (but how can I prove it?):

class MyClass():
    def __setattr__(self,att,val):
        print("I am exhibiting default behavior!")
        super().__setattr__(att,val)
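As a first, coarse check (a sketch, not a full equivalence proof), one can compare where assignments land against a plain class, and confirm that a property setter is still consulted; the class names below are hypothetical and the `print()` is dropped to keep the output clean:

```python
# Coarse sanity check: the delegating override should behave exactly
# like default attribute setting.

class MyClass:
    def __setattr__(self, att, val):
        super().__setattr__(att, val)

class Plain:
    pass

a, b = MyClass(), Plain()
a.x = 1
b.x = 1
# assignments land in the instance __dict__, same as a plain class
assert a.__dict__ == b.__dict__ == {'x': 1}

class WithProp(MyClass):
    @property
    def p(self):
        return self._p
    @p.setter
    def p(self, v):
        self._p = v

w = WithProp()
w.p = 5  # goes through the property setter, not straight into __dict__
assert w._p == 5 and 'p' not in w.__dict__
```

This only samples behavior, of course; the answers below give more systematic tests.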

Contrived example in which the override violates the descriptor protocol (instance storage lookup occurs prior to the descriptor lookup - but how can I test it?):

class MyClass():
    def __init__(self,mydict):
        self.__dict__['mydict'] = mydict
    @property
    def mydict(self):
        return self.__dict__['mydict']  # read instance storage directly
    def __setattr__(self,att,val):
        if att in self.mydict:
            self.mydict[att] = val
        else:
            super().__setattr__(att, val)
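To see the violation concretely, here is a self-contained probe (hypothetical names): under the default `__setattr__`, assigning to a read-only property raises `AttributeError`, but the flawed override silently writes into `mydict` instead because it consults instance storage first:

```python
class Broken:
    def __init__(self, mydict):
        self.__dict__['mydict'] = mydict

    @property
    def shadowed(self):  # a read-only data descriptor
        return 'descriptor value'

    def __setattr__(self, att, val):
        # flaw: instance storage (mydict) is consulted before descriptors
        if att in self.mydict:
            self.mydict[att] = val
        else:
            super().__setattr__(att, val)

obj = Broken({'shadowed': None})
try:
    obj.shadowed = 42
except AttributeError:
    result = 'descriptor consulted'   # what default behavior would do
else:
    result = 'descriptor bypassed'    # what actually happens here
print(result, obj.mydict)  # → descriptor bypassed {'shadowed': 42}
```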

The ideal answer will provide a general test that will succeed when __setattr__ has been overridden correctly, and fail otherwise.

2 Answers

Answer 1 (score 3):

In this case there's a simple solution: add a binding descriptor with a name that's in mydict, and test that assigning to that name goes through the descriptor (NB: this is Python 2.x code; I don't have a Python 3 install here):

class MyBindingDescriptor(object):
    def __init__(self, key):
        self.key = key

    def __get__(self, obj, cls=None):
        if obj is None:  # accessed on the class, not an instance
            return self
        return obj.__dict__[self.key]

    def __set__(self, obj, value):
        obj.__dict__[self.key] = value


sentinel = object()

class MyClass(object):
    test = MyBindingDescriptor("test")

    def __init__(self, mydict):
        self.__dict__['mydict'] = mydict
        self.__dict__["test"] = sentinel

    def __setattr__(self, att, val):
        if att in self.mydict:
            self.mydict[att] = val
        else:
            super(MyClass, self).__setattr__(att, val)


# first test our binding descriptor
instance1 = MyClass({})
# sanity check 
assert instance1.test is sentinel, "instance1.test should be sentinel, got '%s' instead" % instance1.test

# this one should pass ok
instance1.test = NotImplemented
assert instance1.test is NotImplemented, "instance1.test should be NotImplemented, got '%s' instead" % instance1.test

# now demonstrate that the current implementation is broken:
instance2 = MyClass({"test":42})
instance2.test = NotImplemented
assert instance2.test is NotImplemented, "instance2.test should be NotImplemented, got '%s' instead" % instance2.test
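For completeness, here is one possible fix (an assumption, not the only approach, and shown in Python 3 syntax) that makes the failing assert pass: walk the MRO and give data descriptors found on the type priority over mydict, as the default `object.__setattr__` would. The descriptor and sentinel are repeated so the snippet runs on its own:

```python
class MyBindingDescriptor:
    def __init__(self, key):
        self.key = key
    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        return obj.__dict__[self.key]
    def __set__(self, obj, value):
        obj.__dict__[self.key] = value

sentinel = object()

class FixedClass:
    test = MyBindingDescriptor("test")

    def __init__(self, mydict):
        self.__dict__['mydict'] = mydict
        self.__dict__['test'] = sentinel

    def __setattr__(self, att, val):
        # consult the class for a data descriptor first, as default
        # attribute setting would
        for klass in type(self).__mro__:
            if att in klass.__dict__ and hasattr(klass.__dict__[att], '__set__'):
                klass.__dict__[att].__set__(self, val)
                return
        if att in self.mydict:
            self.mydict[att] = val
        else:
            super().__setattr__(att, val)

# the previously failing test now passes
instance = FixedClass({"test": 42})
instance.test = NotImplemented
assert instance.test is NotImplemented
assert instance.mydict == {"test": 42}  # mydict left untouched
```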
Answer 2 (score 0):

If you define overriding __setattr__ correctly as calling the __setattr__ of the parent class, then you can graft your method into a class hierarchy that defines its own custom __setattr__:

def inject_tester_class(cls):
    def __setattr__(self, name, value):
        self._TesterClass__setattr_args.append((name, value))
        super(intermediate, self).__setattr__(name, value)
    def assertSetAttrDelegatedFor(self, name, value):
        assert \
            [args for args in self._TesterClass__setattr_args if args == (name, value)], \
            '__setattr__(%r, %r) was never delegated' % (name, value)
    body = {
        '__setattr__': __setattr__,
        'assertSetAttrDelegatedFor': assertSetAttrDelegatedFor,
        '_TesterClass__setattr_args': []
    }

    intermediate = type('TesterClass', cls.__bases__, body)
    testclass = type(cls.__name__, (intermediate,), vars(cls).copy())

    # rebind the __class__ closure
    def closure():
        testclass
    osa = testclass.__setattr__
    new_closure = tuple(closure.__closure__[0] if n == '__class__' else c
                        for n, c in zip(osa.__code__.co_freevars, osa.__closure__))
    testclass.__setattr__ = type(osa)(
        osa.__code__, osa.__globals__, osa.__name__,
        osa.__defaults__, new_closure)

    return testclass

This function jumps through a few hoops to insert an intermediate class that intercepts any properly delegated __setattr__ call. It works even when the class has no base other than the default object (object itself won't let us replace its __setattr__, which would have been an easier route to test this).

It does assume that you delegate with super().__setattr__() using the zero-argument form of super(), and that no metaclass is involved.

The extra __setattr__ is injected in a manner consistent with the existing MRO; the extra intermediate class is injected between the original class and the rest of the MRO, and delegates the __setattr__ call onwards.
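Some background on the closure rebinding step, in case it looks like magic: a method that uses zero-argument super() is compiled with an implicit `__class__` free variable, and that is the cell the function above swaps out. A quick probe (hypothetical class name) shows the cell:

```python
# A method using zero-argument super() carries a __class__ free variable
# pointing back at the class it was defined in.

class Demo:
    def __setattr__(self, name, value):
        super().__setattr__(name, value)

osa = Demo.__setattr__
print(osa.__code__.co_freevars)        # → ('__class__',)
cell = osa.__closure__[0]
print(cell.cell_contents is Demo)      # → True
```

Replacing that cell with one pointing at the injected subclass is what redirects the zero-argument super() call into the intermediate class.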

To use this in a test, you'd produce a new class with the above function, create an instance then set attributes on that instance:

MyTestClass = inject_tester_class(MyClass)
my_test_instance = MyTestClass({})  # pass whatever arguments MyClass.__init__ expects
my_test_instance.foo = 'bar'
my_test_instance.assertSetAttrDelegatedFor('foo', 'bar')

If setting foo is not delegated, an AssertionError exception is raised, which the unittest test runner records as a test failure.