Unclear descriptor caller reference evaluation


I am using Python descriptors to create complex interfaces on host objects.

I don't get the behaviour I would intuitively expect when I run code such as this:

class Accessor(object):
    def __get__(self,inst,instype):
        self._owner = inst
        return self

    def set(self,value):
        self._owner._val = value

    def get(self):
        if hasattr(self._owner,'_val'):
            return self._owner._val
        else: return None


class TestClass(object):
    acc = Accessor()

source = TestClass()
destination = TestClass()

source.acc.set('banana')
destination.acc.set('mango')

destination.acc.set(source.acc.get())
print(destination.acc.get())
# Result: mango

I would expect in this case for destination.acc.get() to return 'banana', not 'mango'.

However, the intention (to copy _val from 'source' to 'destination') works if the code is refactored like this:

val = source.acc.get()
destination.acc.set(val)
print(destination.acc.get())
# Result: banana

What is it that breaks the owner reference passed through __get__ when the descriptor calls are chained in a single line, versus broken into separate lines? Is there a way to get the behaviour I would intuitively expect?

Many thanks in advance.

K

2 Answers


Your implementation ALMOST works. The problem comes up with destination.acc.set(source.acc.get()). Python first looks up destination.acc, which sets _owner to destination, but before it can call set() it has to evaluate the argument, source.acc.get(), which sets the descriptor's _owner back to source.
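
Spelled out, the one-liner behaves roughly like this (a sketch of the evaluation order, not literal interpreter code):

method = destination.acc.set   # __get__ runs; _owner is now destination
arg = source.acc.get()         # __get__ runs again; _owner is now source
method(arg)                    # set() writes to _owner._val, i.e. source._val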

Since destination.acc and source.acc are the same object (descriptors are stored on the class, not the instance), you're calling set() on it after its _owner is set to source. That means you're setting source._val, not destination._val.
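
You can check the sharing directly (using your original Accessor):

source = TestClass()
destination = TestClass()

# Both lookups return the single Accessor instance stored on TestClass
print(source.acc is destination.acc)            # True
print(source.acc is TestClass.__dict__['acc'])  # True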

The way to get the behavior you would intuitively expect is to get rid of your get() and set() and replace them with __get__() and __set__(), so that your descriptor works the way descriptors are meant to be used.

class Accessor(object):
    def __get__(self, instance, owner):  # you should use the conventional parameter names
        if instance is None:
            return self  # accessed on the class itself, e.g. TestClass.acc
        else:
            return instance._val

    def __set__(self, instance, value):
        instance._val = value

Then you could rewrite your client code as

source = TestClass()
destination = TestClass()

source.acc = 'banana'
destination.acc = 'mango'

destination.acc = source.acc
print(destination.acc)
# Result: banana

The point of descriptors is to replace explicit getter and setter calls with implicit ones that look like simple attribute access. If you still want to use your getters and setters on Accessor, then don't make it a descriptor. Do this instead:

class Accessor(object):
    def get(self):
        if hasattr(self, '_val'):
            return self._val
        else:
            return None
    def set(self, val):
        self._val = val

Then rewrite TestClass to look more like this:

class TestClass(object):
    def __init__(self):
        self.acc = Accessor()

After that, your original client code would work.
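
For example (the same client code from the question, now with a per-instance Accessor):

source = TestClass()
destination = TestClass()

source.acc.set('banana')
destination.acc.set('mango')

destination.acc.set(source.acc.get())
print(destination.acc.get())
# Result: banana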


I already explained why it isn't working in my other answer, so here's a way to use a descriptor while still keeping your get() and set() methods.

class Accessor(object):
    def __get__(self, instance, owner):
        if instance is None:
            return self
        elif not hasattr(instance, '_val'):
            # Lazily attach a per-instance Acc the first time acc is accessed
            setattr(instance, '_val', Acc())
        return getattr(instance, '_val')

class Acc(object):
    def get(self):
        if hasattr(self, '_val'):
            return self._val
        else:
            return None
    def set(self, val):
        self._val = val

class TestClass(object):
    acc = Accessor()

The trick is to move the get() and set() methods to a new class that is returned instead of returning self from the descriptor.
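
With that in place, each TestClass instance lazily gets its own Acc, so the chained call no longer shares state between instances:

source = TestClass()
destination = TestClass()

print(source.acc is destination.acc)   # False -- separate Acc objects

source.acc.set('banana')
destination.acc.set('mango')

destination.acc.set(source.acc.get())
print(destination.acc.get())
# Result: banana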