Dealing with immutable disposable objects


Given a disposable, immutable class that holds something potentially large, where you don't know whether disposing the object twice causes a side effect or an exception (and you don't own the code, so you can't change it to account for this), what is the best approach to chained transformations?

Using a Bitmap as an example:

        public static Bitmap Transform(Bitmap src, RotateFlipType rotate = RotateFlipType.RotateNoneFlipNone, 
                double scale = 0, int pad = 0, int alterGradient = 0)
        {
            using Bitmap rotated = src.Rotate(rotate);
            using Bitmap scaled = MyImageUtils.ScaleBitmap(rotated, scale);
            using Bitmap padded = MyImageUtils.PaddBitmap(scaled, pad);
            //The owner is the caller
            Bitmap result = MyImageUtils.Gradient(padded, alterGradient);
            return result;
        }

If a transformation genuinely produces a new bitmap, spending that memory makes sense; but when the transformation has no effect (RotateNoneFlipNone, scale = 0 or pad = 0), creating a new bitmap is wasted work. I find myself creating clones on every transformation just so each step returns a new disposable object, instead of returning the same input object.

The same situation would apply, for example, to a Date type if it were disposable and you needed to perform n operations, some of which have no effect depending on the input parameters (add zero days).

The point is that some operations have no effect depending on the input parameters. Even so, it is easier to always create a new object than to keep track of which using statement is the first owner of the object, which requires knowing beforehand whether the API will really create a different instance or just a copy.
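One lightweight convention for exactly this (a sketch, not from the question; the helper name is mine) is to let each transformation return its input unchanged for no-op parameters, and have the caller dispose an intermediate only when it is a distinct instance, checked with reference equality:

```csharp
using System;

public static class DisposeHelper
{
    //Dispose `candidate` only if it is a different instance than `original`.
    //This lets a transformation return its input unchanged for no-op
    //parameters without risking a double dispose in the caller.
    public static void DisposeUnlessSame(IDisposable candidate, IDisposable original)
    {
        if (!ReferenceEquals(candidate, original))
            candidate.Dispose();
    }
}
```

A chain would then dispose `rotated` only when it differs from `src`, and so on. The trade-off is that every caller must follow the convention, which is exactly the "beforehand knowledge" burden described above.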

  • Is there a pattern for this kind of situation?
  • Does using keep track of whether the object reference it holds belongs to another using statement, so that it won't be disposed twice (or throw an ObjectDisposedException)?
  • Is having a new object every time the safest approach even if it takes more computation and memory? (It looks like the most readable from my point of view)

An option that came to mind would be a disposable wrapper class that ensures the object it holds is not disposed twice. But that means I need to know beforehand whether the transformation has zero effect (so I don't call it), or the transformation functions need to know about the wrapper mechanism. Something like:

    public class DisposableOnce<T> : IDisposable
        where T : IDisposable
    {
        private bool disposedValue;

        public delegate void DisposedDelegate(EventArgs e);
        public event DisposedDelegate? OnDisposed;

        public T Value { get; }
        private readonly DisposableOnce<T>? Other;
        public DisposableOnce(T value)
        {
            Value = value;
        }

        public DisposableOnce(DisposableOnce<T> disposableOther)
        {
            Value = disposableOther.Value;
            Other = disposableOther;
            Other.OnDisposed += OnRefDisposed;
        }

        private void OnRefDisposed(EventArgs e)
        {
            SetDisposed();
        }

        public void SetDisposed()
        {
            disposedValue = true;
            try
            {
                //EventArgs.Empty avoids allocating a new instance per call
                OnDisposed?.Invoke(EventArgs.Empty);
            }
            catch
            {
                //Swallow the exception to avoid propagation? For now, rethrow
                //with `throw;` so the original stack trace is preserved
                throw;
            }
        }

        protected virtual void Dispose(bool disposing)
        {
            if (!disposedValue)
            {
                if (disposing)
                {
                    Value.Dispose();
                    if (Other != null)
                    {
                        //Not listening you anymore
                        Other.OnDisposed -= OnRefDisposed;
                    }
                }
                SetDisposed();
            }
        }

        public void Dispose()
        {
            Dispose(disposing: true);
            GC.SuppressFinalize(this);
        }
    }

And it would be used like:

        public static Bitmap Transform(Bitmap src, RotateFlipType rotate = RotateFlipType.RotateNoneFlipNone,
                double scale = 0, int pad = 0, int alterGradient = 0)
        {
            using DisposableOnce<Bitmap> rotated = new DisposableOnce<Bitmap>(src.Rotate(rotate));
            using DisposableOnce<Bitmap> scaled = scale == 0
                ? new DisposableOnce<Bitmap>(rotated)
                : new DisposableOnce<Bitmap>(MyImageUtils.ScaleBitmap(rotated.Value, scale));
            using DisposableOnce<Bitmap> padded = pad == 0
                ? new DisposableOnce<Bitmap>(scaled)
                : new DisposableOnce<Bitmap>(MyImageUtils.PaddBitmap(scaled.Value, pad));
            Bitmap result;
            if (alterGradient == 0)
            {
                //Avoid the value being disposed by the wrapper relatives
                padded.SetDisposed();
                result = padded.Value;
            }
            else
            {
                result = MyImageUtils.Gradient(padded.Value, alterGradient);
            }
            return result;
        }

This is much bigger and more confusing, and it requires far more knowledge of each of the transform functions (plus a long list of other reasons not to do it).

My best guess is to stay with the initial Transform unless there is a real performance issue, but I wonder whether an elegant solution exists for:

  • a function that sometimes returns a new instance and sometimes returns the given IDisposable input parameter itself,
  • instead of always returning a new instance just to avoid disposing twice.
Accepted Answer

I would argue that disposable objects are not truly immutable. If they were, there would be no issue with returning either the original object or a new one. Since the object is not immutable, I would argue you should always do the same thing, and never try to optimize by returning the original object.

Is there a pattern for this kind of situation?

There are a few patterns for reducing the impact of creating new objects:

  1. Modify objects in place - This is probably the simplest option with the least overhead. But it can make the code harder to understand, and it may not always be applicable.
  2. Object pooling - Keep a list of images that can be reused to avoid the overhead of creating the object. This could allow for "popsicle immutability" where you modify an object and then freeze it, only unfreezing it when it is returned to the pool.
  3. Let the caller supply the object to write the result to - This moves the responsibility of object allocation from the method to the caller, and the caller should be in a better position to decide the optimal strategy.
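Option 3 could be sketched roughly like this, assuming System.Drawing; the method name and signature are illustrative, not an existing API:

```csharp
using System.Drawing;

public static class ImageOps
{
    //The caller allocates (or reuses) `destination`, so it decides when a
    //new bitmap is worth the memory and owns its lifetime.
    public static void RotateInto(Bitmap source, Bitmap destination, RotateFlipType rotate)
    {
        using (Graphics g = Graphics.FromImage(destination))
        {
            g.DrawImage(source, 0, 0, destination.Width, destination.Height);
        }
        //RotateFlip mutates the bitmap in place
        destination.RotateFlip(rotate);
    }
}
```

With this shape, a no-op step is simply skipped by the caller, and no ownership question arises because the caller held the destination all along.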

Does using keep track of whether the object reference it holds belongs to another using statement, so that it won't be disposed twice (or throw an ObjectDisposedException)?

No, using is just shorthand for try { ... } finally { myObject.Dispose(); }. However, well-designed objects should not care if they are disposed twice. If some third-party object does not follow this practice, using a wrapper might be reasonable. But most objects should not need such wrappers, and I would make such a wrapper much simpler than you propose, since it only needs a flag indicating whether it has already been disposed.
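Such a minimal wrapper could look like this (a sketch; a single boolean flag guards against a second Dispose call, with no events or linked wrappers):

```csharp
using System;

public sealed class DisposeOnce<T> : IDisposable
    where T : IDisposable
{
    private bool _disposed;

    public T Value { get; }

    public DisposeOnce(T value) => Value = value;

    public void Dispose()
    {
        if (_disposed)
            return;       //second and later calls are no-ops
        _disposed = true;
        Value.Dispose();  //forwarded exactly once
    }
}
```

Because the class is sealed and holds no unmanaged resources itself, the full Dispose(bool)/finalizer pattern is unnecessary here.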

Is having a new object every time the safest approach even if it takes more computation and memory? (It looks like the most readable from my point of view)

I consider immutable objects easier to use and understand, and that requires creating new objects instead of mutating them. It does have a performance impact, and whether that impact is significant depends on the specific application. In areas like image processing, it is fairly common for ease of use to take lower priority than performance.
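For completeness, option 2 (object pooling) could be sketched roughly like this; BitmapPool is a hypothetical type, not an existing API:

```csharp
using System.Collections.Concurrent;
using System.Drawing;

public sealed class BitmapPool
{
    private readonly ConcurrentBag<Bitmap> _pool = new ConcurrentBag<Bitmap>();

    //Reuse a pooled bitmap of the requested size, or allocate a new one.
    public Bitmap Rent(int width, int height)
    {
        while (_pool.TryTake(out Bitmap bmp))
        {
            if (bmp.Width == width && bmp.Height == height)
                return bmp;
            bmp.Dispose(); //wrong size, discard
        }
        return new Bitmap(width, height);
    }

    //Return instead of disposing, so the memory can be reused.
    public void Return(Bitmap bmp) => _pool.Add(bmp);
}
```

Callers rent intermediates and return them when done, which replaces the dispose-ownership question with a rent/return convention.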