Bewildered by Enumerable Min and Max extension methods behavior with uninitialized nullable types


Could someone shed some light on (or freely speculate about) why the LINQ Min and Max extension methods behave the way they do when dealing with uninitialized nullable types?

Perhaps the easiest way is to show what I have observed:

int? a = null;
int? b = 1;

Nullable.Compare(a, b);                            // -1, indicating null is smaller than 1
new int?[] { null, 1 }.OrderBy(x => x).First();    // null, null is smaller than 1

new[] { 0, 1 }.Min();                              // 0 as expected

new int[] { }.Min();                               // InvalidOperationException as expected

So far so good, but then ...

new int?[] { null, 1 }.Max();                       // 1 as expected
new int?[] { null, 1 }.Min();                       // 1 ????? why not null?

new int?[] { null, null }.Min();                    // null, why null now and not above?

new int?[] { }.Min();                               // null, and no exception???

I admit, I don't particularly care for Nullable types in the first place, but that's another story :-)

I do remain curious though about why it is implemented this way...

Cheers, Bart

1 Answer

You can decompile the Nullable source code, which results in this definition:

    public static int Compare<T>(Nullable<T> n1, Nullable<T> n2) where T : struct
    {
        if (n1.HasValue) {
            if (n2.HasValue) return Comparer<T>.Default.Compare(n1.value, n2.value);
            return 1;
        }
        if (n2.HasValue) return -1;
        return 0;
    }

It looks like Nullable.Compare intentionally returns -1 when n1 is null and n2 has a value; in other words, null is ordered before any actual value.
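
For what it's worth, Comparer<int?>.Default follows the same rule, and that is the comparer OrderBy falls back on when you don't pass one, which is why the OrderBy call in the question puts null first. A quick check (just a throwaway snippet of mine, not decompiled code):

    using System;
    using System.Collections.Generic;

    int? a = null;
    int? b = 1;

    Console.WriteLine(Nullable.Compare(a, b));                // -1
    Console.WriteLine(Comparer<int?>.Default.Compare(a, b));  // -1, the same ordering OrderBy uses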

Likewise, you can decompile the LINQ source code, which results in this definition of the nullable Min overload:

    public static int? Min(this IEnumerable<int?> source) {
        if (source == null) throw Error.ArgumentNull("source");
        int? value = null;
        foreach (int? x in source) {
            if (value == null || x < value)
                value = x;
        }
        return value;
    }

The loop only replaces value when value is still null or when x < value. The crucial detail is that the lifted < operator on int? returns false whenever either operand is null, so a null element can never displace a value that has already been stored; null only survives to the end when every element is null. The initial null is also what gets returned for an empty sequence, which is why the empty int?[] returns null instead of throwing like the non-nullable overload.
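
To see the lifted operator in action (the variable names here are just mine, mirroring the loop above):

    using System;

    int? value = 1;      // the value the loop has already stored
    int? x = null;       // a null element from the sequence

    Console.WriteLine(x < value);                  // False: lifted < is false when either side is null
    Console.WriteLine(x > value);                  // False as well
    Console.WriteLine(value == null || x < value); // False, so null never replaces 1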

As to why it does that, I assume it's because null represents the absence of a value rather than an integral value, and the people who wrote LINQ disagreed with the ordering that IComparable and Nullable.Compare had originally established. They probably found no normal execution path where null would be considered the best answer to "what is the minimum value of this list of nulls and ints".
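
If you actually want a Min that agrees with Nullable.Compare (null smaller than everything), you have to write it yourself. A minimal sketch, using an extension class of my own (MinWithNullSmallest is a made-up name; nothing like it ships in the framework):

    using System;
    using System.Collections.Generic;

    static class NullableMinExtensions   // hypothetical helper, not part of the BCL
    {
        // Same shape as the decompiled Min above, but orders with Nullable.Compare,
        // so null counts as smaller than any value.
        public static int? MinWithNullSmallest(this IEnumerable<int?> source)
        {
            if (source == null) throw new ArgumentNullException(nameof(source));
            bool seen = false;
            int? min = null;
            foreach (int? x in source)
            {
                if (!seen || Nullable.Compare(x, min) < 0)
                    min = x;
                seen = true;
            }
            return min;   // still null for an empty sequence, like the built-in overload
        }
    }

    // new int?[] { null, 1 }.MinWithNullSmallest()   // null, matching Nullable.Compare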