I have been using the is operator liberally in projects with nullable reference types:
// Works just fine
public CheckClass Foo(CheckClass? arg)
{
    return arg is CheckClass
        ? arg
        : new();
}
However, this produces a compile-time error:
// Cannot convert 'int?' to 'int'
public int Bar(int? arg)
{
    return arg is int
        ? arg
        : 0;
}
It's easy enough to fix by using the .Value property on arg. And as I understand it, nullable reference types are a "soft" compile-time verification, while Nullable<T> is a "hard" generic struct.
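For instance, that fix can look like this (one way of applying .Value):

public int Bar(int? arg)
{
    // arg.Value is safe here because 'arg is int' is only true when arg has a value
    return arg is int
        ? arg.Value
        : 0;
}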
Still, it's a bit galling, and seems strange since Nullable<T> gets preferential treatment in a lot of cases. The is expression is perfectly capable of checking for the existence of a value, for one thing, and it can produce a non-nullable value of the underlying type if it is given a name.
// Noooo problem
public int Bar(int? arg)
{
    return arg is int val
        ? val
        : 0;
}
Was this behavior explicitly chosen, or is it an oversight? If it's the former, what might be the reasoning behind it?
Basically yes, it was explicitly chosen. The design team decided not to represent nullable reference types (introduced in C# 8) as a separate type, as was done for nullable value types (introduced in .NET Framework 2.0) with Nullable<T>. So from the compiler's point of view, a nullable reference type is basically a bunch of metainformation analyzed at compile time, while nullable value types require actual type conversions. This leads to the observed behavior, which can be reproduced more minimally.
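For example (a minimal sketch of that difference in conversions):

int? nullableInt = null;
int plainInt = nullableInt;          // error CS0266: cannot implicitly convert 'int?' to 'int'

string? nullableString = null;
string plainString = nullableString; // compiles, only a nullability warning (CS8600)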
This is obviously not ideal and leads to several other problems, like the handling of unconstrained generics with nullable types.
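A sketch of that generic problem (the method name here is just for illustration): with an unconstrained type parameter, T? does not turn into Nullable<T>, so for value-type arguments the "missing" case collapses to default(T).

// With an unconstrained T, "T?" does not mean Nullable<T>
public static T? FirstOrNull<T>(T[] items)
{
    // For T = string the "missing" result is null,
    // but for T = int it is default(int) == 0, not a null int?
    return items.Length > 0 ? items[0] : default;
}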
As for is int val producing a non-nullable value: yes, because the compiler recognizes it as basically a null check and transforms it into roughly the following lowered form.
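A sketch of that lowering (the actual compiler-generated code may be shaped slightly differently):

public int Bar(Nullable<int> arg)
{
    int valueOrDefault = arg.GetValueOrDefault();
    if (arg.HasValue)
    {
        return valueOrDefault;
    }
    return 0;
}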
For more info, see the Type testing with pattern matching and Pattern matching docs.
Notes:
- The conditional operator needs a common type for its two branches, int? (arg) and int (0), so int? is inferred, hence the compilation error.
- The Bar method is basically Nullable<T>.GetValueOrDefault, consider using it instead, or just use the null-coalescing operator - arg ?? 0 (see the sketch after these notes).
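A quick sketch of both suggestions (the second method name is just for illustration):

public int Bar(int? arg) => arg.GetValueOrDefault(); // 0 when arg is null

public int Bar2(int? arg) => arg ?? 0;               // equivalent, via null-coalescing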