Difference between Variance, Covariance, Contravariance, Bivariance and Invariance in TypeScript


Could you please explain using small and simple TypeScript examples what is Variance, Covariance, Contravariance, Bivariance and Invariance?


Two answers follow.

Accepted answer:

Variance has to do with how a generic type F<T> varies with respect to its type parameter T. If you know that T extends U, then variance will tell you whether you can conclude that F<T> extends F<U>, conclude that F<U> extends F<T>, or neither, or both.


Covariance means that F<T> and T co-vary. That is, F<T> varies with (in the same direction as) T. In other words, if T extends U, then F<T> extends F<U>. Example:

  • Function or method types co-vary with their return types:

    type Co<V> = () => V;
    function covariance<U, T extends U>(t: T, u: U, coT: Co<T>, coU: Co<U>) {
      u = t; // okay
      t = u; // error!
    
      coU = coT; // okay
      coT = coU; // error!
    }
    

Other (un-illustrated for now) examples are:

  • objects are covariant in their property types, even though this is not sound for mutable properties
  • class constructors are covariant in their instance types
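Both cases can be sketched in the same style as the example above (the `Box` and `Ctor` names here are invented for illustration; the error directions are left commented out):

```typescript
// Objects are covariant in their property types:
type Box<V> = { value: V };
function propCovariance<U, T extends U>(boxT: Box<T>, boxU: Box<U>) {
  boxU = boxT; // okay: Box<T> is assignable to Box<U>
  // boxT = boxU; // error!
}

// Class constructors are covariant in their instance types:
type Ctor<V> = new () => V;
function ctorCovariance<U, T extends U>(ctorT: Ctor<T>, ctorU: Ctor<U>) {
  ctorU = ctorT; // okay: Ctor<T> is assignable to Ctor<U>
  // ctorT = ctorU; // error!
}
```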

Contravariance means that F<T> and T contra-vary. That is, F<T> varies counter to (in the opposite direction from) T. In other words, if T extends U, then F<U> extends F<T>. Example:

  • Function types contra-vary with their parameter types (with --strictFunctionTypes enabled):

    type Contra<V> = (v: V) => void;
    function contravariance<U, T extends U>(t: T, u: U, contraT: Contra<T>, contraU: Contra<U>) {
      u = t; // okay
      t = u; // error!
    
      contraU = contraT; // error!
      contraT = contraU; // okay
    }
    

Other (un-illustrated for now) examples are:

  • objects are contravariant in their key types
  • class constructors are contravariant in their construct parameter types
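These two can also be sketched; the `Keyed`, `MakeFrom`, and `FromPerson` names are invented for this illustration, and the constructor case assumes --strictFunctionTypes:

```typescript
// Objects are contravariant in their key types:
type Keyed<V extends PropertyKey> = Record<V, boolean>;

let wide: Keyed<"a" | "b"> = { a: true, b: false };
let narrow: Keyed<"a"> = wide; // okay: "a" extends "a" | "b", yet Keyed<"a" | "b"> extends Keyed<"a">
// wide = narrow; // error!

// Class constructors are contravariant in their construct parameter types:
type Person = { name: string };
type Student = { name: string; graduationYear: number };

type MakeFrom<V> = new (v: V) => { label: string };

class FromPerson {
  label: string;
  constructor(p: Person) {
    this.label = p.name;
  }
}

// A constructor accepting the wider Person works where one accepting
// the narrower Student is expected:
const makeFromStudent: MakeFrom<Student> = FromPerson; // okay
// ...but not the other way around: a class whose constructor requires a
// Student cannot implement MakeFrom<Person>. // error!
```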

Invariance means that F<T> neither varies with nor against T: F<T> is neither covariant nor contravariant in T. This is actually what happens in the most general case. Covariance and contravariance are "fragile" in that when you combine covariant and contravariant type functions, it's easy to produce invariant results. Example:

  • Function types that return the same type as their parameter neither co-vary nor contra-vary in that type:

    type In<V> = (v: V) => V;
    function invariance<U, T extends U>(t: T, u: U, inT: In<T>, inU: In<U>) {
      u = t; // okay
      t = u; // error!
    
      inU = inT; // error!
      inT = inU; // error!
    }
    

Bivariance means that F<T> varies both with and against T: F<T> is both covariant and contravariant in T. In a sound type system, this essentially never happens for any non-trivial type function. You can demonstrate that only a constant type function like type F<T> = string is truly bivariant (quick sketch: T extends unknown is true for all T, so F<T> extends F<unknown> and F<unknown> extends F<T>, and in a sound type system if A extends B and B extends A, then A is the same as B. So if F<T> = F<unknown> for all T, then F<T> is constant).
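That constant case can be seen directly (`Const` is a made-up name; both assignments compile because `Const<V>` ignores its type parameter entirely):

```typescript
type Const<V> = string; // ignores V entirely

function constBivariance<U, T extends U>(cT: Const<T>, cU: Const<U>): string {
  cU = cT; // okay
  cT = cU; // okay: both directions hold, so Const is (trivially) bivariant
  return cT;
}
```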

But TypeScript neither has, nor intends to have, a fully sound type system. And there is one notable case where TypeScript treats a type function as bivariant:

  • Method types both co-vary and contra-vary with their parameter types (this also happens with all function types with --strictFunctionTypes disabled):

    type Bi<V> = { foo(v: V): void };
    function bivariance<U, T extends U>(t: T, u: U, biT: Bi<T>, biU: Bi<U>) {
      u = t; // okay
      t = u; // error!
    
      biU = biT; // okay
      biT = biU; // okay
    }
    


Another answer:

A.S.: jcalz's answer is great from the technical perspective. I'd like to add some intuition to it.

When is variance relevant?

The concept of variance becomes useful when you are dealing with two types that are neither exactly identical to each other, nor completely unrelated to each other.

Here's why.

In this example, both value1 and value2 are of the same type — number, and so one is always assignable to the other and vice versa, without any errors:

declare let value1: number // for example: 42, or -3, or NaN, etc.
declare let value2: number // for example: 17, or Math.PI, or Infinity, etc.

value1 = value2 // no error
value2 = value1 // no error

Conversely, here value1 is a number, while value2 is a string. These types are as unrelated to each other as they can possibly be, and so assigning one to the other is always an error:

declare let value1: number // for example: 42, or 17, etc.
declare let value2: string // for example: "Hello world", "Lorem ipsum", etc.

value1 = value2 // Error!
value2 = value1 // Error!

In both of these cases, regardless of the assignment direction (value2 to value1 or vice versa), the result is always the same: always no error in the first case, and always an error in the second. That's why variance isn't at play here.

Variance becomes relevant when the assignability direction matters

When you have two types that are somewhat similar but not identical, that's when the order of assignment usually matters: assigning one value to the other is fine, but it doesn't work in reverse.

Let's see an example.

Here, the type Person only has one property name, while the type Student defines both name and graduationYear. That is, Student contains everything from Person, while Person covers Student only partially. Assigning a person to a student is an error, because a student is expected to have a graduation year, but Person only has name. However, assigning a student to a person works just fine, since a student, just like any person, has a name (regardless of whether it has any other properties):

type Person = { name: string }
type Student = { name: string, graduationYear: number }

declare let person: Person // for example: { name: 'Mike' }
declare let student: Student // for example: { name: 'Sofia', graduationYear: 2020 }

person = student // no error
student = person // Error!

This is not variance yet, though.

What is variance?

Variance is a measure of how the assignability between instances of a given generic correlates with the assignability between instances of its type parameters.

Okay, let's unpack that.

A generic is a type that is defined through another type. A good example of a generic is Array: Array<number> is not the same as Array<string>, even though both are arrays. The number in Array<number> and the string in Array<string> are the type arguments: the types supplied for the type parameter of the generic type Array<…>.

Any value can be an item of an array (even another array), which means that Array does not impose any constraints on its type argument: for any Value, the expression Array<Value> is a perfectly valid, usable type. The question is: if I have Array<A> and Array<B>, which can be assigned to which?

That's variance.

Given the assignability between A and B, variance specifies the assignability between F<A> and F<B> (where F<…> is a generic).
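For Array specifically, TypeScript's choice is covariance (which is unsound once mutation is involved). A small sketch, with Person and Student repeated so the snippet stands alone:

```typescript
type Person = { name: string };
type Student = { name: string; graduationYear: number };

const students: Array<Student> = [{ name: "Sofia", graduationYear: 2020 }];

// TypeScript treats Array as covariant in its element type:
const people: Array<Person> = students; // okay: every Student is a Person
// const back: Array<Student> = people; // error!

// Note the unsoundness: people.push({ name: "Mike" }) would sneak a
// graduation-year-less Person into the students array.
```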


Since assignability has direction, there are four possible situations:

  • wrapping a type in a generic keeps the assignability direction as it is ("covariance"; the generic varies in the same direction, i.e., it co-varies with its type argument);
  • wrapping a type in a generic reverses the direction ("contra-variance"; the generic varies in the opposite direction, i.e., it contra-varies with its type argument);
  • wrapping a type in a generic disallows both directions ("invariance"); and
  • wrapping a type in a generic allows both directions ("bivariance").
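The four situations can be condensed into one sketch using function types, in the same spirit as the accepted answer's examples (all names here are illustrative; the contravariant case assumes --strictFunctionTypes):

```typescript
type Person = { name: string };
type Student = { name: string; graduationYear: number };

type Co<V> = () => V;            // covariant: keeps the direction of V
type Contra<V> = (v: V) => void; // contravariant: reverses it
type Inv<V> = (v: V) => V;       // invariant: allows neither direction
type Bi<V> = { m(v: V): void };  // bivariant: allows both (method syntax)

// Student is assignable to Person, so:
const getStudent: Co<Student> = () => ({ name: "Sofia", graduationYear: 2020 });
const getPerson: Co<Person> = getStudent; // okay (same direction)

const greetPerson: Contra<Person> = (p) => void p.name;
const greetStudent: Contra<Student> = greetPerson; // okay (reversed direction)

// const g: Co<Student> = getPerson;       // error!
// const h: Contra<Person> = greetStudent; // error!
```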

It may be helpful (if not slightly creepy) to compare this to blood types: type A contains anti-B antibodies; type B contains anti-A antibodies; type AB contains no antibodies; finally, type O contains both antibodies.