Scala variance positions - theory behind it?


Scala has a notion of "variance position" and fancy rules around it, especially when variance is combined with method type bounds. The rules ensure type safety; one can read them in the Scala language spec, or see them summarized in "Programming in Scala", 5th edition:

To classify the positions, the compiler starts from the declaration of a type parameter and then moves inward through deeper nesting levels. Positions at the top level of the declaring class are classified as positive. By default, positions at deeper nesting levels are classified the same as that at enclosing levels, but there are a handful of exceptions where the classification changes. Method value parameter positions are classified to the flipped classification relative to positions outside the method, where the flip of a positive classification is negative, the flip of a negative classification is positive, and the flip of a neutral classification is still neutral.

Here we see some sort of "variance positions algebra". I understand the flipping and nesting from experimenting in code, so my question is different.
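For context, the "flip" at method value parameter positions mirrors the standard subtyping rule for function types: parameter types are contravariant, result types covariant. A minimal language-agnostic sketch in Python (the `Animal`/`Dog` hierarchy is an illustrative assumption, not from the question):

```python
class Animal:
    def sound(self) -> str:
        return "..."

class Dog(Animal):
    def sound(self) -> str:
        return "woof"

def use_handler(f) -> str:
    # f is expected to have type Dog -> Animal
    return f(Dog()).sound()

# A function of type Animal -> Dog is a safe substitute for
# Dog -> Animal: it accepts any Animal (so certainly a Dog) and
# returns a Dog (which is an Animal). Parameter positions flip
# (contravariant), result positions keep their classification
# (covariant) -- the same flip the compiler applies to method
# value parameters.
def animal_to_dog(a: Animal) -> Dog:
    return Dog()

result = use_handler(animal_to_dog)  # works: "woof"
```

This is the semantic fact the position-classification rules encode syntactically: a type parameter that ends up (after all flips) in a negative position may only be contravariant, and one in a positive position only covariant.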

Question:

Imagine I want to create my own language that supports variance (as rich as Scala's, or as rudimentary as Java's wildcard generics). Which theory (type theory?), and which part of that theory, do I need to grasp to understand all the mechanics behind this "variance positions algebra"?

I am looking for more abstract, general knowledge (not just the list of rules in the Scala language spec) that would let me see particular languages' variance implementations as special cases.
