I have a weird situation where changing T.self inside a generic function to T.self as T.Type changes the semantics of the code:
class Foo {
required init() {}
}
class Bar : Foo {
}
func f<T: Foo>(_:T) -> T {
return T.self()
}
println(f(Bar())) // prints MyProject.Foo
but
class Foo {
required init() {}
}
class Bar : Foo {
}
func f<T: Foo>(_:T) -> T {
return (T.self as T.Type)()
}
println(f(Bar())) // prints MyProject.Bar
This doesn't make sense. The code uses T.self to create an instance of the class of T. Although T could be inferred to either Foo or Bar in the call to f, I would expect it to be inferred to the same thing in both cases, since inference of the type argument should depend only on the signature and the calling code, and those are identical in both cases.
T.self should already be of type T.Type, so casting it should be a no-op (in fact, the compiler shouldn't even allow the cast, since it should always be true). Yet by performing this cast, I seem to be changing the class that I am calling the initializer on. Casting an object should not alter the value of the object if it succeeds, so this is really weird.
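For comparison, in current Swift the dynamic-type instantiation this code is reaching for is spelled with type(of:) and an explicit .init. This is a minimal sketch in modern syntax (the function name g is my own, not from the original code), not the Swift 1.x code above:

class Foo {
    required init() {}
}
class Bar: Foo {
}
// Instantiate the *dynamic* type of the argument; init() must be
// `required` so every subclass is guaranteed to provide it.
func g<T: Foo>(_ x: T) -> T {
    return type(of: x).init()
}
print(g(Bar())) // prints MyProject.Bar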
There's a thread on this in the dev forums right now; the most relevant bits basically point to this being a result of Swift's parametric type system.
There's a lot more exploration and explanation in that thread.
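As a rough illustration of the parametric point (my own sketch in current syntax, not code from the thread): the type parameter T is bound from the caller's static types, not from an argument's dynamic type, so the same dynamic value can bind T differently:

class Foo {
    required init() {}
}
class Bar: Foo {
}
// Report which type T was bound to for this call.
func name<T: Foo>(_ x: T) -> String {
    return String(describing: T.self)
}
let upcast: Foo = Bar()
print(name(upcast)) // prints Foo — T is bound to the static type Foo
print(name(Bar()))  // prints Bar — here the argument's static type is Bar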