What is the need for conversion of JVM objects to internal Spark SQL representation?


From the documentation:

Used to convert a JVM object of type T to and from the internal Spark SQL representation.

I am a Python developer, and the whole concept of encoders seems alien to me. I have three questions:

  1. By "JVM object", does it mean the JVM object's binary representation?
  2. By "JVM object", does it mean the Tungsten binary format, which is lightweight and efficient?
  3. Why does Spark need to convert a JVM object of type T to/from the internal Spark SQL representation? Is there any advantage in doing so?
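To make question 3 concrete for myself, here is a rough Python analogy of what I think the conversion buys (the `Person` class and field sizes are just illustrative, not Spark's actual layout): a generic object serializer stores class metadata and field names alongside the values, whereas a fixed binary layout, similar in spirit to Tungsten's `UnsafeRow`, stores only the raw field bytes.

```python
import pickle
import struct

# Hypothetical record standing in for a JVM object of type T in a Dataset[T].
class Person:
    def __init__(self, id, age):
        self.id = id
        self.age = age

p = Person(1, 42)

# Generic object serialization (like Java serialization on the JVM):
# stores class metadata and attribute names alongside the values.
pickled = pickle.dumps(p)

# A Tungsten-style fixed layout: just the raw field bytes
# (8 bytes for a long, 4 bytes for an int), loosely analogous to UnsafeRow.
packed = struct.pack("<qi", p.id, p.age)

print(len(pickled))  # dozens of bytes: metadata plus values
print(len(packed))   # 12 bytes: values only
```

Is this the right intuition, i.e. that the compact fixed layout is what lets Spark compare, hash, and sort rows without fully deserializing them?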
