What I'm trying to achieve is to execute some Scala code, convert the resulting Scala RDD[Row] to a PySpark RDD of Rows, perform some Python operations on it, and then convert the RDD of PySpark Rows back to a Scala RDD[Row]. To get the RDD over to PySpark I do the following. In Scala I have this method:
import org.apache.spark.api.java.JavaRDD
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row
import org.apache.spark.sql.execution.python.EvaluatePython.{javaToPython, toJava}

def toPythonRDD(rdd: RDD[Row]): JavaRDD[Array[Byte]] = {
  javaToPython(rdd.map(r => toJava(r, r.schema)))  // pickle each Row so Python can unpickle it into pyspark Rows
}
Later, in PySpark, I create a new RDD by calling
RDD(jrdd, sc, BatchedSerializer(PickleSerializer()))
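For completeness, the Python side in full looks roughly like this (just a sketch: how I obtain jrdd through the py4j gateway depends on my own setup and isn't shown):

from pyspark.rdd import RDD
from pyspark.serializers import BatchedSerializer, PickleSerializer

# jrdd is the JavaRDD[Array[Byte]] returned by toPythonRDD above,
# reached through sc._jvm / the py4j gateway (setup-specific, not shown)
rdd = RDD(jrdd, sc, BatchedSerializer(PickleSerializer()))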
This gives me an RDD of PySpark Rows. I'd like to reverse that process. I can easily get the underlying JavaRDD[Array[Byte]] by accessing rdd._jrdd; my main problem is that I don't know how to convert/unpickle it back to an RDD[Row]. I've tried
sc._jvm.SerDe.pythonToJava(rdd._to_java_object_rdd(), True)
and
sc._jvm.SerDe.pythonToJava(rdd._jrdd, True)
Both calls crash with the same exception:
net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row)
I know that I can easily pass a DataFrame back and forth between Scala and Python, but my records don't have a uniform schema. I'm using an RDD of Rows because I thought there would already be a pickler I could reuse, and it does work, but so far only in one direction.
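To make the goal a bit more concrete, here is a rough sketch of the kind of workaround I've been considering (purely an idea, not something I've verified): convert each Row to a plain tuple on the Python side, so Pyrolite never has to reconstruct pyspark.sql.types._create_row, and then rebuild the Row objects on the Scala side, e.g. with Row.fromSeq. The Scala half is exactly the part I'm missing.

# sketch only: strip the Row wrapper so the pickled payload contains plain tuples
plain = rdd.map(tuple)                  # Row -> tuple of values (field names are lost)
jobj_rdd = plain._to_java_object_rdd()  # JavaRDD of unpickled Java objects (private API)
# jobj_rdd would still have to be mapped back to Row on the Scala side,
# which is the step I don't know how to do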