I am trying to use Redis as a data source for Spark SQL, but I got stuck on how to transform the RDD into a DataFrame. Below is my code:
```java
import java.util.ArrayList;
import java.util.List;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.rdd.RDD;
import org.apache.spark.sql.*;
import org.apache.spark.sql.types.*;
import scala.Tuple2;

// rc is a spark-redis RedisContext; read all keys matching "user:*"
RDD<Tuple2<String, String>> rdd1 = rc.fromRedisKV("user:*", 3, redisConfig);

// Each Redis value is a CSV string: "name,sex,age"
JavaRDD<Row> userRDD = rdd1.toJavaRDD().map(new Function<Tuple2<String, String>, Row>() {
    public Row call(Tuple2<String, String> tuple2) throws Exception {
        String[] fields = tuple2._2().split(",");
        // age must be an Integer to match the IntegerType column declared below
        return RowFactory.create(fields[0], fields[1], Integer.parseInt(fields[2]));
    }
});

List<StructField> structFields = new ArrayList<>();
structFields.add(DataTypes.createStructField("name", DataTypes.StringType, true));
structFields.add(DataTypes.createStructField("sex", DataTypes.StringType, false));
structFields.add(DataTypes.createStructField("age", DataTypes.IntegerType, false));
StructType structType = DataTypes.createStructType(structFields);

Dataset<Row> ds = spark.createDataFrame(userRDD, structType);
ds.createOrReplaceTempView("user");
ds.printSchema();

String sql = "select name, sex, age from user";
List<Row> list2 = spark.sql(sql).collectAsList();
```
I got the following exception:
```
Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
```
I have no idea what to do next. Please help!
I finally found the reason: there was nothing wrong with the transformation code itself. This ClassCastException shows up when the executors deserialize a task but cannot find the application's own classes on their classpath, so I needed to ship my application jar to the Spark cluster rather than have it only on the driver.
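For anyone hitting the same error, here is a minimal sketch of one way to ship the jar when building the SparkSession; the app name, master URL, and jar path are placeholders I made up, so substitute your own:

```java
import org.apache.spark.sql.SparkSession;

// spark.jars distributes the listed jars to the driver and executor
// classpaths, which is what resolves the ClassCastException above.
// The master URL and jar path below are placeholders.
SparkSession spark = SparkSession.builder()
        .appName("RedisSparkSQL")
        .master("spark://master-host:7077")
        .config("spark.jars", "/path/to/my-app.jar")
        .getOrCreate();
```

Submitting the application through spark-submit accomplishes the same thing, since the jar passed to spark-submit is shipped to the executors automatically.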