I got the error log below while submitting a PySpark job on Dataproc that generates recommendations.
```
18/09/15 06:11:36 INFO org.spark_project.jetty.server.Server: jetty-9.3.z-SNAPSHOT
18/09/15 06:11:36 INFO org.spark_project.jetty.server.Server: Started @3317ms
18/09/15 06:11:37 INFO org.spark_project.jetty.server.AbstractConnector: Started ServerConnector@6322b8bd{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
18/09/15 06:11:37 INFO com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase: GHFS version: 1.6.8-hadoop2
18/09/15 06:11:38 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-d21a-m/10.128.0.4:8032
18/09/15 06:11:41 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1536988234373_0004
18/09/15 06:11:46 WARN org.apache.spark.SparkContext: Spark is not running in local mode, therefore the checkpoint directory must not be on the local filesystem. Directory 'checkpoint/' appears to be on the local filesystem.
...
Traceback (most recent call last):
  File "/tmp/job-614e830d/train_and_apply.py", line 50, in <module>
    model = ALS.train(dfRates.rdd, 20, 20) # you could tune these numbers, but these are reasonable choices
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/mllib/recommendation.py", line 272, in train
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/mllib/recommendation.py", line 229, in _prepare
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1364, in first
ValueError: RDD is empty
18/09/15 06:11:53 INFO org.spark_project.jetty.server.AbstractConnector: Stopped Spark@6322b8bd{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
```
Any suggestions?