How are writes managed in Spark with speculation enabled?


Let's say I have a Spark 2.x application, which has speculation enabled (spark.speculation=true), which writes data to a specific location on HDFS.
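
For context, here is a minimal sketch of such an application in Scala; the application name, HDFS path, and dataset below are placeholders rather than details from the original job:

```scala
import org.apache.spark.sql.SparkSession

object SpeculativeWriteExample {
  def main(args: Array[String]): Unit = {
    // Enable speculative execution: slow task attempts are re-launched on other executors.
    val spark = SparkSession.builder()
      .appName("speculative-write-example") // placeholder name
      .config("spark.speculation", "true")
      .getOrCreate()

    // Placeholder dataset; any long-running write to HDFS exhibits the behaviour in question.
    val df = spark.range(0, 1000000L).selectExpr("id", "concat('row-', id) as value")

    // Write to a specific HDFS location (the path is a placeholder).
    df.write.parquet("hdfs:///tmp/speculative-write-example")

    spark.stop()
  }
}
```

Speculation can equally be enabled at submit time with `--conf spark.speculation=true` instead of in code.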

Now, if a task (which writes data to HDFS) takes too long, Spark launches a speculative copy of that task on another executor, and both task attempts run in parallel.

How does Spark handle this? Obviously, both attempts shouldn't be writing to the same file location at the same time (which is what seems to be happening here).

Any help would be appreciated.

Thanks

There is 1 answer below.

As I understand it, this is what happens in my tasks:

  1. If one of the speculative task attempts finishes, the other is killed.
    When Spark kills that task, it deletes the temporary file written by that task,
    so no data ends up duplicated (see the sketch below).
  2. If you choose overwrite mode, some speculative tasks may fail with this exception:

```
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
Failed to CREATE_FILE /<hdfs_path>/.spark-staging-<...>///part-00191-.c000.snappy.parquet
for DFSClient_NONMAPREDUCE_936684547_1 on 10.3.110.14
because this file lease is currently owned by DFSClient_NONMAPREDUCE_-1803714432_1 on 10.0.14.64
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2629)
```
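
As a hedged illustration of point 1 (a sketch assuming a plain Parquet write, not the asker's exact job; the path and app name are placeholders), one way to convince yourself that the killed attempt's temporary file was cleaned up is to list the output directory after the job commits; only committed part-* files and the _SUCCESS marker are expected to remain:

```scala
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.SparkSession

object InspectCommittedOutput {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("inspect-committed-output") // placeholder name
      .config("spark.speculation", "true")
      .getOrCreate()

    // Placeholder HDFS output directory.
    val outputDir = "hdfs:///tmp/speculative-write-example"

    // Overwrite-mode write: the scenario in which the lease exception from point 2 can show up.
    spark.range(0, 1000000L).write.mode("overwrite").parquet(outputDir)

    // After the job commits, temporary files from killed speculative attempts should be gone;
    // only committed part-* files (and typically a _SUCCESS marker) should be listed here.
    val path = new Path(outputDir)
    val fs = path.getFileSystem(spark.sparkContext.hadoopConfiguration)
    fs.listStatus(path).foreach(status => println(status.getPath.getName))

    spark.stop()
  }
}
```

This only inspects the final directory; while the write is in flight, the per-attempt files typically live under a staging or _temporary directory such as the .spark-staging-* path seen in the exception above.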

I will continue studying this situation, so the answer may become more helpful some day.