In DolphinDB, how to prevent job failures caused by concurrent job writes to the same table?


When multiple jobs are submitted simultaneously, they may compete to write to the same chunk of a table, leading to job failures. The following is my script:

loadTable("dfs://...", "...").append!(memTable)

Is there a way to introduce a lock wait condition to prevent conflicts among jobs?


Accepted Answer

You can set the atomicity level to 'CHUNK' with the setAtomicLevel command. For details, see https://docs.dolphindb.cn/en/help200/FunctionsandCommands/CommandsReferences/s/setAtomicLevel.html?highlight=setatomiclevel

With atomic='CHUNK', when a transaction writes to multiple chunks and a write-write conflict occurs because one chunk is locked by another transaction, the transaction does not abort. Instead, it finishes writing to the unlocked chunks and keeps retrying the conflicting chunk; the write to that chunk fails only if it is still locked after a few minutes. In other words, concurrent writes to the same chunk are allowed, but atomicity at the transaction level is no longer guaranteed: a write may succeed in some chunks and fail in others. Note also that write speed may be reduced by the repeated attempts to write to locked chunks.
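As a minimal sketch, the atomic level can also be set to 'CHUNK' when the database is created, via the atomic parameter of the database function. The database URL, schema, and column names below are placeholders, not from the question:

```dolphindb
// Sketch (placeholder names): create a DFS database with chunk-level
// atomicity so that concurrent jobs appending to the same table do not
// abort each other when they conflict on a chunk.
db = database(directory="dfs://demo", partitionType=VALUE,
              partitionScheme=2024.01.01..2024.01.31, atomic="CHUNK")
schema = table(1:0, `tradeDate`price, [DATE, DOUBLE])
pt = db.createPartitionedTable(schema, "pt", "tradeDate")

// Each concurrent job can then append as in the question:
// loadTable("dfs://demo", "pt").append!(memTable)
```

For an existing database, use setAtomicLevel (see the link above) instead of recreating it.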