I have an ETL process that deletes a couple hundred thousand rows from a table with 18 billion rows, using a unique hashed surrogate key such as 1801b08dd8731d35bb561943e708f7e3:
delete from CUSTOMER_CONFORM_PROD.c360.engagement
where engagement_surrogate_key in (
    select engagement_surrogate_key
    from CUSTOMER_CONFORM_PROD.c360.engagement__dbt_tmp
);
This takes 4 to 6 minutes each run on a Small warehouse. I added a clustering key on engagement_surrogate_key, but since the column is unique and high-cardinality it didn't help. I also enabled the search optimization service, but that didn't help either and the query still scans all partitions. How can I speed up the deletion?
The deletion can be sped up by limiting the scan on the destination table with a date-range predicate, for example filtering for only the past month of data:
loaded_date >= dateadd(MM, -1, current_date). A predicate like this lets Snowflake prune micro-partitions by date instead of scanning the whole table. If you are using dbt, this functionality is available through the `incremental_predicates` config, so you can add the predicate to your incremental model config.
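As a sketch, assuming a Snowflake incremental model for the `engagement` table using the `delete+insert` strategy (the model name, column names, and date window are taken from the question; `DBT_INTERNAL_DEST` is the alias dbt gives the destination table), the config could look like:

```sql
{{
  config(
    materialized = 'incremental',
    incremental_strategy = 'delete+insert',
    unique_key = 'engagement_surrogate_key',
    -- extra predicate applied to the destination table during the delete,
    -- limiting the scan to roughly the past month of partitions
    incremental_predicates = [
      "DBT_INTERNAL_DEST.loaded_date >= dateadd(MM, -1, current_date)"
    ]
  )
}}

select ...  -- your model's select logic here
```

Note that this only works correctly if rows to be deleted are guaranteed to fall inside the date window; otherwise older matching rows will be silently left in place.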
When you run your model, dbt injects the predicate into the generated delete statement.
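A hedged sketch of the generated statement (the exact aliasing and parenthesization depend on your dbt-snowflake adapter version):

```sql
delete from CUSTOMER_CONFORM_PROD.c360.engagement as DBT_INTERNAL_DEST
where (engagement_surrogate_key) in (
    select (engagement_surrogate_key)
    from CUSTOMER_CONFORM_PROD.c360.engagement__dbt_tmp
)
-- the incremental_predicates entry is appended here, so Snowflake can
-- prune partitions by loaded_date instead of scanning all 18B rows
and DBT_INTERNAL_DEST.loaded_date >= dateadd(MM, -1, current_date);
```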