Why Spark-BigQuery creates extra tables in the dataset


I am running a Spark (Scala) serverless Dataproc job that reads data from and writes data to BigQuery.

Here is the code that writes the data:

df.write.format("bigquery").mode(SaveMode.Overwrite).option("table", "table_name").save()

Everything works fine, but these extra tables get created in my dataset in addition to the final table. Do you know why this happens, and what I can do to avoid them?

[Screenshot: extra temporary tables alongside the final table in the dataset]


Best answer:

Those tables are created by the connector as a result of view materialization, or of loading the result of a query. They have an expiry time of 24 hours, which is configurable via the materializationExpirationTimeInMinutes option.
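As a sketch of how that option is set (the dataset, view, and app names here are placeholders, not from the original post), a read that materializes a view could shorten the expiry and route the temporary tables into a dedicated dataset:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("bq-read-example").getOrCreate()

// Hypothetical view name; views must be materialized before Spark can read them.
val df = spark.read
  .format("bigquery")
  .option("table", "my_dataset.my_view")
  .option("viewsEnabled", "true")
  // Keep the connector's temporary materialization tables out of the main dataset.
  .option("materializationDataset", "temp_dataset")
  // Expire the temporary tables after 1 hour instead of the default 24 hours.
  .option("materializationExpirationTimeInMinutes", "60")
  .load()
```

Putting the temporary tables in a separate dataset (and lowering their expiry) keeps the main dataset clean; the connector deletes them automatically once they expire.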