I understand we can add the current timestamp to a dataframe like this:
import org.apache.spark.sql.functions.current_timestamp
df.withColumn("time_stamp", current_timestamp())
However, if we'd like to partition the output by the current date (derived from that timestamp) at the point of saving it as a Parquet file, without adding the date as a column to the dataframe, would that be possible? What I am trying to achieve is something like this:
df.write.partitionBy(date("time_stamp")).parquet("/path/to/file")
You can't do that. partitionBy must be given the names of existing columns of the dataset, not an expression. In addition, when reading the data back, Spark performs Partition Discovery based on the directory structure, so the partition column has to be part of the written dataset.
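The usual workaround is to materialize the date as a column first and partition on that. A minimal sketch (column names "time_stamp" and "date" are illustrative):

```scala
import org.apache.spark.sql.functions.{current_timestamp, to_date, col}

// Derive a date column from the timestamp, then partition on it when writing.
val withDate = df
  .withColumn("time_stamp", current_timestamp())
  .withColumn("date", to_date(col("time_stamp")))

withDate.write.partitionBy("date").parquet("/path/to/file")
```

Note that the extra column costs little: partition column values are encoded in the directory names (e.g. date=2024-01-01/part-...) rather than stored inside the Parquet files themselves, and Partition Discovery restores the column when the table is read back.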