How do I ensure consistent file sizes in datasets built in Foundry Python Transforms?

My Foundry transform produces a different amount of data on each run, but I want each output file to contain a similar number of rows. I could call DataFrame.count() and then coalesce/repartition, but that requires computing the full dataset and then either caching it or recomputing it. Is there a way to have Spark take care of this?

2 Answers

Accepted answer:

You can use the spark.sql.files.maxRecordsPerFile configuration option, setting it per output of your @transform via the options argument of write_dataframe:

output.write_dataframe(
    output_df,
    # Spark splits the output so that no single file holds more than 1,000,000 records.
    options={"maxRecordsPerFile": "1000000"},
)
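
For completeness, here is a minimal sketch of how this option might sit inside a full transform; the dataset paths and the input name are placeholders, not taken from the original answer:

from transforms.api import transform, Input, Output

@transform(
    output=Output("/examples/output_dataset"),  # placeholder path
    source=Input("/examples/input_dataset"),    # placeholder path
)
def compute(output, source):
    output_df = source.dataframe()
    # Cap each written file at roughly one million records.
    output.write_dataframe(
        output_df,
        options={"maxRecordsPerFile": "1000000"},
    )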
Second answer:

proggeo's answer is useful if the only thing you care about is the number of records per file. However, it is sometimes also useful to bucket your data so that Foundry can optimize downstream operations such as Contour analyses or other transforms.

In those cases you can use something like:

bucket_column = 'equipment_number'
num_files = 8

# Repartition by the bucket column so the data lands in num_files partitions,
# one per output file.
output_df = output_df.repartition(num_files, bucket_column)

# Write with bucketing metadata so Foundry and downstream jobs can take advantage of it.
output.write_dataframe(
    output_df,
    bucket_cols=[bucket_column],
    bucket_count=num_files,
)

If your bucket column is well distributed, this keeps a similar number of rows in each dataset file.
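
If you want to sanity-check how evenly the rows are spread before writing, one option (not from the original answer, just a quick PySpark check that triggers an extra Spark job, so only worth running while developing) is to count rows per Spark partition after the repartition:

from pyspark.sql import functions as F

# After repartition(num_files, bucket_column), each Spark partition
# corresponds to one output file, so this shows the rows per file.
(
    output_df
    .withColumn("partition_id", F.spark_partition_id())
    .groupBy("partition_id")
    .count()
    .show()
)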