repartition in memory vs file

repartition() creates partitions in memory and is used as a read operation. partitionBy() creates partitions on disk and is used as a write operation.

  1. How can we confirm that there are multiple files in memory while using repartition()?
  2. If repartition() only creates partitions in memory, why does articles.repartition(1).write.saveAsTable("articles_table", format='orc', mode='overwrite') create only one file? And how is this different from partitionBy()?

There are 2 answers below

BEST ANSWER

partitionBy does indeed have an effect on how your files will look on disk, and it is indeed used when writing a file (it is a method of the DataFrameWriter class).

That, however, does not mean that repartition has no effect at all on what will be written to disk.

Let's take the following example:

df = spark.createDataFrame([
  (1,2,3),
  (2,2,3),
  (3,20,300),
  (1,24,299),
  (5,26,312),
  (5,28,322),
  (5,9,2)
], ["colA", "colB", "colC"])

# Write the same data twice: once partitioned on disk by the values of colA,
# and once split into 4 in-memory partitions before writing.
df.write.partitionBy("colA").parquet("using_partitionBy.parquet")
df.repartition(4).write.parquet("using_repartition.parquet")

Here we create a simple DataFrame and write it out using two methods:

1) By using partitionBy

The output file structure on disk looks like this:

tree using_partitionBy.parquet/
using_partitionBy.parquet/
├── colA=1
│   ├── part-00000-17e47d08-2bbe-4ad6-809e-c466b396de4e.c000.snappy.parquet
│   └── part-00002-17e47d08-2bbe-4ad6-809e-c466b396de4e.c000.snappy.parquet
├── colA=2
│   └── part-00001-17e47d08-2bbe-4ad6-809e-c466b396de4e.c000.snappy.parquet
├── colA=3
│   └── part-00001-17e47d08-2bbe-4ad6-809e-c466b396de4e.c000.snappy.parquet
├── colA=5
│   ├── part-00002-17e47d08-2bbe-4ad6-809e-c466b396de4e.c000.snappy.parquet
│   └── part-00003-17e47d08-2bbe-4ad6-809e-c466b396de4e.c000.snappy.parquet
└── _SUCCESS

We see that this created 6 "subfiles" in 4 "subdirectories". Information about the data values (like colA=1) is actually stored on disk, in the directory names. This enables big performance gains in subsequent queries that read this file. Imagine you need to read all the rows where colA=1: that becomes a trivial task, since the other subdirectories can simply be ignored.
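
For example, a subsequent read that filters on the partition column only needs to touch the matching subdirectory (a minimal sketch, reusing the path and column from the example above):

# Only the colA=1 subdirectory has to be scanned; the others are pruned away.
df_read = spark.read.parquet("using_partitionBy.parquet")
df_read.filter(df_read.colA == 1).show()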

2) By using repartition(4)

The output file structure on disk looks like this:

tree using_repartition.parquet/
using_repartition.parquet/
├── part-00000-6dafa06d-cea6-4af0-ba0d-9c761de0fd50-c000.snappy.parquet
├── part-00001-6dafa06d-cea6-4af0-ba0d-9c761de0fd50-c000.snappy.parquet
├── part-00002-6dafa06d-cea6-4af0-ba0d-9c761de0fd50-c000.snappy.parquet
├── part-00003-6dafa06d-cea6-4af0-ba0d-9c761de0fd50-c000.snappy.parquet
└── _SUCCESS

We see that 4 "subfiles" were created and NO "subdirectories" were made. These "subfiles" actually represent your partitions inside Spark. Since you typically work with very big data in Spark, all your data has to be partitioned in some way.

Each partition will be processed by 1 task, which can be taken up by 1 core of your cluster. Once this task is taken up by a core and all the necessary processing is done, the core writes the output to disk as one of these "subfiles". When it has finished writing that "subfile", the core is ready to pick up another partition.
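
If you want to see those in-memory partitions before anything is written, you can inspect the RDD underneath the DataFrame (a minimal sketch reusing the df from above; the exact row distribution may vary):

# glom() collects each partition's rows into a list, so the lengths show how
# the 7 rows are spread over the 4 partitions, e.g. [2, 2, 2, 1].
print(df.repartition(4).rdd.getNumPartitions())
print(df.repartition(4).rdd.glom().map(len).collect())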

When to use partitionBy and repartition

This is a bit opinionated and surely not exhaustive, but might give you some insight into what to use.

partitionBy and repartition can be used for different goals:

  • Use partitionBy if:
    • You want to write data to disk that you want to read back later with big performance benefits. This is mostly useful when you have a column you will do lots of filtering on and whose cardinality is not too high
  • Use repartition if:
    • You want to tune the size of your partitions to your cluster size, to improve performance on your jobs
    • You want to write out files with a partition size that makes sense, but using partitionBy on any column would give a way too high cardinality (imagine time series data on sensors); see the sketch below this list
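
A rough sketch of both options for that time-series case (the events DataFrame and the event_date column are made up for illustration):

# Low-cardinality column: partition the output on disk by day,
# which pays off when later queries filter on event_date.
events.write.partitionBy("event_date").parquet("events_by_date.parquet")

# Only high-cardinality columns (e.g. raw sensor timestamps)? Then just
# control the number (and thus size) of output files with repartition.
events.repartition(16).write.parquet("events_16_files.parquet")
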
ANOTHER ANSWER

  1. You mean how to confirm that when you do, for example, .repartition(100), you will get 100 files on output? I was checking it in the Spark UI: number of tasks = number of partitions = number of written files (see also the sketch at the end of this answer)

  2. With .repartition(1) you are moving the whole dataset to one partition, which will be processed as 1 task by one core and written to one file. There is no way to process a single task in parallel, so Spark has no choice but to store everything in one file
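
A quick way to double-check both points in code, without opening the Spark UI (a minimal sketch; articles stands for the DataFrame from the question):

# Each in-memory partition becomes one task and, on write, one part file.
print(articles.repartition(100).rdd.getNumPartitions())   # 100 -> 100 output files
print(articles.repartition(1).rdd.getNumPartitions())     # 1  -> a single output file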