I need to convert a CSV file to Parquet format. This CSV file is very large (more than 65,000 rows and 1,000 columns), so I need to split the output into several Parquet subfiles of 5,000 rows and 200 columns each. I have already tried `partition_on` and `row_group_offsets`, but they don't do what I want.
My code:
import pandas as pd
import fastparquet as fp
# raw string needed: in a plain string, '\U' in the Windows path is an
# invalid unicode escape and raises a SyntaxError in Python 3
df = pd.read_csv(r'D:\Users\mim\Desktop\SI\LOG\LOG.csv')
fp.write(r'D:\Users\mim\Desktop\SI\newdata.parq', df)
[CORRECT ANSWER]: