I have a csv.gz file that (from what I've been told) was 70GB in size before compression. My machine has 50GB of RAM, so I will never be able to open it as a whole in R.
I can load, for example, the first 10m rows as follows:
library(vroom)
df <- vroom("HUGE.csv.gz", delim = ",", n_max = 10^7)
For what I have to do, it is fine to load 10m rows at a time, do my operations, and continue with the next 10m rows. I could do this in a loop.
I was therefore trying the skip argument.
df <- vroom("HUGE.csv.gz", delim = ",", n_max = 10^7, skip = 10^7)
This results in an error:
Error: The size of the connection buffer (131072) was not large enough
to fit a complete line:
* Increase it by setting `Sys.setenv("VROOM_CONNECTION_SIZE")`
I increased this with Sys.setenv("VROOM_CONNECTION_SIZE" = 131072 * 1000); however, the error persists.
Is there a solution to this?
Edit: I found out that random access to a gzip-compressed csv (csv.gz) is not possible; we have to read from the top. Probably the easiest solution is to decompress and save the file, after which skip should work.
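For example, a rough sketch of that loop after decompressing (the chunk size, the helper used for decompression, and the termination check are illustrative):

library(vroom)

# Decompress once; R.utils::gunzip is one convenient option.
R.utils::gunzip("HUGE.csv.gz", destname = "HUGE.csv", remove = FALSE)

chunk_rows <- 10^7
cols <- names(vroom("HUGE.csv", delim = ",", n_max = 1))  # grab the column names

i <- 0
repeat {
  df <- vroom("HUGE.csv", delim = ",",
              skip = i * chunk_rows + 1,  # +1 skips the header line
              n_max = chunk_rows,
              col_names = cols)
  if (nrow(df) == 0) break  # may need an extra guard once skip runs past the end
  # ... do my operations on df ...
  i <- i + 1
}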
I haven't been able to figure out a vroom solution for very large, more-than-RAM (gzipped) csv files. However, the following approach has worked well for me and I'd be grateful to know about approaches with better querying speed while also saving disk space.

1. Use the split sub-command in xsv from https://github.com/BurntSushi/xsv to split the large csv file into comfortably-within-RAM chunks of, say, 10^5 lines, and save them in a folder.
2. Read the chunks one by one with data.table::fread (to avoid a low-memory error) in a for loop and save them all into a folder as compressed parquet files using the arrow package, which saves space and prepares the large table for fast querying. For even faster operations, it is advisable to re-save the parquet files partitioned by the fields on which you need to filter frequently.
3. Use arrow::open_dataset and query that multi-file parquet folder with dplyr commands. It takes minimal disk space and gives the fastest results in my experience. (A sketch of steps 2 and 3 follows below.)

I use data.table::fread with an explicit definition of the column classes of each field for the fastest and most reliable parsing of csv files. readr::read_csv has also been accurate, but slower. However, the auto-assignment of column classes by read_csv, as well as the ways in which you can custom-define column classes with it, is actually the best (less human time but more machine time), which means it may be faster overall depending on the scenario. Other csv parsers have thrown errors for the kind of csv files that I work with and wasted my time.

You may now delete the folder containing the chunked csv files to save space, unless you want to experiment with loop-reading them using other csv parsers.
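A minimal sketch of steps 2 and 3, assuming the csv chunks from xsv are already on disk; the folder names and column classes are illustrative:

library(data.table)
library(arrow)
library(dplyr)

csv_dir     <- "csv_chunks"      # chunks produced by `xsv split`
parquet_dir <- "parquet_chunks"  # destination for the compressed parquet files
dir.create(parquet_dir, showWarnings = FALSE)

for (f in list.files(csv_dir, pattern = "\\.csv$", full.names = TRUE)) {
  # Explicit column classes (illustrative names here) keep fread fast and reliable.
  dt <- data.table::fread(f, colClasses = c(id = "character", amount = "numeric"))
  arrow::write_parquet(
    dt,
    file.path(parquet_dir, sub("\\.csv$", ".parquet", basename(f))),
    compression = "gzip"
  )
  rm(dt); gc()  # release the chunk before reading the next one
}

# Query the multi-file parquet folder lazily with dplyr.
ds <- arrow::open_dataset(parquet_dir)
res <- ds %>% filter(amount > 0) %>% select(id, amount) %>% collect()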
Other previously successful approaches: loop-read all the csv chunks as mentioned above and save them into:

- a folder managed by the disk.frame package. That folder may then be queried using dplyr or data.table commands as explained in its documentation. It has a facility to save compressed fst files, which saves space, though not as much as parquet files.
- a DuckDB database table, which allows querying with SQL or dplyr commands (a sketch follows below). The database-table approach won't save you disk space, but DuckDB also allows querying partitioned or un-partitioned parquet files (which do save disk space) with SQL commands.
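A minimal sketch of the DuckDB variant, appending each csv chunk into a single database table; the file paths and table name are illustrative:

library(DBI)
library(duckdb)
library(data.table)

con <- dbConnect(duckdb::duckdb(), dbdir = "huge.duckdb")

files <- list.files("csv_chunks", pattern = "\\.csv$", full.names = TRUE)

# Create the table from the first chunk, then append the remaining chunks.
DBI::dbWriteTable(con, "huge_table", data.table::fread(files[1]))
for (f in files[-1]) {
  DBI::dbWriteTable(con, "huge_table", data.table::fread(f), append = TRUE)
}

dbDisconnect(con, shutdown = TRUE)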
EDIT: Improved Method Below

I experimented a little and found a much better way to do the above operations. Using the code below, the large (compressed) csv file will be chunked automatically within the R environment (no need for any external tool like xsv) and all chunks will be written in parquet format to a folder, ready for querying.
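A minimal sketch of this approach; the chunk_size, callback, and folder name are illustrative:

library(readr)
library(arrow)

out_dir <- "parquet_chunks"
dir.create(out_dir, showWarnings = FALSE)

# Callback run on every chunk: write it straight out as a gzip-compressed parquet file.
write_chunk <- function(chunk, pos) {
  arrow::write_parquet(
    chunk,
    file.path(out_dir, sprintf("chunk_%010d.parquet", pos)),
    compression = "gzip"
  )
}

# chunk_size is illustrative; choose it so one chunk fits comfortably in RAM.
readr::read_csv_chunked(
  "HUGE.csv.gz",
  callback = SideEffectChunkCallback$new(write_chunk),
  chunk_size = 10^6
)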
If, instead of parquet, you want to use:

- disk.frame, the callback function may be used to create chunked, compressed fst files for dplyr or data.table style querying.
- DuckDB, the callback function may be used to append the chunks into a database table for SQL or dplyr style querying.

By judiciously choosing the chunk_size parameter of the readr::read_csv_chunked command, the computer should never run out of RAM while running queries.

PS: I use gzip compression for the parquet files since they can then be previewed with ParquetViewer from https://github.com/mukunku/ParquetViewer. Otherwise, zstd (not currently supported by ParquetViewer) decompresses faster and hence improves reading speed.

EDIT 2:
I got a csv file which was really big for my machine: 20 GB gzipped, expanding to about 83 GB, whereas my home laptop has only 16 GB of RAM. It turns out that the read_csv_chunked method I mentioned in the earlier EDIT fails to complete: it always stops working after some time and does not create all the parquet chunks. My previous method of splitting the csv file with xsv and then looping over the chunks to create parquet files did work. To be fair, I must mention that it took multiple attempts this way too, and I had programmed a check so that successive runs would create only the missing parquet chunks.

EDIT 3:
vroom does have difficulty dealing with huge files, since it needs to store the index in memory as well as any data you read from the file. See the development thread: https://github.com/r-lib/vroom/issues/203
EDIT 4:
Additional tip: The chunked parquet files created by the method above can be very conveniently queried using SQL with the DuckDB methods described at https://duckdb.org/docs/data/parquet and https://duckdb.org/2021/06/25/querying-parquet.html
The DuckDB method is significant because the R arrow method currently suffers from a very serious limitation, which is mentioned in the official documentation page https://arrow.apache.org/docs/r/articles/dataset.html.

Specifically, and I quote: "In the current release, arrow supports the dplyr verbs mutate(), transmute(), select(), rename(), relocate(), filter(), and arrange(). Aggregation is not yet supported, so before you call summarise() or other verbs with aggregate functions, use collect() to pull the selected subset of the data into an in-memory R data frame."

The problem is that if you use collect() on a very big dataset, the RAM usage spikes and the system crashes. Whereas using SQL statements to do the same aggregation job on the same big dataset with DuckDB does not cause RAM usage spikes and does not crash the system. So until arrow fixes itself for aggregation queries on big data, SQL from DuckDB provides a nice solution to querying big datasets in chunked parquet format.
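For example, a minimal sketch of such an aggregation run directly over the chunked parquet folder with DuckDB (the folder and column names are illustrative):

library(DBI)
library(duckdb)

con <- dbConnect(duckdb::duckdb())

# DuckDB scans the parquet files via a glob pattern, so the aggregation
# runs without pulling the whole dataset into R memory.
res <- dbGetQuery(con, "
  SELECT id, COUNT(*) AS n, SUM(amount) AS total
  FROM 'parquet_chunks/*.parquet'
  GROUP BY id
")

dbDisconnect(con, shutdown = TRUE)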