I'm reading some (apparently) large GRIB files using xarray. I say 'apparently' because they're ~100 MB each, which doesn't seem too big to me. However, running
import xarray as xr
ds = xr.open_dataset("gribfile.grib", engine="cfgrib")
takes a good 5-10 minutes. Worse, opening one of these takes up almost 4 GB of RAM, something that surprises me given the lazy loading xarray is supposed to do, not least because that's 40-odd times the size of the original file!
This reading time and RAM usage seem excessive, and they don't scale to the 24 files I have to read.
I've tried using dask and xr.open_mfdataset, but this doesn't seem to help when the individual files are so large. Any suggestions?
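For reference, the open_mfdataset attempt was along these lines (the glob pattern and the per-time chunking are placeholders rather than my exact call):

import xarray as xr

# Lazily combine all the GRIB files along the time dimension with dask;
# "path/to/*.grib" and the chunk size are placeholders, not my exact call
ds = xr.open_mfdataset(
    "path/to/*.grib",
    engine="cfgrib",
    combine="nested",
    concat_dim="time",
    chunks={"time": 1},
)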
Addendum: dataset looks like this once opened:
<xarray.Dataset>
Dimensions: (latitude: 10, longitude: 10, number: 50, step: 53, time: 45)
Coordinates:
* number (number) int64 1 2 3 4 5 6 7 8 9 ... 42 43 44 45 46 47 48 49 50
* time (time) datetime64[ns] 2011-01-02 2011-01-04 ... 2011-03-31
* step (step) timedelta64[ns] 0 days 00:00:00 ... 7 days 00:00:00
surface int64 0
* latitude (latitude) float64 56.0 55.0 54.0 53.0 ... 50.0 49.0 48.0 47.0
* longitude (longitude) float64 6.0 7.0 8.0 9.0 10.0 ... 12.0 13.0 14.0 15.0
valid_time (time, step) datetime64[ns] 2011-01-02 ... 2011-04-07
Data variables:
u100 (number, time, step, latitude, longitude) float32 6.389208 ... 1.9880934
v100 (number, time, step, latitude, longitude) float32 -13.548858 ... -3.5112982
Attributes:
GRIB_edition: 1
GRIB_centre: ecmf
GRIB_centreDescription: European Centre for Medium-Range Weather Forecasts
GRIB_subCentre: 0
history: GRIB to CDM+CF via cfgrib-0.9.4.2/ecCodes-2.9.2 ...
I've temporarily got around the issue by reading in the GRIB files one by one and writing them to disk as NetCDF; xarray then handles the NetCDF files as expected. Obviously it would be nice not to have to do this, because the conversion itself takes ages - I've only done 4 files so far.
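For completeness, the conversion loop is along these lines (the paths are placeholders):

import glob
import os

import xarray as xr

# Convert each GRIB file to NetCDF one at a time; writing forces the
# data to actually be read, so each file takes several minutes
for path in glob.glob("path/to/*.grib"):
    ds = xr.open_dataset(path, engine="cfgrib")
    ds.to_netcdf(os.path.splitext(path)[0] + ".nc")
    ds.close()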