I have a Cassandra table (with historic data) that is around 5 TB or more. To reduce infrastructure cost, I need to offload old data to S3. I am looking at dsbulk unload,
which is optimized for export, but I am unsure whether it will handle such a large volume. Another option is to write a custom application that queries data older than 3 years, creates CSV/Parquet files, and uploads them to S3. With the existing data model, that would require billions of queries.
CREATE TABLE ingestion.alerts (
uuid uuid PRIMARY KEY,
payload text,
inc_id bigint,
group_id bigint,
timestamp timestamp
)
CREATE TABLE ingestion.alerts_by_day (
group_id bigint,
date text,
timestamp timestamp,
uuid uuid,
PRIMARY KEY ((group_id, date), timestamp)
)
There are fewer than 2,000 group_ids and each group has data for about 2,000 days, so querying ingestion.alerts_by_day around 4 million times is not a big issue. I have to query ingestion.alerts_by_day by group_id and date, which gives me all alert UUIDs for that day, and then query each alert individually from ingestion.alerts. One group is likely to have 100K to 1 million alerts in a day, i.e. up to 1 million reads from ingestion.alerts per partition. Changing the data model is not an option: the cluster does not have enough free space to populate another table, and new nodes are expensive.
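To make the custom-application route concrete, here is a minimal sketch of the two-step read pattern described above, assuming the DataStax Python cassandra-driver and that the `date` column in `alerts_by_day` holds `yyyy-mm-dd` text (the date format, concurrency level, and any hostnames are assumptions, not from the original post):

```python
from datetime import date, timedelta

def day_partitions(start, end):
    """Return 'yyyy-mm-dd' strings for every day in [start, end] —
    the assumed text format of alerts_by_day.date."""
    out, d = [], start
    while d <= end:
        out.append(d.isoformat())
        d += timedelta(days=1)
    return out

def export_group_day(session, group_id, day):
    """Fetch all alert UUIDs for one (group_id, day) partition, then read the
    full alert rows concurrently via the cassandra-driver."""
    # Imported lazily so the pure helper above works without the driver installed.
    from cassandra.concurrent import execute_concurrent_with_args

    by_day = session.prepare(
        "SELECT uuid FROM ingestion.alerts_by_day WHERE group_id = ? AND date = ?")
    by_uuid = session.prepare(
        "SELECT uuid, payload, inc_id, group_id, timestamp "
        "FROM ingestion.alerts WHERE uuid = ?")

    uuids = [(row.uuid,) for row in session.execute(by_day, (group_id, day))]
    rows = []
    # Up to ~1M point reads per partition; tune concurrency to what the
    # live cluster can tolerate alongside production traffic.
    for success, result in execute_concurrent_with_args(
            session, by_uuid, uuids, concurrency=100):
        if success:
            rows.extend(result)
    return rows
```

The rows returned for each (group_id, day) pair can then be serialized to one CSV/Parquet file and uploaded.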
Another interesting option is Spark with the spark-cassandra-connector, but the question remains the same: will it be able to scan the entire table to create the export? This is likely to put high pressure on the Cassandra cluster. Of course, once we migrate the older data for the first time, the data volume will drop drastically, perhaps to 25% of the original size.
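For the Spark route, the full-scan export can be sketched roughly as follows (pyspark with the spark-cassandra-connector on the classpath; the bucket path and the 3-year cutoff of 1095 days are assumptions):

```python
from datetime import datetime, timedelta

def three_year_cutoff(now=None):
    """Rows with timestamp older than this get exported; ~3 years = 1095 days (assumption)."""
    now = now or datetime.utcnow()
    return now - timedelta(days=3 * 365)

def export_old_alerts(spark, s3_path="s3a://alert-ingestion-archive"):  # hypothetical bucket
    # Lazy import: only needed when actually running under Spark.
    from pyspark.sql.functions import col, date_format

    df = (spark.read.format("org.apache.spark.sql.cassandra")
          .options(keyspace="ingestion", table="alerts")
          .load())
    # timestamp is not part of the primary key of ingestion.alerts, so this
    # filter cannot be pushed down: the connector scans the whole table —
    # exactly the cluster pressure the question worries about.
    old = df.filter(col("timestamp") < three_year_cutoff())
    (old.withColumn("date", date_format(col("timestamp"), "yyyy-MM-dd"))
        .write.mode("append")
        .partitionBy("group_id", "date")
        .parquet(s3_path))
```

Partitioning the Parquet output by `group_id` and `date` gives roughly the per-group, per-day file layout described below without any per-partition queries.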
When I upload data to S3, I will split it into multiple files, one file per day per group: alert-ingestion-group_id-ddmmyy. I will probably create a bucket for each group so that each bucket contains data for only one group, which makes it easy to search through later.
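A small helper for the naming scheme above (the exact `ddmmyy` interpretation, day-month-2-digit-year, is an assumption):

```python
from datetime import date

def object_key(group_id: int, day: date) -> str:
    """Build the per-group, per-day object name: alert-ingestion-<group_id>-<ddmmyy>."""
    return f"alert-ingestion-{group_id}-{day.strftime('%d%m%y')}"
```

With a bucket per group, the same string can serve directly as the object key inside that group's bucket.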
What tools/frameworks/libraries should I look into for such a large export? I am using DataStax Enterprise 5.1 for the Cassandra cluster, which ships Cassandra 3.11. If the price makes sense, I am open to a paid service.
As a DataStax Enterprise customer, do not hesitate to open a support ticket and ask for guidance.
DSBulk is definitely an option, as you can control the query, the output format, and the throughput through its settings:
https://github.com/datastax/dsbulk/blob/1.x/manual/settings.md
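As a sketch, a throttled full-table unload followed by an S3 upload might look like this (host, paths, and the rate limit are placeholders; dsbulk 1.x writes to local files or stdout, not directly to S3):

```shell
# Unload ingestion.alerts to local CSV, throttled to protect the live cluster.
dsbulk unload \
  -h 10.0.0.1 \
  -k ingestion \
  -t alerts \
  -url /data/export/alerts \
  --executor.maxPerSecond 5000

# Push the resulting files to S3 with the AWS CLI.
aws s3 cp --recursive /data/export/alerts s3://alert-ingestion-archive/
```

A `-query` option (documented on the settings page) can replace `-k`/`-t` to restrict the unload to specific partitions or columns.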