I'm looking for a good solution for storing data (one-time insert, no updates) and querying it over large ranges. I'm assuming an RDBMS is not a good fit, since I need a large, scalable database.
I have been using Cassandra for this purpose and achieved about 70µs per row using an IN clause on several partition keys. I am using wide rows, and each row is a couple of MB in size.
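For reference, here is a minimal sketch of the kind of schema and query I mean (the table and column names are placeholders, not my actual schema):

```sql
-- Placeholder schema illustrating the wide-row layout described above:
-- one partition key, a clustering column, and a blob payload that makes
-- each partition a couple of MB in size.
CREATE TABLE IF NOT EXISTS mykeyspace.events (
    bucket_id  bigint,      -- partition key, queried with IN (...)
    event_time timestamp,   -- clustering column, scanned over a range
    payload    blob,
    PRIMARY KEY (bucket_id, event_time)
);

-- The read pattern: fan out to several partitions in one statement.
-- A multi-partition IN makes the coordinator gather results from
-- several replicas before responding.
SELECT payload
FROM mykeyspace.events
WHERE bucket_id IN (1, 2, 3, 4)
  AND event_time >= '2014-01-01' AND event_time < '2014-02-01';
```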
Is this normal, or am I doing something wrong? I couldn't find any actual benchmark numbers on the web.
My cluster consists of three c3.8xlarge EC2 instances (32 vCPUs and 60 GiB of RAM each).
I'm wondering whether Cassandra is the best solution for me, and if so, whether I can speed up the read path.
EDIT: My client machine is also a c3.8xlarge EC2 instance, so the connection between the client and Cassandra is at least 10 Gb/s.
EDIT-2: Fully compacting the cluster did not help reduce read times.