MySQL capacity to handle billions of rows


I have this machine: Core 2 6600 CPU, 4 GB RAM, 64-bit Windows Vista.

I am designing a system with a table of 10 billion rows; this table has a foreign key to another table, which should contain about 10 × 10 billion rows. Normally I just insert into the two tables; I don't usually do joins.
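
For concreteness, the layout is roughly like the sketch below (table and column names are made up, not my real schema); note that a 10-billion-row key already needs BIGINT, since unsigned INT tops out around 4.3 billion:

    -- Hypothetical sketch only: names and column types are placeholders
    CREATE TABLE big_table (
        id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        value INT NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- The 10-billion-row table, holding a foreign key into big_table
    CREATE TABLE main_table (
        id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        big_id  BIGINT UNSIGNED NOT NULL,
        payload INT NOT NULL,
        PRIMARY KEY (id),
        KEY idx_big_id (big_id),
        CONSTRAINT fk_main_big FOREIGN KEY (big_id) REFERENCES big_table (id)
    ) ENGINE=InnoDB;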

I don't need user-facing real-time performance. I wonder if MySQL can handle this size with stability and reasonable performance.

Thanks a lot


There are 2 answers below.

Answer 1

It depends on which storage engine you are using. You can find additional information in this post:

Maximum number of records in a MySQL database table
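
For example, you can check which engine an existing table uses, and for MyISAM you can give the server a sizing hint for very large tables (the table name and values below are only placeholders):

    -- Check which engine and how much space an existing table uses
    SHOW TABLE STATUS LIKE 'mytable';

    -- For MyISAM tables, MAX_ROWS / AVG_ROW_LENGTH are sizing hints that
    -- let MySQL use larger internal row pointers for very big tables;
    -- the values here are only examples, not recommendations
    ALTER TABLE mytable
        MAX_ROWS = 1000000000,
        AVG_ROW_LENGTH = 50;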

In general, I would suggest using an OS other than Vista if you can; MySQL is best tuned for Linux boxes.

I would also suggest running some benchmarks before inserting all the rows.

Look here for more references:

http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function%5Fbenchmark
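
As a rough sketch, BENCHMARK() compares the cost of expressions, and timing a bulk load of a sample of your real data gives a better idea of insert throughput (the table and file names below are placeholders):

    -- BENCHMARK() evaluates an expression repeatedly; the client reports
    -- the elapsed time, which is handy for comparing expression costs
    SELECT BENCHMARK(1000000, MD5('some test string'));

    -- Insert throughput is better measured by timing a bulk load of a
    -- realistic sample of your data (file and table names are examples)
    LOAD DATA INFILE '/tmp/sample.csv'
        INTO TABLE mytable
        FIELDS TERMINATED BY ',';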

Answer 2

The deciding factor here will be what data types you are using in your fields. 10 billion x 10 columns of text fields and image blobs would be orders of magnitude larger than 10 columns of int(2).
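
As a rough illustration (both tables and their columns are made up), compare a narrow fixed-width row with a wide one, then check the actual on-disk size after loading some sample data:

    -- Narrow row: a handful of integer columns, a few dozen bytes per row
    CREATE TABLE narrow_example (
        id BIGINT UNSIGNED NOT NULL,
        a  INT NOT NULL,
        b  INT NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- Wide row: TEXT and BLOB columns can push this to kilobytes per row,
    -- so billions of rows quickly become many terabytes
    CREATE TABLE wide_example (
        id          BIGINT UNSIGNED NOT NULL,
        description TEXT,
        picture     MEDIUMBLOB,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- After loading sample data, check the on-disk size per table (bytes)
    SELECT table_name, data_length, index_length
    FROM information_schema.TABLES
    WHERE table_schema = DATABASE();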

I also agree that Vista is asking for trouble with billions of rows. It might work in theory, but with a large number of clients it will probably crash and burn under load.