MySQL memory usage for large database

We have a MySQL database (OLD_DATA) in which many partitioned tables have grown beyond 100GB. To improve server performance, we are considering creating a parallel database (NEW_DATA) and collecting all new data there.

The tables use the MyISAM storage engine, and the server has 96GB of RAM.

After the switch, OLD_DATA will no longer be accessed.

Will this approach help improve server performance in terms of RAM and CPU usage? Will the data from OLD_DATA still be loaded into memory?

1 Answer


Please provide the output of SHOW CREATE TABLE. If the tables are, as you hinted, PARTITIONed, then we need to factor that into the analysis.
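For example (the table name here is a placeholder; substitute one of your large partitioned tables):

```sql
-- \G formats the output vertically, which is easier to read for wide DDL.
SHOW CREATE TABLE OLD_DATA.my_big_table\G
```

The output shows the full definition, including the storage engine, indexes, and any PARTITION BY clause.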

Generally, if "old" rows are no longer accessed in any way, they do not hurt performance, and there would be no need to do what you did. Please elaborate on the queries that you feel are "slow", preferably by providing EXPLAIN SELECT output.
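A sketch of what to run (the query below is illustrative only; use one of your actual slow queries):

```sql
-- Prefix the slow query with EXPLAIN to see which indexes (if any) it uses
-- and roughly how many rows MySQL expects to examine.
EXPLAIN SELECT col1, col2
FROM OLD_DATA.my_big_table
WHERE some_column = 'some_value';
```

Pay particular attention to the `type` column (`ALL` means a full table scan) and the `rows` estimate.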

MyISAM caches index blocks in the "key_buffer"; how big are the indexes? What is key_buffer_size set to? Data blocks are cached by the OS.
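To answer those two questions, something like the following should work (assuming the schema is named OLD_DATA):

```sql
-- Approximate index size per table, largest first.
-- For MyISAM, index_length reflects the .MYI index files.
SELECT table_name,
       ROUND(index_length / 1024 / 1024 / 1024, 2) AS index_gb,
       ROUND(data_length  / 1024 / 1024 / 1024, 2) AS data_gb
FROM information_schema.TABLES
WHERE table_schema = 'OLD_DATA'
ORDER BY index_length DESC;

-- Current key buffer setting (in bytes).
SHOW VARIABLES LIKE 'key_buffer_size';
```

If the combined index size of the actively queried tables is much larger than key_buffer_size, index blocks will be evicted and re-read from disk repeatedly.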

It is quite 'normal' for a system to have a dataset many times as big as RAM. Performance depends on the 'working set' of the dataset and on how actively you query the data. You have implied that "old" rows are not used, hence not part of the 'working set'.

On the other hand, if you have "table scans", the entire table becomes part of the working set. It is quite possible that we can advise on how to rewrite those queries to behave better.
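A hypothetical illustration of spotting and fixing a table scan (table and column names are made up):

```sql
-- If EXPLAIN reports type: ALL, the query scans every row,
-- dragging the whole table through memory.
EXPLAIN SELECT * FROM NEW_DATA.readings WHERE device_id = 42;

-- Adding an index on the filtered column typically turns the
-- full scan into a 'ref' lookup touching only matching rows.
ALTER TABLE NEW_DATA.readings ADD INDEX idx_device (device_id);
```

Whether an index helps depends on the query's selectivity, which is why seeing the real queries and EXPLAIN output matters.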