I have a flat file with 3,200,000 rows, and the application's performance decreases as it progresses: for example, the first 100,000 rows take about 30 seconds, but reaching 320,000 rows takes a total of about 4 hours. I am using JdbcTemplate for batch saving, and I also tried asynchronous reads with CompletableFuture. The database is MySQL, with 2 indexes: 1 primary key and 1 unique index.

I expected a lower total time.
"Replacing" a table: If you are really "replacing" the entire table, this is much better.
For example, are you getting a replacement for your table every night? If so there is a much less invasive way to do such. (Not faster, but I suspect that is not the real issue.) The following keeps the
realtable "online" except for a very brief, ACID,RENAME.LOAD DATA INFILE (applies for either adding or replacing)
If the "flat file" is comma- or tab-separated, then see if
LOAD DATAcan be used. It is very fast.Batch loading
autocommit=ON.) (Saving 3.2M rows in the undo buffer is costly.)Other notes
Rethink whether you really need two unique indexes (the PK is one). That doubles some of the work of inserting each row.
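Putting the pieces above together, the load-then-swap approach might look like the following sketch. The table names and the file path are placeholders, not your actual schema; adjust the LOAD DATA options to match the flat file's delimiter.

```sql
-- 1. Build an empty copy to load into; real_table stays online.
CREATE TABLE new_table LIKE real_table;

-- 2. Bulk-load the flat file. The default delimiter is tab;
--    add FIELDS TERMINATED BY ',' for a comma-separated file.
LOAD DATA INFILE '/path/to/flat_file.txt'
    INTO TABLE new_table;

-- 3. Atomically swap the tables; real_table is unavailable only
--    for the duration of this single RENAME.
RENAME TABLE real_table TO old_table,
             new_table  TO real_table;

-- 4. Clean up the old copy.
DROP TABLE old_table;
```

Note that LOAD DATA INFILE is subject to the server's `secure_file_priv` setting; if the file lives on the client machine, `LOAD DATA LOCAL INFILE` is the variant to look at.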