I'm using Spout to read an Excel file with over 500,000 records (7 columns each, so not much data per row).
The problem is that my script keeps timing out. I've tried raising the PHP limits, which helps, but so far I haven't managed a complete insert, only partial ones of around 50,000 rows.
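For reference, this is roughly what I mean by raising the limits (the exact directives and values below are just examples of what I've been trying, not a recommendation):

```php
<?php
// Examples of the limits I raise before starting the import
set_time_limit(0);                // no max execution time for this script
ini_set('memory_limit', '1024M'); // give the import more memory to work with
```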
Partial imports are not an option for me. Is there any way to split the Excel file, but in code? From what I can see, just iterating over the file, even without inserting anything into the database, is already slow enough to time out.
So... any advice?
Thanks!
Reading a file with 3,500,000 cells is not going to be fast, no matter what. It will take at least a minute, even on powerful hardware and even if the Excel file uses inline strings.
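To put that in perspective, even a bare reading loop that does nothing with the data has to touch every one of those cells. A minimal sketch, assuming Spout 3 (the file name is a placeholder):

```php
<?php
require 'vendor/autoload.php';

use Box\Spout\Reader\Common\Creator\ReaderEntityFactory;

// Bare reading loop: iterate over every row without any database work,
// just to measure the raw time Spout needs to parse the file.
$reader = ReaderEntityFactory::createXLSXReader();
$reader->open('big-file.xlsx'); // placeholder path

$rowCount = 0;
foreach ($reader->getSheetIterator() as $sheet) {
    foreach ($sheet->getRowIterator() as $row) {
        $cells = $row->toArray(); // 7 cells per row in your case
        $rowCount++;
    }
}

$reader->close();
echo "Read {$rowCount} rows\n";
```

If this alone already hits your time limit, tweaking the insert logic won't save a single-request approach.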
So here are the options you have:
Splitting the file may work, but it needs to be done ahead of time, not in the same script, otherwise it just adds to the total processing time. See the sketch below.
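If you go down that road, here is a rough sketch of a one-off splitter you could run from the CLI beforehand, again assuming Spout 3 (the chunk size and file names are placeholders):

```php
<?php
require 'vendor/autoload.php';

use Box\Spout\Reader\Common\Creator\ReaderEntityFactory;
use Box\Spout\Writer\Common\Creator\WriterEntityFactory;

// One-off splitter: reads the big XLSX file and writes it back out
// as several smaller files of $chunkSize rows each.
$chunkSize   = 50000;           // placeholder chunk size
$sourceFile  = 'big-file.xlsx'; // placeholder path
$chunkIndex  = 0;
$rowsInChunk = 0;
$writer      = null;

$reader = ReaderEntityFactory::createXLSXReader();
$reader->open($sourceFile);

foreach ($reader->getSheetIterator() as $sheet) {
    foreach ($sheet->getRowIterator() as $row) {
        // Start a new output file every $chunkSize rows
        if ($writer === null || $rowsInChunk >= $chunkSize) {
            if ($writer !== null) {
                $writer->close();
            }
            $chunkIndex++;
            $rowsInChunk = 0;
            $writer = WriterEntityFactory::createXLSXWriter();
            $writer->openToFile("chunk-{$chunkIndex}.xlsx");
        }

        $writer->addRow(WriterEntityFactory::createRowFromArray($row->toArray()));
        $rowsInChunk++;
    }
}

if ($writer !== null) {
    $writer->close();
}
$reader->close();
```

Each chunk can then be imported by a separate request or CLI run, so no single execution has to process the whole file within the time limit.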
Hope that helps!