cfindex of a large document set (35K docs): is there a benefit to setting useColdSearcher to true to avoid the "maxWarmingSearchers exceeded" error? Running the index rebuild from CF Admin would end with no error explanation, and doing a purge and update of the entire directory errored with "maxWarmingSearchers exceeded". I wrote a routine to get all the files and add them individually, with a dynamically increasing delay to let Solr finish each document as the index grew:
<cfset delay = 1000>
<cfdirectory action="list" directory="#dir#files" name="qFiles">
<cfoutput query="qFiles">
    <cfindex action="update"
             collection="myColl"
             type="file"
             key="#dir#files\#qFiles.name#">
    <!--- wait a little longer for each document as the index grows --->
    <cfset sleep(delay + qFiles.currentRow)>
</cfoutput>
This mostly worked, but at some point it would still hit the maxWarmingSearchers error. I ended up also having to log each file as it was indexed and restart the process from the last file added (along with some computation to make the sleep long enough). Does temporarily setting useColdSearcher to true in solrconfig.xml help, and is there some back-door way to set that attribute through the cfindex tag, or do I have to set it manually and then set it back? The log-and-resume routine was roughly like the sketch below.
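A rough sketch of that log-and-resume approach (the log file name and resume logic here are simplified placeholders, not my exact code):

<cfset logFile = "#dir#indexed.log">
<!--- read the names already indexed so they can be skipped on a restart --->
<cfset alreadyIndexed = "">
<cfif fileExists(logFile)>
    <cffile action="read" file="#logFile#" variable="alreadyIndexed">
</cfif>
<cfdirectory action="list" directory="#dir#files" name="qFiles">
<cfoutput query="qFiles">
    <!--- treat the log as a newline-delimited list of finished files --->
    <cfif NOT listFind(alreadyIndexed, qFiles.name, chr(13) & chr(10))>
        <cfindex action="update"
                 collection="myColl"
                 type="file"
                 key="#dir#files\#qFiles.name#">
        <!--- record success before sleeping, so a crash here is resumable --->
        <cffile action="append" file="#logFile#" output="#qFiles.name#">
        <cfset sleep(1000 + qFiles.currentRow)>
    </cfif>
</cfoutput>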
You probably want to pay more attention to your autoCommit settings, as well as adjusting the commit settings of the updates themselves. Unless you're specifying settings in the Solr config to "warm" the caches, using a cold searcher will buy you nothing.
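For context, "warming" here means things like autowarmCount on the caches and newSearcher query listeners in solrconfig.xml; an illustrative example (the sizes and query are placeholders):

<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>

<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">some common query</str></lst>
  </arr>
</listener>

If you have nothing like this configured, a new searcher has no warming work to do, which is why useColdSearcher won't buy you anything.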
From the comments:
It doesn't sound like this will help you. You can increase maxWarmingSearchers, but most likely you need to change how often you're doing commits.
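For reference, both settings live in the <query> section of solrconfig.xml (the value below is just an example):

<query>
  <!-- ... -->
  <useColdSearcher>false</useColdSearcher>
  <maxWarmingSearchers>4</maxWarmingSearchers>
</query>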
Also, keep in mind that only soft commits always open a new searcher; hard commits don't necessarily (see the comments for autoCommit in solrconfig.xml).
In your case, I'd recommend setting openSearcher to false if you're using autoCommit, and tuning the spawning of new searchers by playing with commitWithin when making the update request.
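In solrconfig.xml that combination looks roughly like this (the times are placeholders):

<autoCommit>
  <maxTime>30000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

and commitWithin goes on the update itself, e.g. in an XML update message:

<add commitWithin="10000">
  <doc><field name="id">doc1</field></doc>
</add>

Note that as far as I know cfindex doesn't expose commitWithin directly, so you'd have to post to Solr's update handler yourself to use it.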