In every project I've worked on, there has always been the issue of log files growing too large.
A quick off-the-shelf solution was to use the Log4j RollingFileAppender and set a maximum file size.
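For reference, a minimal sketch of that setup, configured programmatically (the file name, size limit, backup count, and layout pattern below are placeholders, not values from my actual project):

```java
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class LoggingSetup {
    public static void configure() {
        // Roll the log when it reaches 10 MB, keeping at most 5 rolled-over files.
        RollingFileAppender appender = new RollingFileAppender();
        appender.setFile("app.log");          // placeholder path
        appender.setMaxFileSize("10MB");
        appender.setMaxBackupIndex(5);
        appender.setLayout(new PatternLayout("%d %-5p %c - %m%n"));
        appender.activateOptions();           // apply the options set above
        Logger.getRootLogger().addAppender(appender);
    }
}
```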
However, there are situations when the same exception occurs repeatedly, reaching the maximum size very quickly before anybody can intervene manually. In that scenario, because of the rolling policy, you end up losing information about important events that happened just before the exception started.
Can anybody suggest a fix for this issue?
P.S. One thing I can think of is to hold a cache of the exceptions seen so far, so that when the same exception recurs I don't log tons of stack-trace lines. Still, I think this must be a well-known issue and I don't want to reinvent the wheel. A rough sketch of the idea follows.
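Roughly what I mean, as a sketch only (DuplicateExceptionFilter is a made-up class, and the `seen` set would need some eviction or counting policy in a real system):

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.log4j.spi.Filter;
import org.apache.log4j.spi.LoggingEvent;

// Sketch of a Log4j filter that lets an exception's stack trace through
// the first time, then suppresses later events carrying the same exception.
public class DuplicateExceptionFilter extends Filter {
    private final Set<String> seen = new HashSet<String>();

    @Override
    public int decide(LoggingEvent event) {
        if (event.getThrowableInformation() == null) {
            return Filter.NEUTRAL; // no exception attached, let it through
        }
        Throwable t = event.getThrowableInformation().getThrowable();
        String key = t.getClass().getName() + ":" + t.getMessage();
        // First occurrence passes; identical repeats are dropped.
        return seen.add(key) ? Filter.NEUTRAL : Filter.DENY;
    }
}
```

The filter would be attached to the appender with `appender.addFilter(new DuplicateExceptionFilter())`, which is exactly the kind of hand-rolled solution I'd rather avoid if something standard already exists.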
Buy bigger hard drives and set up a batch process to automatically zip up older logs on a regular basis.
(Zip will detect the repeated exception pattern and compress it very effectively.)
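For example, something along these lines could run from cron or the Windows task scheduler once a day (the LogArchiver name and the app.log.N naming pattern are assumptions; adjust them to your rolling scheme):

```java
import java.io.*;
import java.util.zip.GZIPOutputStream;

// Sketch of a batch step: gzip every rolled-over log file (app.log.1, app.log.2, ...)
// in a directory and delete the original once it has been compressed.
public class LogArchiver {
    public static void main(String[] args) throws IOException {
        File dir = new File(args.length > 0 ? args[0] : "logs");
        File[] rolled = dir.listFiles(new FilenameFilter() {
            public boolean accept(File d, String name) {
                // Only rolled files, never the active app.log.
                return name.matches("app\\.log\\.\\d+");
            }
        });
        if (rolled == null) {
            return; // directory missing or not readable
        }
        for (File f : rolled) {
            File gz = new File(f.getPath() + ".gz");
            InputStream in = new FileInputStream(f);
            OutputStream out = new GZIPOutputStream(new FileOutputStream(gz));
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            in.close();
            out.close();
            f.delete(); // keep only the compressed copy
        }
    }
}
```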