We use the following configuration for our application: Spring Boot + the logstash-logback-encoder `LogstashTcpSocketAppender` (described here: https://github.com/logstash/logstash-logback-encoder/blob/master/README.md#tcp-appenders).
Briefly, it is an async appender based on the LMAX Disruptor, with a default buffer size of roughly 8000 log entries. If Logstash becomes unavailable, all pending log entries are held in that buffer. Now imagine your messages are around 1 MB each: the buffer alone would need about 8 GB of heap, which is nonsense. The options I currently have in mind are:

1) Use a sync socket appender, but then logging slows the application down, and if Logstash fails the application stops responding.
2) Limit the buffer size, but then messages can be lost.

This seems like a fairly common situation, so what are the strategies for handling it? Maybe something like dumping the entries to a file and reprocessing them later?
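For reference, option 2 can be expressed in `logback.xml`. A minimal sketch, assuming the appender's `ringBufferSize` property; the destination host/port and the buffer value shown are placeholders, not recommendations:

```xml
<configuration>
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- Hypothetical Logstash endpoint; replace with your own -->
    <destination>logstash.example.com:5000</destination>
    <!-- Cap the Disruptor ring buffer (must be a power of 2).
         1024 entries * ~1 MB messages ≈ ~1 GB worst-case heap instead of ~8 GB.
         When the buffer is full, new events are dropped instead of blocking the app. -->
    <ringBufferSize>1024</ringBufferSize>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```

This bounds memory but accepts the message-loss trade-off described in option 2.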
We decided to use the CloudWatch log driver for our Docker containers to store logs in AWS CloudWatch, and then to read those logs with the logstash-cloudwatch plugin and send them to Elasticsearch. This way there is no impact on the application: it no longer sends logs over TCP and doesn't need an extra in-memory buffer for sending.
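The first half of this setup can be sketched with Docker's built-in `awslogs` logging driver. A minimal docker-compose fragment, assuming the Docker daemon has AWS credentials; the image name, region, log group, and stream are placeholders:

```yaml
version: "3"
services:
  app:
    image: my-spring-boot-app:latest   # hypothetical image name
    logging:
      driver: awslogs                  # ship container stdout/stderr to CloudWatch Logs
      options:
        awslogs-region: eu-west-1      # placeholder region
        awslogs-group: my-app-logs     # placeholder log group (create it beforehand)
        awslogs-stream: app            # placeholder stream name
```

The application simply logs to stdout; the Docker daemon handles delivery to CloudWatch, so a downstream Logstash outage never back-pressures the JVM.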