I have a Logstash pipeline that, for the most part, parses data into JSON well enough to send to Elasticsearch. However, sometimes it does not parse fields very well and produces keys and values that are very strange; most of the time these keys are very long, such as: aaaakgaaaaiaaaaaaaaaabiofy1mb...
I was hoping to remove these fields based on the length of the key produced by the kv filter, say, removing any field whose key is over 30 characters. Although I may improve the Logstash parsing in the future so that these problems won't persist, for now I'd like something like this as a last-ditch sanity check.
Try this filter, which works in two steps (see the sketch after this list):

1. Prefix the kv fields so they can be identified (optional here).
2. Loop over the prefixed fields to apply the condition (here the limit is 30 characters, prefix included).
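A minimal sketch of that configuration, assuming the kv filter reads its default `message` source and using `kv_` as an example prefix:

```
filter {
  # Step 1: prefix every key the kv filter generates so they can be
  # told apart from the rest of the event ("kv_" is an example prefix)
  kv {
    prefix => "kv_"
  }

  # Step 2: iterate over the event and remove any prefixed field whose
  # key is longer than 30 characters, prefix included
  ruby {
    code => "
      event.to_hash.each_key do |key|
        event.remove(key) if key.start_with?('kv_') && key.length > 30
      end
    "
  }
}
```

Note that `event.to_hash` returns a copy of the event data, so removing fields from the event while iterating over that copy is safe.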