When I process multiple types of logs with a grok parser and one of the log lines does not match the filter's pattern, I expect a grok parse failure, but instead I get a grok timeout warning in the Logstash logs.
If it is a grok timeout, why does it take so long? Can I reduce the processing time after which it returns the warning in the logs?
- wrong field types, for instance GREEDYDATA used far too often, with no optimization of the patterns (see the pipeline sketch after this list)
- optional fields or OR-ed alternatives used instead of multiple match patterns
- not measuring performance with the node/pipeline stats (see the stats API example after this list)
- using grok where dissect, csv, kv, etc. would do
- poor memory/CPU resources
- too many log fields
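To illustrate the first, second, and fourth points, here is a minimal pipeline sketch; the log layouts, field names, and patterns are placeholders for illustration, not your actual data. Anchoring patterns with ^ and $, listing several specific patterns in one grok filter, and handing fixed-layout lines to dissect all reduce the backtracking that leads to timeouts.

```
filter {
  # Several anchored, specific patterns tried in order. A trailing GREEDYDATA
  # behind anchored fields is cheap, but a GREEDYDATA in the middle of a
  # pattern forces heavy backtracking when the line does not match.
  grok {
    match => {
      "message" => [
        "^%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{NOTSPACE:logger} %{GREEDYDATA:msg}$",
        "^%{IPORHOST:client} %{WORD:verb} %{URIPATHPARAM:request} %{NUMBER:status:int}$"
      ]
    }
  }

  # Alternative for lines with a fixed delimiter layout: dissect splits on
  # literal separators and is far cheaper than a regex-based grok.
  # dissect {
  #   mapping => { "message" => "%{ts} %{level} %{logger} %{msg}" }
  # }
}
```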
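For the measurement point, the Logstash node stats API reports per-plugin event counts and cumulative duration_in_millis, which shows which filter is actually slow. Host and port below are the defaults and may differ in your setup:

```
# Per-pipeline, per-plugin timings from the monitoring API
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'
```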
I would check in that order. Sometimes it is better to switch to another plugin, if that is possible. The default timeout is 30 seconds, but anything above 3-5 seconds for plain log lines is already a signal that the patterns need optimization (see the sketch below).
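Regarding the timeout itself: a line that simply does not match any pattern is tagged _grokparsefailure almost immediately, while the _groktimeout warning means matching was still running (typically backtracking) when the grok timeout_millis limit, 30000 ms by default, expired. You can lower it so such lines fail faster; the 3000 ms below is an arbitrary example value and the pattern is a placeholder.

```
filter {
  grok {
    match => { "message" => "^%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}$" }
    # Give up on a pathological line after 3 s instead of the default 30 s.
    timeout_millis => 3000
    # Plain non-matches are tagged _grokparsefailure; timed-out lines _groktimeout.
    tag_on_failure => ["_grokparsefailure"]
    tag_on_timeout => "_groktimeout"
  }
}
```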
In your case, it's hard to say more without the sample messages and the grok patterns you use.