Logstash 7.1.1 running on Amazon Linux 2
jdbc_static filter plugin version 1.0.6
Elasticsearch 7.1.1
I have a test stream that produces about 125-150 messages per second, which I enrich using the jdbc_static filter plugin.
This is a logging use case, where the log events are not uniform; that is, not every event will contain a field that would be looked up in the database.
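The relevant lookups look roughly like this (an anonymized sketch; the queries and target names are placeholders, chosen to match the log messages below):

filter {
  jdbc_static {
    # loaders, local_db_objects, and connection settings omitted
    local_lookups => [
      {
        id => "local-table1"
        query => "SELECT name FROM table1 WHERE rsid = :rsid"
        parameters => { "rsid" => "[fields][rsid]" }
        target => "rsid_name"
      },
      {
        id => "local-table2"
        query => "SELECT name FROM table2 WHERE ssid = :ssid"
        parameters => { "ssid" => "[fields][ssid]" }
        target => "ssid_name"
      }
    ]
  }
}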
Thus, there are many messages sent to the log file of this form (anonymized):
[2019-12-30T21:19:38,304][WARN ][logstash.filters.jdbc.lookup] Parameter field not found in event {:lookup_id=>"local-table1", :invalid_parameters=>["[fields][rsid]"]}
[2019-12-30T21:19:38,304][WARN ][logstash.filters.jdbc.lookup] Parameter field not found in event {:lookup_id=>"local-table2", :invalid_parameters=>["[fields][ssid]"]}
Since this produces a lot of noise in the log and consumes more disk space and I/O than necessary, I first disabled these messages dynamically via the logging API:
curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
"logger.logstash.filters.jdbc.lookup" : "ERROR"
}
'
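(The change can be confirmed with the corresponding GET endpoint, which lists the active logger levels:)

curl -XGET 'localhost:9600/_node/logging?pretty'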
This worked as advertised: Logstash continued to run, and log message volume was reduced accordingly.
Then I attempted to use the log4j2.properties file to achieve the same result at startup.
I added this line:
logger.logstash.filters.jdbc.lookup = error
then:
sudo systemctl restart logstash
Logstash ceased producing log output entirely (not even a startup message); the logstash process was running, but it consumed about 20x the CPU time previously observed in steady state.
Commenting out that single line in the log4j2.properties file and restarting Logstash restored it to health.
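One thing I'm unsure of: the log4j2 properties examples I've seen declare a logger with separate name and level keys rather than a single key, along these lines (the jdbclookup token is just an arbitrary label):

logger.jdbclookup.name = logstash.filters.jdbc.lookup
logger.jdbclookup.level = error

I don't know whether the single-key form I used is valid log4j2 syntax, or whether a malformed entry could explain the silent logging and the CPU behavior above.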
I'd appreciate any hints on how I might narrow this down further.
(edit: changed log4j.properties to log4j2.properties)