I am running Logstash 6.2.4 as a Docker container, using the default log4j2.properties settings that come with the container. But on the host, I am seeing every parsed field of every event in the container log file /var/lib/docker/containers//-json.log.
Since my Logstash instance is reading large log files (50 GB per day), limiting Logstash to logging only error messages might improve its performance. Any suggestions?
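In case it's useful to others, here is a sketch of two ways to lower Logstash's own log level, assuming the official docker.elastic.co image (the host path below is a placeholder):

# Option 1: the official image maps environment variables onto
# logstash.yml settings, so LOG_LEVEL becomes log.level
docker run -d -e LOG_LEVEL=error docker.elastic.co/logstash/logstash:6.2.4

# Option 2: mount a logstash.yml that contains "log.level: error"
docker run -d \
  -v /path/on/host/logstash.yml:/usr/share/logstash/config/logstash.yml \
  docker.elastic.co/logstash/logstash:6.2.4

Note this only affects Logstash's own log4j2 logging; event data printed by an stdout output in a pipeline is not log4j2 output and is unaffected.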
Magnus,
Thanks for the tip. Since my Logstash conf file has a different name, I forgot there is a default logstash.conf under /usr/share/logstash/pipeline. It uses stdout in its output section.
I created an empty logstash.conf on the host and mounted it over the default one, the same way I mount my real conf file. Now those log messages are gone, but I am getting an "Elasticsearch Unreachable" error every couple of seconds.
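For reference, the mount looks roughly like this (host paths are placeholders):

# Mask the image's default pipeline file (the one with the stdout output)
# with an empty file, and mount the real pipeline config alongside it
docker run -d \
  -v /path/on/host/empty.conf:/usr/share/logstash/pipeline/logstash.conf \
  -v /path/on/host/my-pipeline.conf:/usr/share/logstash/pipeline/my-pipeline.conf \
  docker.elastic.co/logstash/logstash:6.2.4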
{"log":"[2018-09-22T19:56:22,610][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketTimeout] Read timed out {:url=\u003ehttp://elasticsearch:9200/, :error_message=\u003e"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketTimeout] Read timed out", :error_class=\u003e"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}\n","stream":"stdout","time":"2018-09-22T19:56:22.611169977Z"}
{"log":"[2018-09-22T19:56:22,611][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=\u003e"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketTimeout] Read timed out", :class=\u003e"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=\u003e64}\n","stream":"stdout","time":"2018-09-22T19:56:22.611261814Z"}
{"log":"[2018-09-22T19:56:26,945][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=\u003ehttp://elasticsearch:9200/, :path=\u003e"/"}\n","stream":"stdout","time":"2018-09-22T19:56:26.946223954Z"}
{"log":"[2018-09-22T19:56:27,932][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=\u003e"http://elasticsearch:9200/"}\n","stream":"stdout","time":"2018-09-22T19:56:27.933186825Z"}
Let me know if you would like me to open a new topic for the above error.
After increasing the JVM heap size on the ES nodes, I haven't seen the above error anymore. But Logstash now logs a lot of "retrying failed action with response code: 429", which means Elasticsearch is rejecting bulk requests because its write queue is full. I guess I need to improve the performance of the ES nodes or add more ES nodes.
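For what it's worth, besides scaling ES, the bulk pressure can also be eased from the Logstash side by sending smaller batches. A sketch in logstash.yml (the values are illustrative guesses, not tuned for my setup):

# Smaller bulk requests put less pressure on the ES write queue
pipeline.batch.size: 75   # default is 125 events per worker batch
pipeline.workers: 4       # default is one per CPU core; fewer workers
                          # means fewer concurrent bulk requests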
Thanks for the help. Please close this topic.