How to log only error messages from the Logstash container

I am running Logstash 6.2.4 as a Docker container, using the default log4j2.properties that ships with the image. But on the host, I am seeing every parsed field in the container log file /var/lib/docker/containers/<container-id>/<container-id>-json.log:

{"log":" "received_at" =\u003e "2018-09-21T14:14:29.197Z",\n","stream":"stdout","time":"2018-09-21T14:14:37.221754844Z"}
...

Since my Logstash is reading a large log file (50 GB per day), limiting Logstash to logging only error messages might improve its performance. Any suggestions?

log4j2.properties:

status = error
name = LogstashPropertiesConfig

appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true

rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
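
As far as I can tell, the rootLogger.level above is driven by ${sys:ls.log.level}, which is just Logstash's log.level setting (info by default), so one thing I could try is raising it in logstash.yml instead of editing this file:

# logstash.yml (sketch): only emit Logstash's own log messages at error level or above
log.level: error

(The same setting can also be passed on the command line as --log.level=error.)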

Logging options for the logstash container:

logging:
  driver: "json-file"
  options:
    max-size: "100m"
    max-file: "10"
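
If I read the Logstash Docker docs right, settings such as log.level can also be passed as environment variables, so the container's service definition could carry something like this (the LOG_LEVEL mapping is assumed from those docs):

environment:
  LOG_LEVEL: "error"   # should map to log.level inside the container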

Thanks,

What output plugins are you using?

It outputs to an Elasticsearch container.
Thanks.

Is that really the only output? No stdout output in a config file somewhere?

Magnus,
Thanks for the tip. Since my Logstash conf file has a different name, I forgot there is a default logstash.conf under /usr/share/logstash/pipeline, and it has a stdout output.
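
If I remember right, that bundled pipeline looks roughly like this (the stdout/rubydebug output is what floods the container log):

input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug   # prints every parsed event to the container's stdout
  }
}
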
I created an empty logstash conf on the host and mounted it in place of the default one, the same way I mount my real Logstash conf file. Now those log messages are gone. But I am getting an "Elasticsearch Unreachable" error every couple of seconds:

{"log":"[2018-09-22T19:56:22,610][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketTimeout] Read timed out {:url=\u003ehttp://elasticsearch:9200/, :error_message=\u003e"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketTimeout] Read timed out", :error_class=\u003e"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}\n","stream":"stdout","time":"2018-09-22T19:56:22.611169977Z"}
{"log":"[2018-09-22T19:56:22,611][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=\u003e"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketTimeout] Read timed out", :class=\u003e"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=\u003e64}\n","stream":"stdout","time":"2018-09-22T19:56:22.611261814Z"}
{"log":"[2018-09-22T19:56:26,945][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=\u003ehttp://elasticsearch:9200/, :path=\u003e"/"}\n","stream":"stdout","time":"2018-09-22T19:56:26.946223954Z"}
{"log":"[2018-09-22T19:56:27,932][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=\u003e"http://elasticsearch:9200/"}\n","stream":"stdout","time":"2018-09-22T19:56:27.933186825Z"}

Let me know if you would like me to open a new topic for the above error.

After increasing the JVM heap size of the ES nodes, I haven't seen the above error any more. But Logstash is now showing a lot of "retrying failed action with response code: 429" messages, so I guess I need to improve the performance of the ES nodes or add more of them.
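
For anyone else hitting this: with the ES nodes also running as containers, the usual way to raise the heap is ES_JAVA_OPTS on those containers (the values here are only an example):

environment:
  ES_JAVA_OPTS: "-Xms4g -Xmx4g"   # fixed-size heap for each ES node
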
Thanks for the help. Please close this topic.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.