Logstash can't seem to connect after years of working fine

Hi, I have Logstash 6.4.2 running inside a Docker container.

The Logstash instance had been working fine for over a year. In just the last few hours the errors below started happening. I logged into the container itself, pinged and curled all the nodes, and everything connects fine. The Elasticsearch cluster also looks 100% healthy.

[2020-01-22T15:35:29,062][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Setting newly assigned partitions [app-logs-12]
[2020-01-22T15:35:29,071][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-2, groupId=logstash] Setting newly assigned partitions [app-logs-10, app-logs-11]
[2020-01-22T15:35:29,072][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-1, groupId=logstash] Setting newly assigned partitions [app-logs-6, app-logs-7]
[2020-01-22T15:35:31,173][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>500, :url=>"http://XXXXXX-0001.my.domain:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}
[2020-01-22T15:35:44,657][WARN ][logstash.filters.json    ] Error parsing json {:source=>"log_message", :raw=>"java.io.IOException: Connection reset by peer", :exception=>#<LogStash::Json::ParserError: Unrecognized token 'java': was expecting ('true', 'false' or 'null')
 at [Source: (byte[])"java.io.IOException: Connection reset by peer"; line: 1, column: 6]>}
[2020-01-22T15:35:47,358][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>500, :url=>"http://XXXXXX-0001.my.domain:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}
[2020-01-22T15:36:19,405][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>500, :url=>"http://XXXXXX-0001.my.domain:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}
[2020-01-22T15:36:37,565][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://XXXXXX-0002.my.domain:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://XXXXXX-0002.my.domain:9200/, :error_message=>"Elasticsearch Unreachable: [http://XXXXXX-0002.my.domain:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2020-01-22T15:36:37,592][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://XXXXXX-0002.my.domain:9200/, :path=>"/"}
[2020-01-22T15:36:37,603][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://XXXXXX-0002.my.domain:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2020-01-22T15:36:37,625][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://XXXXXX-0002.my.domain:9200/"}
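For reference, these are roughly the checks I ran from inside the container (the host names are the same placeholders as in the logs above):

# basic reachability from inside the Logstash container -- both nodes respond
ping -c 3 XXXXXX-0001.my.domain
ping -c 3 XXXXXX-0002.my.domain

# hit each Elasticsearch node directly over HTTP -- both return the normal banner JSON
curl -s http://XXXXXX-0001.my.domain:9200/
curl -s http://XXXXXX-0002.my.domain:9200/

# cluster health -- reports healthy for me
curl -s http://XXXXXX-0001.my.domain:9200/_cluster/health?pretty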
