Logstash Elasticsearch HostUnreachableError while Elasticsearch is available

Logstash (7.13.1) fails, and we can't see why:

  • we get "LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError",
    but
  • we can curl from the same pod with the same URL & credentials
  • it works in our other environments: several deployments, each with multiple pods in Kubernetes, across several clusters, connecting to several Elasticsearch instances. Nothing output-related is unique to this Logstash deployment, yet all 6 pods of this deployment have the issue
  • we can even run Logstash from the same pod with the same settings (using a different path.data, and log level = info instead of warn for more output), and it just works; both checks are sketched below
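
For reference, the two manual checks look roughly like this. The URL, credentials, and paths are placeholders, not our real values:

    # reachability check from inside the pod, using the same URL & credentials
    # as the elasticsearch output config
    curl -u 'logstash_writer:changeme' 'https://elasticsearch.example.com:9200/'

    # manual run from the same pod; --path.data must differ from the running
    # instance's, and --log.level info gives more output than our usual warn
    bin/logstash \
      --path.data /tmp/logstash-debug-data \
      --log.level info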

error log:

[ERROR] 2021-07-07 11:51:07.875 [[main]>worker0] elasticsearch - Attempted to send a bulk request but Elasticsearch appears to be unreachable or down {:message=>"Elasticsearch Unreachable: [snippedelasticurlwithcredentials][Manticore::SocketTimeout] Read timed out", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :will_retry_in_seconds=>8}
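
The Manticore::SocketTimeout in the message suggests the HTTP read timed out rather than the connection being refused. For context, the relevant output block looks roughly like this; host and credentials are placeholders (the real values are snipped in the log above), and the `timeout` option, which as far as we can tell defaults to 60 seconds in the plugin, is shown only to illustrate the knob that governs this read timeout:

    output {
      elasticsearch {
        # placeholder host and credentials
        hosts    => ["https://elasticsearch.example.com:9200"]
        user     => "logstash_writer"
        password => "changeme"
        # request timeout in seconds; the Manticore::SocketTimeout above fires
        # when a read exceeds this (plugin default is 60)
        timeout  => 60
      }
    }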

Can this be caused by another error? The only things setting this Logstash deployment apart from the others are the Kafka connection and the data inside the pipeline. Could an issue with either of those cause problems downstream and break the Elasticsearch connection? I'm fairly certain the connection to Elasticsearch itself is not the issue.
