Logstash bulk request to Elasticsearch error - may be due to old indexes

If you see errors like the following in Logstash:
Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
Followed by:
Restored connection to ES instance {:url=>"http://localhost:9200/"}
It may be due to Logstash trying to send old data to Elasticsearch while the target index is missing or closed. Check whether you are using timestamp-based index names.
We hit this issue and resolved it by filtering out old messages with the Ruby filter plugin:
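One way to check for closed indexes is the `_cat/indices` API (e.g. `GET /_cat/indices?h=index,status&format=json`). A minimal Ruby sketch of scanning that response for closed indexes is below; the sample data and index names are hypothetical, so run the request against your own cluster:

```ruby
require "json"

# Hypothetical sample response from GET /_cat/indices?h=index,status&format=json
# (replace with the actual response from your cluster).
sample = <<~JSON
  [
    {"index": "logstash-2019.01.01", "status": "close"},
    {"index": "logstash-2019.06.01", "status": "open"}
  ]
JSON

indices = JSON.parse(sample)
# Keep only indexes whose status is "close" - Logstash bulk writes
# targeting these will fail.
closed = indices.select { |i| i["status"] == "close" }.map { |i| i["index"] }
puts closed   # prints "logstash-2019.01.01"
```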

ruby {
  code => '
    # Drop (cancel) any event whose @timestamp is more than five days
    # (432000 seconds) in the past; such events would target an index
    # that may already be missing or closed.
    if LogStash::Timestamp.new(event.get("@timestamp") + 432000) < LogStash::Timestamp.now
      event.cancel
    end
  '
}
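The cutoff logic itself can be exercised outside the pipeline with plain Ruby `Time` objects (standing in for `LogStash::Timestamp`, which only exists inside Logstash); `stale?` here is a hypothetical helper name:

```ruby
# 432000 seconds = 5 days, matching the filter above.
MAX_AGE_SECONDS = 432_000

# True when the event timestamp is older than the cutoff,
# i.e. when the filter would call event.cancel.
def stale?(event_time, now = Time.now)
  event_time + MAX_AGE_SECONDS < now
end

recent = Time.now - 3600            # one hour old
old    = Time.now - 7 * 24 * 3600   # seven days old

puts stale?(recent)  # prints "false" - event is kept
puts stale?(old)     # prints "true"  - event is dropped
```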
