After ingesting some logs, Logstash reports "Attempted to send a bulk request to elasticsearch but Elasticsearch appears to be unreachable or down"

After processing a certain amount of logs, Logstash displays the following messages:
[2018-01-05T12:57:50,735][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://IP:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>64}
[2018-01-05T12:57:51,823][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://IP:9200/, :path=>"/"}
[2018-01-05T12:57:51,826][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x3b4c3c3e>}
[2018-01-05T12:58:08,067][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://IP:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://IP:9200/, :error_message=>"Elasticsearch Unreachable: [http://IP:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2018-01-05T12:58:08,067][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://IP:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>64}
[2018-01-05T12:58:11,743][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://IP:9200/, :path=>"/"}
[2018-01-05T12:58:11,746][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x43fac989>}

This is a sequence of Logstash losing its connection to the cluster and then reconnecting, over and over. There are a number of things that could cause this.

Can you provide your cluster layout: how many nodes, how many masters, the network speed, and any other relevant information?

My cluster had this issue at one point; the cause was an overloaded network.

Bryan Vest

I have only one node in a single cluster.

If I interpret this correctly, you are running Elasticsearch as a combined master and data node, with Logstash on the same single server.

Is this correct?

Yes, I am running master and data on the same node.

Please advise on how to improve this setup.

The minimum I would recommend is a three-node cluster with all three nodes acting as master and data. In my setup, Logstash runs on a medium-powered VM and processes around 100 million log lines per day without issues.

Something similar to the attached image (MinimumESLogstash).
