Decommission unreachable nodes

Hi,
I'm using Elasticsearch 6.5.4. I have 3 stable (master-eligible) nodes and 4 commodity-hardware nodes that are often unreachable.
Is there an automatic configuration to decommission these nodes when they become unreachable?
Thanks

Could you explain more clearly what you mean by "decommission"? Unreachable nodes are already automatically removed from the cluster. What else do you need Elasticsearch to do?

Elasticsearch is designed to handle failed nodes correctly, but I don't think it's a great idea to run a cluster with nodes that are "often unreachable".
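
If by "decommission" you mean proactively draining shards off a node before taking it out, the nearest built-in mechanism is shard allocation filtering. A minimal sketch, assuming a hypothetical flaky data node at 10.0.0.4 (set the value back to null to "recommission" it):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.4"
  }
}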

Hi David
I'm sorry, I borrowed this term from the Hadoop ecosystem. When a node is considered to be in bad health (CPU overload, RAM pressure, or other problems), it is temporarily removed from the cluster; this is called decommissioning. When the node is healthy again, the reverse process puts it back into the cluster: recommissioning.
I chose commodity hardware because I installed Elasticsearch on employees' PCs to increase the capacity of the cluster, and by their nature these machines are subject to restarts, failures, and CPU/RAM overload.
I understand this is not an ideal choice, but in the absence of resources: "Mater artium necessitas" (necessity is the mother of invention).
As for my request, I thought Elasticsearch was not doing this automatically because Logstash kept searching incessantly for a down node:

[2019-05-31T17:29:54,741][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://*ip of the node:port of the node*/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>32}

At this point I believe this insistence comes from the node still being present in the host list of the output stage (see the note after the config):

output {
  elasticsearch {
    hosts => ["list of hosts, including the unreachable ones"]
    index => "index name"
    document_type => "index name"
    action => "update"
    doc_as_upsert => "true"
    document_id => "%{doc_id}"
  }
}
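
For reference, this is expected behaviour of the logstash-output-elasticsearch plugin: it marks an unreachable host as dead and keeps retrying it periodically rather than ever removing it from the pool, which is why the error above repeats. The retry cadence can be tuned; a minimal sketch of the relevant option (the value shown is an arbitrary assumption, not a recommendation):

output {
  elasticsearch {
    hosts => ["list of hosts"]
    # seconds to wait between attempts to revive a connection to a dead host
    resurrect_delay => 30
  }
}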

I am a neophyte. Thanks a lot!

I see. I don't know enough about Logstash to know if it can be configured to behave better in the presence of flaky nodes. If not, you could perhaps configure Logstash only to write to stable nodes in the Elasticsearch cluster, which would let you make more effective use of Elasticsearch's detection of missing nodes.
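
A minimal sketch of that idea, assuming your three stable nodes are reachable at the hypothetical addresses es-stable-1:9200, es-stable-2:9200 and es-stable-3:9200:

output {
  elasticsearch {
    # list only the stable, master-eligible nodes; the flaky ones are omitted
    hosts => ["es-stable-1:9200", "es-stable-2:9200", "es-stable-3:9200"]
    # default, shown for clarity: don't auto-discover the other cluster nodes
    sniffing => false
    index => "index name"
    action => "update"
    doc_as_upsert => "true"
    document_id => "%{doc_id}"
  }
}

The stable nodes will still route each document to shards on whichever data nodes are currently in the cluster, so you keep the extra capacity of the commodity machines when they happen to be up.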
