I did extensive searching but only found 'shut down the node, then remove
it'. We started with an ELK stack on a single box, then created a fresh
3-node ES cluster and re-pointed Kibana/Logstash to the new cluster. CentOS
6.5, latest ES version (1.3?).
My elasticsearch.yml file only has the following modified on each node:
- Cluster name
- Node name
- Data path
- Heap size (this one in /etc/sysconfig/elasticsearch, not the yml) - a rough sketch of these settings is below
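For anyone following along, this is roughly what those settings look like - the names, path, and heap value here are made-up placeholders, not our real ones:

  # elasticsearch.yml
  cluster.name: my-new-cluster
  node.name: es-node-1
  path.data: /var/lib/elasticsearch/data

  # /etc/sysconfig/elasticsearch
  ES_HEAP_SIZE=8g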
Everything is working 100% fine - I'm just not able to clean up the cluster
node info. Old node name: logstash-hostname.domain.com-13364-2226
Rather weird, since the old node NEVER had the new cluster name
specified...just the default 'elasticsearch'.
Not sure why, but when I view paramedic/HQ/cluster info, I see the
original 'embedded' Elasticsearch node (the one embedded in Logstash) plus
my other three nodes. The cluster sees 4 nodes, but there are no shards
etc. associated with the old node. I'm not seeing any API to remove/clean
up nodes from the ES cluster...any ideas? Perhaps a full cluster shutdown?
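For reference, the stale node shows up the same way if I just list nodes with the cat API (host/port are the defaults, adjust as needed):

  curl 'http://localhost:9200/_cat/nodes?v'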
Hm, might have figured this out. I've been testing the various cluster
health plugins - one nice one is Kopf. It has a 'shutdown node'
option...so out of curiosity I clicked it for the stale node. Success!? The
node is gone...
So I'm thinking that if I'd run the shutdown command from one of the 'good'
cluster nodes, rather than locally on the bad node, this would have cleared
it up, no problem...hope this helps someone.
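My guess is that Kopf's button is just calling the 1.x nodes shutdown API, so the same thing should be doable with a plain curl from any healthy node - <node-id-or-name> being the stale node (the logstash-... one in my case):

  curl -XPOST 'http://localhost:9200/_cluster/nodes/<node-id-or-name>/_shutdown'

That should drop the stale node from the cluster state the same way the Kopf button did - at least that's my read on it.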