Unassigned shards cluster_recovered

Sorry, I don't understand the question.

So how do I delete a node that meets the condition described above?

Sorry, I know that English might not be your native language and I'm trying my best to interpret what you're saying, but it's really not clear how to answer. What do you mean "delete the node"? There's quite a long thread above here so it's not clear what you mean by "the above condition". It'd be useful if you wrote a longer post to bring all the pieces of context for your question into one place, even if this repeats some of the information above.

Hello

I want to know how to delete a node in a way that does not affect the indices assigned anywhere in the cluster.

That happened because we removed the filter.

Sorry for my English, but let me know if anything is still unclear.

Do you mean you want to shut a node down? I don't understand why, but a node is just a normal process on your system so you shut it down like any other process: on Linux, press Ctrl-C if it's running in a console, or send it a signal like SIGTERM if it's not. On Windows, Ctrl-C if it's in a console, or stop the service if it's running as a service.
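For example, on Linux something like this works (a rough sketch; the exact grep pattern depends on how the node was started, and <pid> is a placeholder for the actual process ID):

# find the Elasticsearch process ID
ps aux | grep -i elasticsearch
# ask the node to shut down cleanly
kill -SIGTERM <pid>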

Hello

We are running it in the background, so we know we can kill the process to shut the node down.

But what about its data? Is that also deleted when we shut it down?

If all indices have at least 1 replica configured and these are allocated (which should happen once you changed your incorrect awareness settings) there will still be at least one copy of each shard in the cluster even when you shut down a node. At that point Elasticsearch will look to create another replica in the cluster to bring the total number of shards up to the target number. No data is therefore lost.
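As a sketch (assuming the node answers on localhost:9200 and an index named my_index, both placeholders), you can check and set the replica count like this:

# show primary/replica counts and health for every index
curl -X GET "localhost:9200/_cat/indices?v&h=index,pri,rep,health"
# make sure an index has at least one replica
curl -X PUT "localhost:9200/my_index/_settings" -H 'Content-Type: application/json' -d '{"index.number_of_replicas": 1}'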

What about shutting down the node and deleting its data as well?

The shards on the node are no longer available, but as you have copies elsewhere in the cluster that does not matter. If you run without replicas or Elasticsearch is not able to allocate the configured replicas you may however experience data loss. I would therefore recommend leaving shard allocation to Elasticsearch rather than trying to control it in detail. Ensuring shards are distributed across availability zones is good, but trying to control exactly which nodes they are located on is generally not.
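To verify that copies really are spread across different nodes, something like this works (a sketch assuming the default localhost:9200 endpoint):

# list every shard copy, whether it is a primary (p) or replica (r), and which node holds it
curl -X GET "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,node"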

Hello

Thanks for the reply. I will stop controlling shard allocation and let Elasticsearch handle it automatically, as you explained.

But then why are the shard allocation features available at all if they cause issues?

Shard allocation awareness and shard allocation filtering are useful tools to ensure replicas are distributed across availability zones and racks in larger clusters. It is also used to ensure replicas are not placed on the same host as the primary when you are running multiple nodes per host. It is however important to understand them and set them up correctly in order to get the correct behaviour.
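For reference, allocation awareness is configured with a node attribute plus a cluster-level setting, roughly like this in elasticsearch.yml (the attribute name "zone" and the value "zone_a" are just placeholders):

# tag each node with the zone or rack it runs in
node.attr.zone: zone_a
# spread primary and replica copies across the values of that attribute
cluster.routing.allocation.awareness.attributes: zone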

If you can provide some details about your cluster and how it is deployed we might be able to help suggest settings that provide the resiliency you need.

My cluster has many nodes, and a new node is created whenever a new site has the feature implemented.

So as per your recommendation, I do not need shard filtering, right?

I can’t tell as I do not know the structure of your cluster or the use case.

Can you let me know what information you need?

Specify what you need and I can tell you.

Otherwise, the general picture is that we have many nodes in the cluster and we are currently using shard filtering for the indices on each node.

Hello

I am creating the node like this:

bash -x bin/elasticsearch -Epath.data=/usr/share/elasticsearch/data/node_16 -Epath.logs=/var/log/elasticsearch/node_16 -Enode.name=node_16 -Enode.data=true -Enode.master=false -Enode.ingest=false -Enode.attr.rack=node_16 -Enode.attr.size=big

and the indices also use shard allocation filtering, as I showed you earlier in the thread.

So if I need to remove the shard allocation filtering from the indices, do I also need to remove it from the node?
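For what it's worth, an index-level allocation filter can be removed by setting it to null; a minimal sketch, assuming an index named my_index and a "require" filter on the rack attribute used in the command above (adjust the key if you used include or exclude instead):

# clear the index-level allocation filter so Elasticsearch is free to place the shards itself
curl -X PUT "localhost:9200/my_index/_settings" -H 'Content-Type: application/json' -d '{"index.routing.allocation.require.rack": null}'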
