ElasticSearch Recovery


(Prasad) #1

Hi ES Team,
I have a problem where a lot of my shards are unassigned. Cluster health always shows only around 60-70% of shards active, and the status is RED.
I plan to copy/restore the data from a backup. It will restore the data, but will it fix the shard issue, or will the master reallocate fresh shards within the cluster? If I copy it, will it copy the shards as well?
Please clarify.

Thanks, Prasad
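Before restoring anything, it helps to find out why the shards are unassigned. A minimal diagnostic sketch, assuming the cluster is reachable at localhost:9200 (adjust host/port for your setup); these are standard cat and cluster APIs available in 5.x:

```shell
# List shards together with their state and the reason Elasticsearch
# recorded for leaving them unassigned
curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason' \
  | grep -E 'state|UNASSIGNED'

# Ask the cluster to explain its allocation decision for one
# unassigned shard in detail
curl -s 'localhost:9200/_cluster/allocation/explain?pretty'
```

The `unassigned.reason` column and the allocation explain output usually tell you whether the problem is too many shards for the nodes, failed allocations, or missing data.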


(Jymit Singh Khondhu) #2

Do you have an overallocation of shards? What is your total shard count relative to your index count and cluster size?
If your shards are failing while they are being snapshotted, that will not help with restoration.
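To answer those questions, you can pull the counts straight from the cluster. A small sketch, again assuming localhost:9200:

```shell
# Cluster-wide view: node count, active shard percentage, and how many
# shards are currently unassigned
curl -s 'localhost:9200/_cluster/health?pretty'

# Per-index view: primary and replica counts plus on-disk size, useful
# for spotting indices created with large default shard settings
curl -s 'localhost:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size'
```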


(Prasad) #3

Hi Jymit,
Thanks for the quick response. Please find the details below:

Actually, I am getting a lot of unassigned shards and want to delete them.
Is it possible to delete the unassigned shards without any issue like losing data, and get the cluster back to 100%?
If yes, can you please let me know the query to run?

Thanks, Prasad
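Worth noting: an unassigned shard always belongs to an index, so there is no API to delete "just the unassigned shards". You can either retry allocations that previously failed, or delete the whole index the shards belong to (which loses that index's data). A hedged sketch of both options against localhost:9200; the index name is hypothetical:

```shell
# Retry allocations that failed too many times and were given up on.
# No data is deleted; the cluster simply tries to assign them again.
curl -s -XPOST 'localhost:9200/_cluster/reroute?retry_failed=true'

# If an index is expendable, deleting it removes its unassigned shards
# along with ALL of its data -- only do this for indices you can lose.
curl -s -XDELETE 'localhost:9200/my-old-index'   # hypothetical index name
```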


(Jymit Singh Khondhu) #4

Hi, have a read of this recent blog post, which gives a great explanation of shard allocation and unassignment for both primaries and replicas.

From the excerpt you shared above, you have too many shards on a two-node cluster. You need to review how many indices you really need, and move away from creating indices with large (perhaps default) shard settings. https://www.elastic.co/guide/en/elasticsearch/reference/current/_basic_concepts.html#getting-started-shards-and-replicas

What parameters you should have in place depends on what you are using Elasticsearch for, and how.
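One way to stop new indices from being created with the 5.x defaults (5 primaries, 1 replica) is an index template. A sketch, assuming localhost:9200 and a `logstash-*` naming pattern (both hypothetical for your setup); the 5.x template API uses the `template` field for the name pattern:

```shell
# Apply smaller shard settings to every new index matching the pattern
curl -s -XPUT 'localhost:9200/_template/small_shards' \
  -H 'Content-Type: application/json' -d '
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'
```

This only affects indices created after the template exists; existing indices keep their shard counts and need `_shrink` or a reindex.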


(Prasad) #5

Ok, thanks Jymit. I will have a look and will get back to you if I hit any issues.

Regards, Prasad


(Mark Walkom) #6

You have way too many shards for a cluster of that size; you need to reduce your count.


(Prasad) #7

Thanks Mark for the suggestion. Can you send me the command to run for reducing the count, if it is handy?
Secondly, I used to have an extra 1 reserved and 3 spot instances in the ELK cluster before, but it still had the same effect (health status RED and 60%-70%), so that is the reason I terminated the spot instances and 1 reserved instance, and am now using only 2 reserved instances in the cluster.
Correct me if I am wrong anywhere.
Thanks, Prasad


(Mark Walkom) #8

If you are on 5.x, use the _shrink API; otherwise you need to reindex.
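Shrinking is a two-step operation per index: first block writes and move a copy of every shard onto one node, then shrink into a new index whose primary count divides the original. A sketch following the 5.x shrink docs, assuming localhost:9200; `my_index` and `shrink_node_name` are placeholders for your index and a node in your cluster:

```shell
# 1. Make the source index read-only and require that a copy of every
#    shard be relocated to a single node (a shrink precondition)
curl -s -XPUT 'localhost:9200/my_index/_settings' \
  -H 'Content-Type: application/json' -d '
{
  "settings": {
    "index.routing.allocation.require._name": "shrink_node_name",
    "index.blocks.write": true
  }
}'

# 2. Shrink into a new index with one primary shard. The target shard
#    count must be a factor of the source index's primary count.
curl -s -XPOST 'localhost:9200/my_index/_shrink/my_index_shrunk' \
  -H 'Content-Type: application/json' -d '
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}'
```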


(Prasad) #9

Mark, actually I have more than 700 indices, so can I apply the _shrink API to all of them at the same time?
Secondly, would restoring from my old snapshots be a better option than the _shrink process?

Regards, Prasad


(Jymit Singh Khondhu) #10

@prasad7
What would be the reasoning behind restoring and then shrinking? Wouldn't that still be the same dataset?

Please have a look at our shrink API documentation (https://www.elastic.co/guide/en/elasticsearch/reference/5.1/indices-shrink-index.html#indices-shrink-index) to best understand what you are committing to. You are shrinking at the shard level, as opposed to the index level.


(Prasad) #11

Hi Jymit,

Actually, I have around 350+ indices.
Please let me know if I can _shrink all indices at the same time with one API call.
Secondly, can I restore snapshots between two dates? If yes, please point me to the doc link.

Thanks, Prasad
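On the snapshot question: a snapshot is a point-in-time copy, so there is no "restore between dates" as such; you restore a named snapshot, and if your indices are date-named you can pick out a date range with an index pattern in the restore request. A sketch using the snapshot/restore APIs, assuming localhost:9200; the repository name, snapshot name, and index pattern are all hypothetical:

```shell
# List the snapshots available in a registered repository
curl -s 'localhost:9200/_snapshot/my_backup/_all?pretty'

# Restore only the date-named indices you want from one snapshot
curl -s -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_2017_01_15/_restore' \
  -H 'Content-Type: application/json' -d '
{
  "indices": "logstash-2017.01.1*",
  "ignore_unavailable": true
}'
```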


(Mark Walkom) #12

Don't shrink them all at once; there is I/O overhead to doing this. Do them in stages.
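Doing it in stages can be as simple as looping over a small batch at a time and waiting for each shrunken index to go green before starting the next. A sketch, assuming localhost:9200; the index names and `shrink_node_name` are placeholders:

```shell
# A small batch of indices to shrink in this pass (names hypothetical)
INDICES="logstash-2017.01.01 logstash-2017.01.02 logstash-2017.01.03"

for idx in $INDICES; do
  # Block writes and co-locate a copy of every shard on one node
  curl -s -XPUT "localhost:9200/${idx}/_settings" \
    -H 'Content-Type: application/json' -d '
  {
    "settings": {
      "index.routing.allocation.require._name": "shrink_node_name",
      "index.blocks.write": true
    }
  }'

  # Shrink into a new one-primary index named "<index>-shrunk"
  curl -s -XPOST "localhost:9200/${idx}/_shrink/${idx}-shrunk"

  # Wait for the new index to recover before touching the next one,
  # which keeps the I/O overhead bounded to one shrink at a time
  curl -s "localhost:9200/_cluster/health/${idx}-shrunk?wait_for_status=green&timeout=30m"
done
```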


(system) #13

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.