I had hardware trouble while ES was in a yellow state.
Some shards that were on the failed disk were lost.
It seems I lost some primary shards: because the cluster was already yellow, some shards on the troubled node did not have replicas.
(My number_of_replicas is 1.)
Shard reallocation has finished, but the cluster state stays red.
{
  "cluster_name" : "es-cluster1",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 16,
  "number_of_data_nodes" : 16,
  "active_primary_shards" : 2469,
  "active_shards" : 4938,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 118
}
There should be 2528 primary shards, so 2528 - 2469 = 59 primary shards are missing.
I'll reindex the documents that belonged to the lost shards, but is there any way to put the cluster back into a yellow/green state in the meantime, so that I can keep indexing and searching the existing documents?
I'm not sure if I understood your question correctly, but what you can do is:
delete the indices which have missing shards. Something like:
curl -XDELETE localhost:9200/corrupted_index/
reindex data belonging to those "incomplete" indices
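To see which indices actually have missing shards before deleting anything, you can ask for cluster health at the index level (a quick sketch; adjust host/port for your setup, and the grep variant assumes your ES version already ships the _cat API):

```shell
# Per-index health: indices with "status":"red" are the ones
# with unassigned primary shards
curl -s 'localhost:9200/_cluster/health?level=indices'

# Alternatively, list only the red indices via the _cat API
curl -s 'localhost:9200/_cat/indices' | grep red
```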
Then your cluster state should be back to yellow/green again. Until
then, if you have indices that have missing shards but also allocated
shards, ES will still run your searches on the data you have. If
that's important to you, then you might prefer to do things like this:
reindex data belonging to incomplete indices into new indices with
different names
delete indices with missing shards
add aliases[0] to the new indices with the old index names, so that
searches will run as before
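The alias step might look like the following (the index names logs_new and logs_old are hypothetical placeholders for your reindexed and original index names):

```shell
# Point the old index name at the newly reindexed index,
# so existing searches keep working unchanged
curl -XPOST localhost:9200/_aliases -d '{
  "actions" : [
    { "add" : { "index" : "logs_new", "alias" : "logs_old" } }
  ]
}'
```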
I created a dummy index, manually copied one of its shards to the node where the shards were lost, renamed the shard directories to match the lost ones, started Elasticsearch on that node, and then deleted the dummy index.
ES went back to green.
The documents in the indices remained (except, of course, for the ones in the lost shards).
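For anyone attempting the same workaround, here is a rough sketch of the steps. The paths, cluster name, and index names are examples only; the on-disk data layout varies between ES versions, so verify your own directory structure before copying anything:

```shell
# 1. Create a dummy index so ES lays out fresh, empty shard directories
curl -XPUT localhost:9200/dummy_index/

# 2. Stop Elasticsearch on the affected node, then copy a dummy shard
#    directory in place of the lost one, renaming it to the lost
#    index/shard number (example path; adjust to your installation)
cp -r /var/lib/elasticsearch/es-cluster1/nodes/0/indices/dummy_index/0 \
      /var/lib/elasticsearch/es-cluster1/nodes/0/indices/lost_index/0

# 3. Start Elasticsearch on the node again, then delete the dummy index
curl -XDELETE localhost:9200/dummy_index/
```

Note that this only brings the cluster back to green by giving it empty shards; the documents that lived in the lost shards still have to be reindexed.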