After I restart ES, some shards remain unassigned


(Kramer Li) #1

Hi,

I have an Elasticsearch service running on one node. After I restart ES, some shards remain unassigned.

[sflow@ES01 bin]$ curl -XGET 'ES01:9200/_cat/shards?pretty'
sflow_51452355200 3 p STARTED      1000660 414.3mb 10.79.148.184 ES01 
sflow_51452355200 4 p STARTED      1000123 414.2mb 10.79.148.184 ES01 
sflow_51452355200 1 p STARTED      1000430 414.3mb 10.79.148.184 ES01 
sflow_51452355200 2 p INITIALIZING                 10.79.148.184 ES01 
sflow_51452355200 0 p STARTED       997800 413.3mb 10.79.148.184 ES01 

Below are the logs. I removed some lines to keep them short:

started
[sflow_51452355200][4] loaded data path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/4], state path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/4]
[sflow_51452355200][2] loaded data path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/2], state path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/2]
[sflow_51452355200][1] loaded data path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/1], state path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/1]
[sflow_51452355200][3] loaded data path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/3], state path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/3]
[sflow_51452355200][0] loaded data path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/0], state path [/opt/data/es_data_2_2/sflow/nodes/0/indices/sflow_51452355200/0]
[sflow_51452355200][4] shard state info found: [version [6], primary [true]]
[sflow_51452355200][2] shard state info found: [version [2], primary [true]]
[sflow_51452355200][1] shard state info found: [version [6], primary [true]]
[sflow_51452355200][3] shard state info found: [version [6], primary [true]]
[sflow_51452355200][0] shard state info found: [version [6], primary [true]]
recovered [1] indices into cluster_state
[sflow_51452355200][2] found 1 allocations of [sflow_51452355200][2], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]], highest version: [2]
[sflow_51452355200][2]: allocating [[sflow_51452355200][2], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]]] to [{ES01}{LE2YVQWZQG-adfV3akShKA}{10.79.148.184}{10.79.148.184:9300}] on primary allocation
[sflow_51452355200][3] found 1 allocations of [sflow_51452355200][3], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]], highest version: [6]
[sflow_51452355200][3]: allocating [[sflow_51452355200][3], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]]] to [{ES01}{LE2YVQWZQG-adfV3akShKA}{10.79.148.184}{10.79.148.184:9300}] on primary allocation
[sflow_51452355200][0] found 1 allocations of [sflow_51452355200][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]], highest version: [6]
[sflow_51452355200][0]: allocating [[sflow_51452355200][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]]] to [{ES01}{LE2YVQWZQG-adfV3akShKA}{10.79.148.184}{10.79.148.184:9300}] on primary allocation
[sflow_51452355200][1] found 1 allocations of [sflow_51452355200][1], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]], highest version: [6]
[sflow_51452355200][1]: allocating [[sflow_51452355200][1], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]]] to [{ES01}{LE2YVQWZQG-adfV3akShKA}{10.79.148.184}{10.79.148.184:9300}] on primary allocation
[sflow_51452355200][4] found 1 allocations of [sflow_51452355200][4], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]], highest version: [6]
[sflow_51452355200][4]: throttling allocation [[sflow_51452355200][4], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]]] to [[{ES01}{LE2YVQWZQG-adfV3akShKA}{10.79.148.184}{10.79.148.184:9300}]] on primary allocation
[sflow_51452355200][4] found 1 allocations of [sflow_51452355200][4], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]], highest version: [6]
[sflow_51452355200][4]: allocating [[sflow_51452355200][4], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-03-15T09:36:19.018Z]]] to [{ES01}{LE2YVQWZQG-adfV3akShKA}{10.79.148.184}{10.79.148.184:9300}] on primary allocation
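The log shows shard [4] being throttled and then allocated during recovery. One way to see at a glance which shards have not started yet is to filter the `_cat/shards` output on the state column; a minimal sketch, using two sample lines from the output above in place of a live cluster:

```shell
# Against a live cluster this would be:
#   curl -s 'ES01:9200/_cat/shards' | awk '$4 != "STARTED" { print $1, $2, $4 }'
# Here two sample lines stand in for the curl output.
shards='sflow_51452355200 3 p STARTED      1000660 414.3mb 10.79.148.184 ES01
sflow_51452355200 2 p INITIALIZING                 10.79.148.184 ES01'

# Field 4 is the shard state; print index, shard number, and state
# for anything not yet STARTED.
printf '%s\n' "$shards" | awk '$4 != "STARTED" { print $1, $2, $4 }'
# → sflow_51452355200 2 INITIALIZING
```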

(Mark Walkom) #2

What does _cat/recovery show?


(Kramer Li) #3

Hi Warkolm

I think it is no longer a problem now.
The situation only happens when I import a lot of data into Elasticsearch.

After indexing a lot of data, the shards stay in some state (we could call it "unmerged", I think) for a while.

If I kill and restart ES during that time, the shards need some time to recover.

So it goes back to OK after a few minutes.
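Since recovery just takes a few minutes here, one way to wait for it instead of polling by hand is the cluster health API's `wait_for_status` parameter. A minimal sketch, assuming the node address `ES01:9200` from this thread and requiring a live cluster:

```shell
# Block until the cluster reaches green, or give up after 60 seconds.
# The response's "status" and "initializing_shards" fields show progress.
curl -XGET 'ES01:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty'
```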


(Nik Everett) #4

Transaction log replay, probably.
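If translog replay is the cause, flushing before a planned restart should shrink what has to be replayed. A hedged sketch for the ES 2.x era of this thread, again assuming the `ES01:9200` node (this needs a live cluster; `_flush/synced` also lets idle shards skip replay entirely on restart):

```shell
# Flush so recent operations are committed to Lucene and the translog is small.
curl -XPOST 'ES01:9200/_flush?pretty'

# Then attempt a synced flush so shards with a matching sync id
# can be recovered without replaying the translog.
curl -XPOST 'ES01:9200/_flush/synced?pretty'
```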
