Data lost after full cluster restart


(vpunski) #1

Replication factor 3, local storage; 20 nodes in total: 10 data nodes and
10 super clients.
Please let me know if you need more info.

Health status:
{
  "cluster_name" : "CMWELL_INDEX_PRODUCTION_CLUSTER",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 20,
  "number_of_data_nodes" : 10,
  "active_primary_shards" : 9,
  "active_shards" : 36,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 4
}
State status:
"7" : [ {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 7,
"index" : "fs"
}, {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 7,
"index" : "fs"
}, {
"state" : "UNASSIGNED",
"primary" : false,
"node" : null,
"relocating_node" : null,
"shard" : 7,
"index" : "fs"
}, {
"state" : "UNASSIGNED",
"primary" : true,
"node" : null,
"relocating_node" : null,
"shard" : 7,
"index" : "fs"
} ],
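A red status means at least one primary shard is unassigned. As a quick sanity check, the health JSON above can be inspected programmatically; a minimal sketch (the values are copied from the output above, the script itself is not part of the original report):

```python
import json

# Cluster health as returned by GET /_cluster/health
# (values copied from the output above).
health_json = """
{
  "cluster_name" : "CMWELL_INDEX_PRODUCTION_CLUSTER",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 20,
  "number_of_data_nodes" : 10,
  "active_primary_shards" : 9,
  "active_shards" : 36,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 4
}
"""

health = json.loads(health_json)

# "red" means at least one primary is unassigned; here the 4 unassigned
# shards are 1 primary + 3 replicas, all of shard group [fs][7].
if health["status"] == "red":
    print(f'{health["unassigned_shards"]} unassigned shards in '
          f'{health["cluster_name"]}')
```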


(Shay Banon) #2

Can you gist your config? Also, if you set gateway.local logging to TRACE, it
will print all the allocation information (on the elected master), which may
tell us why those shards are not being allocated.
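For anyone following along: in the 0.x releases, logging categories live in config/logging.yml; a rough sketch of the change being suggested (the exact file layout may vary by version):

```yaml
# config/logging.yml (fragment) -- raise local gateway allocation
# logging to TRACE, so the elected master logs why each shard copy
# was or wasn't allocated during recovery
logger:
  gateway.local: TRACE
```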



(vpunski) #3

From tracing gateway.local, the only message related to the bad 7th shard,
repeated many times, is:
.
.
[2011-07-21 10:53:53,745][INFO ][cluster.service ] [Delphi]
new_master [Delphi][apBvicCZT4mzWZrm6wZ16Q][inet[/10.11.40.238:9300]],
reason: zen-disco-join (elected_as_master)
.
.
[2011-07-21 10:59:57,551][DEBUG][gateway.local ] [Delphi]
[fs][7]: not allocating, number_of_allocated_shards_found [2],
required_number [3]

My config is:


cluster.name: MY_CLUSTER

gateway:
  recover_after_nodes: 8
  recover_after_time: 5m
  expected_nodes: 10

index.compound_format: false
index.refresh_interval: 10s
index.term_index_interval: 30

discovery.zen.ping.unicast:
  hosts: node01:9300,node02:9300,node03:9300




(Shay Banon) #4

It seems like the gateway only finds 2 shards (out of 4) for that shard
group. By default, it wants to find a quorum; you can change that by setting
gateway.local.initial_shards to a value other than quorum, for example 2,
and then it will recover.

Another question is why there are only 2. When you created the index, was it
created, and were all its shards allocated, before the cluster was restarted?
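In elasticsearch.yml, that override would look roughly like this (a sketch; initial_shards also accepts symbolic values such as quorum):

```yaml
# elasticsearch.yml (fragment) -- recover a shard group once 2 of its
# 4 copies are found on disk, instead of the default quorum of 3
gateway:
  local:
    initial_shards: 2
```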

} ],


(Michel Conrad) #5

Could it be that you managed to start two instances of elasticsearch on one
of your servers? In that case the 2 missing shards could have been allocated
to the second running instance, which would explain why elasticsearch
couldn't find the shards after restarting the cluster (starting only one
instance of ES on every server).

Look at your data directory: in the nodes folder there should only be a
directory called 0. If there are multiple directories (0, 1, 2, ...), you
have been starting multiple nodes on a server, and the missing shards may
have been allocated to another node on the same server.
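This check can be scripted; a hedged sketch (the layout assumed below, &lt;data_path&gt;/&lt;cluster_name&gt;/nodes/&lt;ordinal&gt;, matches the 0.x-era defaults, but verify against your own data directory):

```python
import os
import tempfile

def node_ordinals(data_path, cluster_name):
    """Return node ordinal directories under <data_path>/<cluster_name>/nodes.

    More than one entry means more than one node instance has been
    started against this data directory on this server.
    """
    nodes_dir = os.path.join(data_path, cluster_name, "nodes")
    if not os.path.isdir(nodes_dir):
        return []
    return sorted(d for d in os.listdir(nodes_dir) if d.isdigit())

# Demonstrate on a throwaway layout that mimics two instances having
# been started on the same server (directories 0 and 1 under nodes/).
data_path = tempfile.mkdtemp()
for ordinal in ("0", "1"):
    os.makedirs(os.path.join(data_path, "MY_CLUSTER", "nodes", ordinal))

ordinals = node_ordinals(data_path, "MY_CLUSTER")
print(ordinals)  # a healthy single-instance server would show only ['0']
```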



(vpunski) #6

No, there are no multiple instances running on the same node.
Setting gateway.local.initial_shards=2 solved the problem.

In order to summarise, several questions remain:

  1. Is it possible to see the real reason why 2 of the 4 shards weren't
    detected during system start (checksum, bad file length, I/O error,
    etc.)?
  2. Does it mean that only 1 node may have a "bad" shard during system
    start (using the default configuration and replication factor 3 for the
    [fs] index)? If more than one node does, the cluster will never be
    "green"?
  3. Should this parameter be set together with the replication_factor of
    the index, in order to configure not only "runtime recovery" but also
    "system start-up recovery"?
  4. If several indexes with different replication factors exist in the
    system, do we need the initial_shards parameter configured for each one
    separately? The current system-wide parameter may be problematic when
    replication_factor + 1 < initial_shards.



(Shay Banon) #7

On Sun, Jul 24, 2011 at 10:33 AM, vadim vpunski@gmail.com wrote:

No, there are no multiple instances running on the same node.
Setting gateway.local.initial_shards=2 solved the problem.

In order to summarise, several questions remain:

  1. Is it possible to see the real reason why 2 of the 4 shards weren't
    detected during system start (checksum, bad file length, I/O error,
    etc.)?

It should be in the trace logging on the master node when it does the full
recovery (at least which shards it found, and where).

  2. Does it mean that only 1 node may have a "bad" shard during system
    start (using the default configuration and replication factor 3 for the
    [fs] index)? If more than one node does, the cluster will never be
    "green"?

I don't know what you mean by "replication factor", and I don't want to
confuse people. For an index with number_of_replicas set to 2 (meaning 2
"additional" replicas per shard, so 3 copies in total), with the default
requirement that a quorum of those copies be found in order to recover:
yes, the quorum of 3 (a shard and 2 replicas) is 2.
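To make the arithmetic concrete: a quorum over N total copies (1 primary plus the replicas) is floor(N/2) + 1. A small sketch (not from the thread):

```python
def initial_shards_quorum(number_of_replicas):
    """Copies the local gateway waits to find before recovering a shard group.

    Total copies = 1 primary + number_of_replicas; the default 'quorum'
    policy requires a majority of them to be found on disk.
    """
    total_copies = 1 + number_of_replicas
    return total_copies // 2 + 1

# Shay's example: number_of_replicas = 2 -> 3 copies, quorum 2.
print(initial_shards_quorum(2))

# The thread's case: number_of_replicas = 3 -> 4 copies, quorum 3,
# matching "required_number [3]" in the gateway.local DEBUG line.
print(initial_shards_quorum(3))
```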

  3. Should this parameter be set together with the replication_factor of
    the index, in order to configure not only "runtime recovery" but also
    "system start-up recovery"?

Quorum should be good enough, unless you are after something different? I
can add a naming convention for "quorum-1", which can simplify things.

  4. If several indexes with different replication factors exist in the
    system, do we need the initial_shards parameter configured for each one
    separately? The current system-wide parameter may be problematic when
    replication_factor + 1 < initial_shards.

An explicit value for initial_shards can be problematic, yes, for cases
where indices have different numbers of replicas. We can make it an
index-level setting as well, though I think "quorum-1" is good enough.



(Shay Banon) #8

Here are the issues that spawned out of this discussion:

Simple to implement, will be in 0.17.2.


