Elasticsearch issue: cluster red after a node ran out of disk space

Hi all,

I am having an issue with the cluster. After loading 30+ million records
into the system, one of the servers (out of 6) ran out of disk space over
the weekend, and ever since I cannot seem to get it back online. Any help
or suggestions would be appreciated.

GET _cluster/health returns:

{
"cluster_name": "Cluster",
"status": "red",
"timed_out": false,
"number_of_nodes": 6,
"number_of_data_nodes": 4,
"active_primary_shards": 9,
"active_shards": 16,
"relocating_shards": 0,
"initializing_shards": 2,
"unassigned_shards": 2
}

I get the following errors on my primary server:

[2014-02-11 09:52:10,477][WARN ][indices.cluster ] [Server05] [indexdata][1] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [indexdata][1] failed recovery
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: [indexdata][1] failed to open reader on writer
    at org.elasticsearch.index.engine.robin.RobinEngine.start(RobinEngine.java:290)
    at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:660)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)
    ... 3 more
Caused by: java.io.FileNotFoundException: _2bgr2_es090_0.blm
    at org.elasticsearch.index.store.Store$StoreDirectory.openInput(Store.java:456)
    at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat$BloomFilteredFieldsProducer.<init>(BloomFilterPostingsFormat.java:121)
    at org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.fieldsProducer(BloomFilterPostingsFormat.java:101)
    at org.elasticsearch.index.codec.postingsformat.ElasticSearch090PostingsFormat.fieldsProducer(ElasticSearch090PostingsFormat.java:81)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:195)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:244)
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:133)
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:56)
    at org.apache.lucene.index.ReadersAndLiveDocs.getReader(ReadersAndLiveDocs.java:121)
    at org.apache.lucene.index.ReadersAndLiveDocs.getReadOnlyClone(ReadersAndLiveDocs.java:217)
    at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:100)
    at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:379)
    at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:111)
    at org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:89)
    at org.elasticsearch.index.engine.robin.RobinEngine.buildSearchManager(RobinEngine.java:1505)
    at org.elasticsearch.index.engine.robin.RobinEngine.start(RobinEngine.java:280)
    ... 6 more

Thanks,

Chris.


If all your other nodes contain enough replicas of all your indexes (i.e.
you have lost no data), then you can safely take down the bad node, wipe
out whatever data is in its data directory (assuming it is local to the
node), and then join it back to the cluster. If the bad node actually
contained some primary shards with no replicas, then you're probably out of
luck and will need to delete the specific index that contained those shards
(i.e. the index(es) that have unassigned shards that were on the bad node)
and rebuild it.
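
For the second case, deleting an index is a single DELETE call against it,
and "rebuild" then means re-indexing from your source data. A minimal
sketch, using the index name from the failed-recovery log above:

curl -XDELETE "localhost:9200/indexdata"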

Binh,

Thanks so much for your input. How does one determine which node is bad,
and what is the process to delete the specific index / rebuild?


The bad node is the one that ran out of space.
If you have installed ES on Linux using a package (deb/rpm) then the data
is usually under /var/lib/elasticsearch. Just manually delete it and then
rejoin the node to the cluster.
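
As a rough sketch for a deb/rpm install (make sure the node is stopped and
double check the path before deleting anything; service names and data
paths can vary):

sudo service elasticsearch stop
sudo rm -rf /var/lib/elasticsearch/*
sudo service elasticsearch start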

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


Thanks for the feedback, Mark / Binh.

I am not sure if it is a single node that is causing the problem. Querying
_cluster/health/indexdata?level=shards gives me the response below. Is it
still safe to delete the data from the bad node when the shards are in the
state shown below?
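
(For reference, the full call, pretty-printed, was roughly:)

curl "localhost:9200/_cluster/health/indexdata?level=shards&pretty"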

{
  "cluster_name": "Cluster",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 6,
  "number_of_data_nodes": 6,
  "active_primary_shards": 2,
  "active_shards": 3,
  "relocating_shards": 0,
  "initializing_shards": 4,
  "unassigned_shards": 3,
  "indices": {
    "indexdata": {
      "status": "red",
      "number_of_shards": 5,
      "number_of_replicas": 1,
      "active_primary_shards": 2,
      "active_shards": 3,
      "relocating_shards": 0,
      "initializing_shards": 4,
      "unassigned_shards": 3,
      "shards": {
        "0": {
          "status": "yellow",
          "primary_active": true,
          "active_shards": 1,
          "relocating_shards": 0,
          "initializing_shards": 1,
          "unassigned_shards": 0
        },
        "1": {
          "status": "red",
          "primary_active": false,
          "active_shards": 0,
          "relocating_shards": 0,
          "initializing_shards": 1,
          "unassigned_shards": 1
        },
        "2": {
          "status": "green",
          "primary_active": true,
          "active_shards": 2,
          "relocating_shards": 0,
          "initializing_shards": 0,
          "unassigned_shards": 0
        },
        "3": {
          "status": "red",
          "primary_active": false,
          "active_shards": 0,
          "relocating_shards": 0,
          "initializing_shards": 1,
          "unassigned_shards": 1
        },
        "4": {
          "status": "red",
          "primary_active": false,
          "active_shards": 0,
          "relocating_shards": 0,
          "initializing_shards": 1,
          "unassigned_shards": 1
        }
      }
    }
  }
}


Not if your cluster is in a red state; that means you have unassigned
primary shards.

What are you using to monitor things? If you're only using the API then
look at plugins like elastichq, kopf, bigdesk or marvel. They will give you
better insight into what is happening.
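
These are site plugins, so they can usually be installed with the plugin
script that ships with ES. As a sketch (repo names from memory, so double
check them before running):

bin/plugin -install lukas-vlcek/bigdesk
bin/plugin -install lmenezes/elasticsearch-kopf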

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


Chris,

You'll probably need to find out which node contains whichever shards you
think are bad. If you do something like this, you can get a detailed
breakdown of which indexes have which shards on which nodes, along with
their corresponding shard states:

curl "localhost:9200/_cluster/state/routing_table?pretty"


I am using bigdesk and Marvel, which I just installed today. I am running
Elasticsearch 0.90.6 and I am not getting data back from Marvel. I want to
upgrade to the most recent version; however, I want to resolve this issue
first.

Do you know how to assign primary shards?

Thanks,

Chris.


Hi Binh,

That command did not seem to work. I am running 0.90.6; is that endpoint
supported in this version?

$ curl http://server:9200/_cluster/state/routing_table?pretty
{
"error" : "IndexMissingException[[_cluster] missing]",
"status" : 404
}

Thanks,


Chris, you're right, I'm doing it on a newer version. For your case, try:

curl "localhost:9200/_cluster/state?pretty"

You'll get a lot more info but just look under the routing_table and
routing_nodes sections for the details I mentioned before.
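
If the response is too long to scan by eye, a quick grep for the unassigned
entries narrows it down; a rough shell sketch:

curl -s "localhost:9200/_cluster/state?pretty" | grep -A 5 UNASSIGNED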


Forgot to mention, Marvel only works with ES 0.90.9 and later. Just FYI.


Binh, thanks - I plan on upgrading after this issue is resolved. I ran
_cluster/state and these are the results I got back (I removed the mappings
to keep it small). I don't know exactly what I am looking for. I see shard 4
is initializing and unassigned; however, I don't see anything else that I
should be looking at. If you can glean any insight from this information,
it would be much appreciated. Chris.

{
"cluster_name": "Cluster",
"master_node": "ukbZpOk9RjGcD5PCqRXLcA",
"blocks": {},
"nodes": {
"kM1vUt6KRoydpHRZADP-yw": {
"name": "server023",
"transport_address": "inet[/11.35.194.43:9300]",
"attributes": {
"master": "false"
}
},
"bXSlI6iUR6KIQHHOid1bPw": {
"name": "server023",
"transport_address": "inet[/11.35.194.43:9300]",
"attributes": {
"master": "false"
}
},
"cJTSFsjFQ7iIotT6aapjBQ": {
"name": "server027",
"transport_address": "inet[/11.35.194.47:9300]",
"attributes": {
"master": "false"
}
},
"Ej4pX5KdTKmJ7tBwXYvN8w": {
"name": "server024",
"transport_address": "inet[/11.35.194.44:9300]",
"attributes": {
"master": "false"
}
},
"llX7KOpLTyajgLBuMn8y8g": {
"name": "server022",
"transport_address": "inet[/11.35.194.42:9300]",
"attributes": {
"master": "false"
}
},
"x6qG0GMBR8ayKz0bCh86yw": {
"name": "server026",
"transport_address": "inet[/11.35.194.46:9300]",
"attributes": {
"master": "false"
}
},
"ukbZpOk9RjGcD5PCqRXLcA": {
"name": "server025",
"transport_address": "inet[/11.35.194.45:9300]",
"attributes": {
"master": "true"
}
}
},
"metadata": {
"templates": {},
"indices": {
"indexdata": {
"state": "open",
"settings": {
"index.number_of_replicas": "1",
"index.version.created": "900699",
"index.number_of_shards": "5",
"index.uuid": "Fg7O6wSXRWSx2e1UJrLkwg"
},
"aliases": []
},
"kibana-int": {
"state": "open",
"settings": {
"index.version.created": "900699",
"index.number_of_replicas": "1",
"index.uuid": "UZwkJqIMQRaWpP3OkP2vag",
"index.number_of_shards": "5"
},
"aliases": []
}
}
},
"routing_table": {
"indices": {
"indexdata": {
"shards": {
"0": [
{
"state": "STARTED",
"primary": true,
"node": "cJTSFsjFQ7iIotT6aapjBQ",
"relocating_node": null,
"shard": 0,
"index": "indexdata"
},
{
"state": "INITIALIZING",
"primary": false,
"node": "Ej4pX5KdTKmJ7tBwXYvN8w",
"relocating_node": null,
"shard": 0,
"index": "indexdata"
}
],
"1": [
{
"state": "INITIALIZING",
"primary": true,
"node": "bXSlI6iUR6KIQHHOid1bPw",
"relocating_node": null,
"shard": 1,
"index": "indexdata"
},
{
"state": "UNASSIGNED",
"primary": false,
"node": null,
"relocating_node": null,
"shard": 1,
"index": "indexdata"
}
],
"2": [
{
"state": "STARTED",
"primary": false,
"node": "llX7KOpLTyajgLBuMn8y8g",
"relocating_node": null,
"shard": 2,
"index": "indexdata"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 2,
"index": "indexdata"
}
],
"3": [
{
"state": "INITIALIZING",
"primary": true,
"node": "bXSlI6iUR6KIQHHOid1bPw",
"relocating_node": null,
"shard": 3,
"index": "indexdata"
},
{
"state": "UNASSIGNED",
"primary": false,
"node": null,
"relocating_node": null,
"shard": 3,
"index": "indexdata"
}
],
"4": [
{
"state": "INITIALIZING",
"primary": true,
"node": "bXSlI6iUR6KIQHHOid1bPw",
"relocating_node": null,
"shard": 4,
"index": "indexdata"
},
{
"state": "UNASSIGNED",
"primary": false,
"node": null,
"relocating_node": null,
"shard": 4,
"index": "indexdata"
}
]
}
},
"kibana-int": {
"shards": {
"0": [
{
"state": "STARTED",
"primary": false,
"node": "Ej4pX5KdTKmJ7tBwXYvN8w",
"relocating_node": null,
"shard": 0,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 0,
"index": "kibana-int"
}
],
"1": [
{
"state": "STARTED",
"primary": false,
"node": "Ej4pX5KdTKmJ7tBwXYvN8w",
"relocating_node": null,
"shard": 1,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 1,
"index": "kibana-int"
}
],
"2": [
{
"state": "STARTED",
"primary": false,
"node": "cJTSFsjFQ7iIotT6aapjBQ",
"relocating_node": null,
"shard": 2,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 2,
"index": "kibana-int"
}
],
"3": [
{
"state": "STARTED",
"primary": false,
"node": "cJTSFsjFQ7iIotT6aapjBQ",
"relocating_node": null,
"shard": 3,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 3,
"index": "kibana-int"
}
],
"4": [
{
"state": "STARTED",
"primary": false,
"node": "cJTSFsjFQ7iIotT6aapjBQ",
"relocating_node": null,
"shard": 4,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 4,
"index": "kibana-int"
}
]
}
}
}
},
"routing_nodes": {
"unassigned": [
{
"state": "UNASSIGNED",
"primary": false,
"node": null,
"relocating_node": null,
"shard": 1,
"index": "indexdata"
},
{
"state": "UNASSIGNED",
"primary": false,
"node": null,
"relocating_node": null,
"shard": 3,
"index": "indexdata"
},
{
"state": "UNASSIGNED",
"primary": false,
"node": null,
"relocating_node": null,
"shard": 4,
"index": "indexdata"
}
],
"nodes": {
"kM1vUt6KRoydpHRZADP-yw": [],
"bXSlI6iUR6KIQHHOid1bPw": [
{
"state": "INITIALIZING",
"primary": true,
"node": "bXSlI6iUR6KIQHHOid1bPw",
"relocating_node": null,
"shard": 1,
"index": "indexdata"
},
{
"state": "INITIALIZING",
"primary": true,
"node": "bXSlI6iUR6KIQHHOid1bPw",
"relocating_node": null,
"shard": 3,
"index": "indexdata"
},
{
"state": "INITIALIZING",
"primary": true,
"node": "bXSlI6iUR6KIQHHOid1bPw",
"relocating_node": null,
"shard": 4,
"index": "indexdata"
}
],
"cJTSFsjFQ7iIotT6aapjBQ": [
{
"state": "STARTED",
"primary": true,
"node": "cJTSFsjFQ7iIotT6aapjBQ",
"relocating_node": null,
"shard": 0,
"index": "indexdata"
},
{
"state": "STARTED",
"primary": false,
"node": "cJTSFsjFQ7iIotT6aapjBQ",
"relocating_node": null,
"shard": 2,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": false,
"node": "cJTSFsjFQ7iIotT6aapjBQ",
"relocating_node": null,
"shard": 3,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": false,
"node": "cJTSFsjFQ7iIotT6aapjBQ",
"relocating_node": null,
"shard": 4,
"index": "kibana-int"
}
],
"Ej4pX5KdTKmJ7tBwXYvN8w": [
{
"state": "INITIALIZING",
"primary": false,
"node": "Ej4pX5KdTKmJ7tBwXYvN8w",
"relocating_node": null,
"shard": 0,
"index": "indexdata"
},
{
"state": "STARTED",
"primary": false,
"node": "Ej4pX5KdTKmJ7tBwXYvN8w",
"relocating_node": null,
"shard": 0,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": false,
"node": "Ej4pX5KdTKmJ7tBwXYvN8w",
"relocating_node": null,
"shard": 1,
"index": "kibana-int"
}
],
"llX7KOpLTyajgLBuMn8y8g": [
{
"state": "STARTED",
"primary": false,
"node": "llX7KOpLTyajgLBuMn8y8g",
"relocating_node": null,
"shard": 2,
"index": "indexdata"
}
],
"x6qG0GMBR8ayKz0bCh86yw": [],
"ukbZpOk9RjGcD5PCqRXLcA": [
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 2,
"index": "indexdata"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 0,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 1,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 2,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 3,
"index": "kibana-int"
},
{
"state": "STARTED",
"primary": true,
"node": "ukbZpOk9RjGcD5PCqRXLcA",
"relocating_node": null,
"shard": 4,
"index": "kibana-int"
}
]
}
},
"allocations": []
}
