[marvel.agent ] background thread had an uncaught exception ElasticsearchException[failed to flush exporter bulks on today's index

I started with Elasticsearch 1.6.0. To upgrade it to 5.2, I followed the recommendation to upgrade to 2.4.4 first. As of now I am still at 2.4 with Kibana 4.6.4.

I have one node, PerfNode_1, with the data migrated from 1.6.0. The node has been complaining that .marvel-es-1-2017.03.06 is red while the rest of the indices are yellow. After some googling I decided to delete the index and let it be recreated, but once recreated it is still red. Because of that, Marvel is not receiving data either.
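
For reference, the delete was nothing more than a plain index delete along these lines (host taken from my node's address; Marvel recreates the index on its next export):

curl -XDELETE 'http://10.170.98.33:9200/.marvel-es-1-2017.03.06'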

This is my log:
[.marvel-es-1-2017.03.06][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [.marvel-es-1-2017.03.06] containing [166] requests]]]];
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:106)
... 3 more
Caused by: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[0]: index [.marvel-es-1-2017.03.06], type [index_stats], id [AVqlGSdIF4aaLd2WG6c7], message [UnavailableShardsException[[.marvel-es-1-2017.03.06][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [.marvel-es-1-2017.03.06] containing [166] requests]]]
[1]: index [.marvel-es-1-2017.03.06], type [index_stats], id [AVqlGSdIF4aaLd2WG6c8], message [UnavailableShardsException[[.marvel-es-1-2017.03.06][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [.marvel-es-1-2017.03.06] containing [166] requests]]]
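
To see why the primary shard stays unassigned, a shard-level health check against the index is one option (a sketch, assuming the same host and port used elsewhere in this post):

curl -XGET 'http://10.170.98.33:9200/_cluster/health/.marvel-es-1-2017.03.06?level=shards&pretty'
curl -XGET 'http://10.170.98.33:9200/_cat/shards/.marvel-es-1-2017.03.06?v'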

These are my cluster indices:
[tw@ultva02 kibana]$ curl -XGET 10.170.98.33:9200/_cat/indices?pretty
yellow open .marvel-2017.02.08 1 1 11865 0 16.7mb 16.7mb
yellow open .marvel-2017.02.09 1 1 143172 0 191.7mb 191.7mb
yellow open .marvel-2017.02.28 1 1 293865 0 339.6mb 339.6mb
yellow open .marvel-2017.02.06 1 1 119778 0 170.6mb 170.6mb
yellow open .marvel-2017.02.07 1 1 127153 0 174.3mb 174.3mb
yellow open .marvel-2017.02.04 1 1 102794 0 153.2mb 153.2mb
yellow open .marvel-2017.02.26 1 1 280068 0 326.7mb 326.7mb
yellow open .marvel-2017.02.27 1 1 287036 0 334mb 334mb
yellow open .marvel-2017.02.05 1 1 111321 0 162.7mb 162.7mb
yellow open .marvel-2017.02.24 1 1 258614 0 304.5mb 304.5mb
yellow open .marvel-2017.02.02 1 1 1579356 0 737.2mb 737.2mb
yellow open .marvel-2017.02.25 1 1 272785 0 319.3mb 319.3mb
yellow open .marvel-2017.02.03 1 1 92276 0 138.9mb 138.9mb
yellow open .marvel-2017.02.22 1 1 238327 0 242.4mb 242.4mb
yellow open .marvel-2017.02.01 1 1 887400 0 387.4mb 387.4mb
yellow open .marvel-2017.02.23 1 1 250149 0 294.5mb 294.5mb
yellow open .marvel-2017.02.20 1 1 220623 0 208.4mb 208.4mb
yellow open .marvel-2017.02.21 1 1 228373 0 211.9mb 211.9mb
yellow open .marvel-es-1-2017.03.02 1 1 69228 580 22.8mb 22.8mb
yellow open .marvel-es-1-2017.03.03 1 1 372978 944 139.5mb 139.5mb
yellow open .kibana 1 1 2 0 8.9kb 8.9kb
yellow open .marvel-es-data-1 1 1 22 1 2.4mb 2.4mb
yellow open .marvel-kibana 5 1 2 1 20.4kb 20.4kb
yellow open staging 5 1 395164 0 78.5mb 78.5mb
yellow open blotterdata 5 1 3963291 0 981.7mb 981.7mb
yellow open .marvel-2017.02.19 1 1 212739 0 202.1mb 202.1mb
yellow open .marvel-2017.02.17 1 1 197985 0 190.1mb 190.1mb
yellow open .marvel-2017.02.18 1 1 205533 0 196.8mb 196.8mb
yellow open .marvel-2017.02.15 1 1 182193 0 177.6mb 177.6mb
yellow open index 5 1 4 0 12.9kb 12.9kb
yellow open .marvel-2017.02.16 1 1 189990 0 183.3mb 183.3mb
yellow open .marvel-2017.02.13 1 1 166753 0 164.8mb 164.8mb
yellow open .marvel-2017.02.14 1 1 174459 0 171.7mb 171.7mb
yellow open .marvel-2017.02.11 1 1 150836 0 152.4mb 152.4mb
yellow open .marvel-2017.03.01 1 1 303346 0 317.3mb 317.3mb
yellow open .marvel-2017.03.02 1 1 214931 0 185.7mb 185.7mb
yellow open .marvel-2017.02.12 1 1 158835 0 159mb 159mb
red open .marvel-es-1-2017.03.06 1 1
yellow open .marvel-2017.02.10 1 1 148360 0 182.6mb 182.6mb
yellow open report 5 1 133623382 0 21.7gb 21.7gb
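
Only that one index is red. A quick way to list just the unassigned shards (a sketch against the same host):

curl -XGET 'http://10.170.98.33:9200/_cat/shards?v' | grep UNASSIGNED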

This is my elasticsearch.yml. The other node has exactly the same settings except node.name and network.host; the rest are defaults.
cluster.name: elasticsearch_tw
node.name: "PerfNode_1"
node.rack: PerfNode_1
path.data: /home/elasticsearch-1.6.0/data
indices.recovery.max_bytes_per_sec: 500mb
bootstrap.mlockall: true
network.host: 10.170.98.33
cluster.routing.allocation.awareness.attributes: rack
http.cors.enabled: true
http.cors.allow-origin: /.*/
http.cors.allow-credentials: true
path.repo: /home/elasticsearch-1.6.0/data/backup
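
Since allocation awareness is keyed on the rack attribute, it is worth checking what each node actually reports for it; the nodes info API shows the effective settings per node (a sketch, host assumed):

curl -XGET 'http://10.170.98.33:9200/_nodes/settings?pretty'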

I followed this https://github.com/elastic/elasticsearch/issues/16708 and ran allocate on the last index:
curl -XPOST 'http://localhost:9200/_cluster/reroute?pretty' -d '{
  "commands": [
    {
      "allocate": {
        "index": ".marvel-es-data",
        "shard": 0,
        "node": "i-e098e03a",
        "allow_primary": "true"
      }
    }
  ]
}'
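
Note that allow_primary forces an empty primary to be allocated, so any data that was in that shard is lost; for a Marvel index holding only monitoring data that is acceptable. Afterwards the cluster health can be re-checked (same host assumption as the reroute call):

curl -XGET 'http://localhost:9200/_cluster/health?pretty'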

Now my indices are all yellow and my Marvel works again.

Glad you were able to figure it out. FYI, you can go ahead and delete all of the old Marvel indices:

curl -XDELETE 'http://localhost:9200/.marvel-2017*'

They won't be useful to you anymore.
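
If you want to double-check what the wildcard matches before deleting, listing by pattern first is harmless (a sketch):

curl -XGET 'http://localhost:9200/_cat/indices/.marvel-2017*?v'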

Good luck,
Chris

Are the new indices named .marvel-es-1-yyyy.mm.dd? If that's the case, should I delete all the old .marvel-2017* indices instead of migrating them?
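
Listing the Marvel indices by pattern would confirm the naming on my cluster (a sketch against the same node):

curl -XGET 'http://10.170.98.33:9200/_cat/indices/.marvel-es-*?v'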

@Jules_Huang

Exactly! There is no benefit to keeping around your Marvel 1.x indices. Also, given the intent to try to get to 5.x, indices from 1.x are incompatible with 5.x, so deleting them solves that issue as well.

Hope that helps,
Chris


Thanks!
