Errors after installing X-Pack

Yeah, that is a good point.

After I changed it from 3 to 2, I am getting another error:


[2017-01-18T00:15:05,182][INFO ][o.e.x.m.e.Exporters      ] [dev-ore-elasticsearch-data-i-029930af20d] skipping exporter [my_local] as it isn't ready yet
[2017-01-18T00:15:05,182][ERROR][o.e.x.m.AgentService     ] [dev-ore-elasticsearch-data-i-029930af206d] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: exporters are either not ready or faulty
	at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:188) ~[x-pack-5.1.1.jar:5.1.1]
	at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:208) [x-pack-5.1.1.jar:5.1.1]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]

Regarding "exporters are either not ready or faulty": should I define my own exporter? I thought it would use the default one, which is local.

As long as that exception is not happening consistently, it's not a big deal. What it means is that the condition reported in the line above it caused the export to fail (this code is actually cleaner in v5.1.2 and the forthcoming v5.2.0).

[2017-01-18T00:15:05,182][INFO ][o.e.x.m.e.Exporters ] [dev-ore-elasticsearch-data-i-029930af20d] skipping exporter [my_local] as it isn't ready yet

Usually this means that the elected master node has not yet configured the Monitoring templates and pipeline, which should happen by the next pass. If the error only happened once, then that was the case.
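
If you want to verify that yourself, you can check whether the templates and pipeline exist. This is a sketch, assuming the default .monitoring-* template names that x-pack uses; look for the x-pack monitoring pipeline in the second response:

# Monitoring index templates installed by x-pack:
GET _template/.monitoring*

# List all ingest pipelines; the x-pack monitoring one should be among them:
GET _ingest/pipeline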

If it keeps happening, then it generally means that you have not installed x-pack on whatever node happens to be the elected master node, or that xpack.monitoring.enabled is false there (or the exporters are disabled there; there are lots of ways to get into that state). That type of error is also made clearer in the next release.
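
A quick way to check which node that is and what plugins it has, using the standard _cat APIs (a sketch, not commands specific to this cluster):

# Which node is currently the elected master:
GET _cat/master?v

# x-pack should appear in this list for that node:
GET _cat/plugins?v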

The error keeps happening.

I have checked the following on the 3 master-eligible nodes:
1. x-pack has been installed.
2. xpack.monitoring.enabled defaults to true.
3. I'm not sure how to check whether the exporters are enabled (see the snippet below).
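
For item 3, one way to inspect the exporter settings is via the node and cluster settings APIs; this is a sketch using standard endpoints, not something from this thread (the filter_path value is an assumption, drop it to see everything):

# Per-node settings; look for xpack.monitoring.* entries:
GET _nodes/settings?filter_path=nodes.*.settings.xpack

# Dynamic overrides, if any were set:
GET _cluster/settings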

Are there any other cases that could cause it?

And if I change it back to private IPs instead of the ELB, the "skipping exporter [my_local]" message is gone.

I ended up having this error happen, and for the sake of anyone else who may stumble on this page, this was my situation:

After an upgrade, there was no monitoring data and the monitoring indices were red; looking in the logs, I saw entries similar to those above.

It happened while I was performing a rolling upgrade, which required me to disable and re-enable allocation as each node was rotated out. I forgot to re-enable allocation on the last node, hence my monitoring indices were coming up red and I was getting the "failed to flush export bulks" exception.

Here is a quick fix:

# Check the current allocation setting first:
GET _cluster/settings

# Re-enable shard allocation (the actual fix):
PUT _cluster/settings
{
  "persistent": { "cluster.routing.allocation.enable": "all" }
}

# For reference, this is the setting that had been left behind and caused
# the problem; don't leave allocation disabled after the upgrade:
PUT _cluster/settings
{
  "persistent": { "cluster.routing.allocation.enable": "none" }
}
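
After re-enabling allocation, the red monitoring indices should recover; a quick way to confirm, assuming the default .monitoring-* index names:

# Health of the monitoring indices should go yellow/green:
GET _cat/indices/.monitoring*?v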

I tried this, but nothing changed.

The only thing that worked for me was excluding data from the nodes and restarting (see the sketch below).
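
In case it helps, allocation filtering looks roughly like this; the IP is hypothetical, substitute the node you are draining:

# Move shards off a node before restarting it:
PUT _cluster/settings
{
  "transient": { "cluster.routing.allocation.exclude._ip": "10.0.0.1" }
}

# Clear the exclusion once the node is back:
PUT _cluster/settings
{
  "transient": { "cluster.routing.allocation.exclude._ip": null }
}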

I do have ingest nodes but am still getting the same error. Here are some detailed logs.
ES version: 6.4.3

Logged at 6/4/2019 12:03:20 PM:

Caused by: org.elasticsearch.transport.RemoteTransportException: [es-data-hdd05-56f5b5cbd8-hn9rt][10.42.4.25:9300][indices:data/write/bulk[s]]
Caused by: org.elasticsearch.transport.RemoteTransportException: [es-data-hdd05-56f5b5cbd8-hn9rt][10.42.4.25:9300][indices:data/write/bulk[s][p]]
Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of org.elasticsearch.transport.TransportService$7@8beec98 on EsThreadPoolExecutor[name = es-data-hdd05-56f5b5cbd8-hn9rt/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@6494a51f[Running, pool size = 22, active threads = 22, queued tasks = 207, completed tasks = 843592404]]
	at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:48) ~[elasticsearch-6.4.3.jar:6.4.3]
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:832) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365) ~[?:?]
	at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.doExecute(EsThreadPoolExecutor.java:98) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:93) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:661) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.transport.TransportService.access$000(TransportService.java:75) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:131) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:605) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:524) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:512) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.performAction(TransportReplicationAction.java:825) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.performLocalAction(TransportReplicationAction.java:743) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:731) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:169) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.doExecute(TransportReplicationAction.java:97) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:251) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$OperationTransportHandler.messageReceived(TransportReplicationAction.java:243) ~[elasticsearch-6.4.3.jar:6.4.3]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.4.3.jar:6.4.3]
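
If I'm reading this right, the rejection means the write thread pool queue on that data node was full: the queue capacity is 200, and there were already 22 active threads and 207 queued tasks when the bulk request arrived. One way to watch the rejections (a sketch using the standard _cat API, not a command from this thread):

# Per-node write pool activity and rejection counts:
GET _cat/thread_pool/write?v&h=node_name,name,active,queue,rejected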

@akshay_singh2
Please don't resurrect old threads in order to ask a question like this.
Your problem is not the same; please start a new thread.