After I changed it from 3 to 2, I started getting another error:
[2017-01-18T00:15:05,182][INFO ][o.e.x.m.e.Exporters ] [dev-ore-elasticsearch-data-i-029930af20d] skipping exporter [my_local] as it isn't ready yet
[2017-01-18T00:15:05,182][ERROR][o.e.x.m.AgentService ] [dev-ore-elasticsearch-data-i-029930af206d] exception when exporting documents
org.elasticsearch.xpack.monitoring.exporter.ExportException: exporters are either not ready or faulty
at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:188) ~[x-pack-5.1.1.jar:5.1.1]
at org.elasticsearch.xpack.monitoring.AgentService$ExportingWorker.run(AgentService.java:208) [x-pack-5.1.1.jar:5.1.1]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Since the exporters are "either not ready or faulty", should I define my own exporter? I thought it would use the default one, which is local.
As long as that exception is not happening consistently, it's not a big deal. It means that the line above it caused the export to fail (this code is actually cleaner in v5.1.2 and the forthcoming v5.2.0).
[2017-01-18T00:15:05,182][INFO ][o.e.x.m.e.Exporters ] [dev-ore-elasticsearch-data-i-029930af20d] skipping exporter [my_local] as it isn't ready yet
Usually this means that the elected master node has not yet configured the Monitoring templates and pipeline, which should happen by the next pass. If the error only happened once, then that was the case.
If it keeps happening, then it generally means that you have not installed x-pack on whatever node happens to be the elected master node, or xpack.monitoring.enabled is false there (or exporters are disabled, so lots of ways to get there). That type of error is also made clearer in the next release.
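To verify that yourself, here is a quick sketch using the cat APIs (run from the Kibana Console or curl against any node): find the elected master, then confirm x-pack is installed on it.
# Find which node is currently the elected master
GET _cat/master?v
# List installed plugins per node; x-pack should appear for the master node
GET _cat/plugins?v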
I have checked the following on the 3 master-eligible nodes:
1: x-pack has been installed.
2: xpack.monitoring.enabled is left at its default of true.
3: not sure how to check whether the exporters are enabled or not (see the sketch below).
Are there any other cases that could cause it?
Also, if I change it back to private IPs instead of the ELB, the "skipping exporter [my_local]" message goes away.
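For point 3 above, one way to check (a sketch): exporter settings live in each node's elasticsearch.yml, so you can ask the nodes for their settings over the REST API (the optional filter_path parameter just trims the response).
# Show each node's xpack.monitoring settings; an empty response means no
# exporters are explicitly configured, so the default local exporter is used
GET _nodes/settings?filter_path=nodes.*.settings.xpack.monitoring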
I ended up running into this error, and for the sake of anyone else who stumbles on this page, this was my situation:
After an upgrade there was no monitoring data and the monitoring indices were red; looking in the logs, I saw entries similar to the ones above.
It happened while performing a rolling upgrade, which required me to disable/enable allocation as each node was rotated out. I forgot to re-enable allocation on the last node, hence my monitoring indices were coming up red and I was getting the "failed to flush export bulks" exception.
Here is a quick fix:
# Check the current allocation setting
GET _cluster/settings
# Re-enable shard allocation (this is the fix)
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}
# To disable allocation again (e.g. while rotating out the next node)
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
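After re-enabling allocation, a quick way to confirm the monitoring indices recover (a sketch; .monitoring-* is the index pattern X-Pack monitoring writes to in 5.x):
# Health should move from red to yellow/green as shards get allocated
GET _cat/indices/.monitoring-*?v&h=index,health,status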