We have three Elasticsearch boxes, and each of the three runs:
- Master
- Data
- Search Load Balancer
- Kibana
- Logstash
I have been trying to configure Shield, and everything went smoothly; I am even able to access the indices in a secured fashion. But I get a lot of Marvel-related errors in both the master and data node log files.
I have configured the following for Marvel on the search load balancer (SLB) node on all three boxes. Do I need to do the same on the master and data nodes as well?
Note: I don't see any Marvel errors in the SLB log file.
marvel.agent.exporters:
  id1:
    type: http
    host: ["https://IP_ES1:9200","https://IP_ES2:9200","https://IP_ES3:9200"]
    auth:
      username: marvel
      password: marvelPass
    ssl:
      truststore.path: /etc/elasticsearch_search/truststore.jks
      truststore.password: marvelTrustPass
I'm confused: since the SLB is listening for ES_Data on port 9200, it makes sense to configure the above on the SLB. But then how can I avoid the errors on the master and data nodes if I'm not supposed to configure this setting on those two node types?
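For reference, if the answer is that the exporter must be configured on every node, I assume the master and data nodes' elasticsearch.yml would carry an identical block to the one on the SLB (the marvel user, password, and truststore path below are simply the same values from my SLB config; I haven't confirmed this is correct):

```yaml
# Hypothetical: the same Marvel HTTP exporter block, repeated in the
# master/data node elasticsearch.yml (values copied from my SLB config)
marvel.agent.exporters:
  id1:
    type: http
    host: ["https://IP_ES1:9200","https://IP_ES2:9200","https://IP_ES3:9200"]
    auth:
      username: marvel
      password: marvelPass
    ssl:
      truststore.path: /etc/elasticsearch_search/truststore.jks
      truststore.password: marvelTrustPass
```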
The error is:
[2016-08-09 09:12:17,735][ERROR][marvel.agent ] [master_elk_IP_ES2] background thread had an uncaught exception
ElasticsearchException[failed to flush exporter bulks]
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:104)
at org.elasticsearch.marvel.agent.exporter.ExportBulk.close(ExportBulk.java:53)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:201)
at java.lang.Thread.run(Thread.java:745)
Suppressed: ElasticsearchException[failed to flush [default_local] exporter bulk]; nested: ElasticsearchException[unauthenticated request indices:data/write/bulk for user
Someone please help me.