We are extensive users of tribe nodes. We have multiple data centers and we need to federate the clusters.
Up to now we've been staying on the 1.7.x series, as the 2.x series breaks too many things. I am in the process of bringing up a test cluster running ES 2.2.0 and finally got around to testing tribe in 2.2. I found that the tribe node does not pass along plugin config values from the elasticsearch.yml file. This breaks all plugins that need to get non-default configs from this file.
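For context, a stripped-down tribe node elasticsearch.yml looks roughly like the sketch below; the tribe names (dc1, dc2), cluster names, and hosts are made-up placeholders for our real setup:

    cluster.name: tribe-federator
    tribe:
      dc1:
        cluster.name: cluster-dc1
        discovery.zen.ping.unicast.hosts: ["es-dc1-node1:9300"]
      dc2:
        cluster.name: cluster-dc2
        discovery.zen.ping.unicast.hosts: ["es-dc2-node1:9300"]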
Ah... I should have realized that it is the same bug.
ALL of the plugin properties must now be put under the tribe section for them to be passed through!
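In other words, a plugin setting that used to sit at the top level of elasticsearch.yml now has to be repeated under each tribe.<name> block so the internal tribe clients can see it. A rough sketch, with someplugin.some_setting as a made-up placeholder for whatever setting your plugin reads:

    # a top-level value is not passed to the tribe's internal clients
    # someplugin.some_setting: foo

    tribe:
      dc1:
        cluster.name: cluster-dc1
        someplugin.some_setting: foo   # seen by the dc1 client
      dc2:
        cluster.name: cluster-dc2
        someplugin.some_setting: foo   # seen by the dc2 client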
The only thing left is this annoying error in the log, due to the tribe node being a client and one of the downstream clusters happening to have Marvel on it.
[2016-03-02 22:25:10,888][WARN ][cluster.service ] [ela4-app7246] failed to notify ClusterStateListener
java.lang.IllegalStateException: master not available when registering auto-generated license
at org.elasticsearch.license.plugin.core.LicensesService.requestTrialLicense(LicensesService.java:749)
at org.elasticsearch.license.plugin.core.LicensesService.clusterChanged(LicensesService.java:483)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:600)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:762)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Unless you use a build that has the fix for the referenced GitHub issue, I think you'll still need to put in the path.* settings for plugins, even though it's been said "you don't have to".
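The workaround I mean is along these lines; whether the path.* values are needed at the top level, under each tribe.<name> block, or both depends on the build you're running, and the paths below are just examples:

    path.home: /usr/share/elasticsearch
    path.conf: /etc/elasticsearch

    tribe:
      dc1:
        path.home: /usr/share/elasticsearch
      dc2:
        path.home: /usr/share/elasticsearch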
When you have Kibana and/or Marvel running with the tribe node setup, on startup Kibana does not know in which cluster it should create the .kibana (default) index. The link to a post below has a reference link describing what could be done.
The new version of Marvel needs Kibana, and it needs to create .marvel-* indices. The same issue applies here: it needs to know which cluster the daily index should go into. I think that by creating or using a separate "monitoring cluster", following the instructions at the link below, the Marvel issue can be solved. I've not done this, but I think it will help address the issue.
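If you go the separate monitoring-cluster route, the idea (which, again, I haven't tested) is to point the Marvel agent on each downstream cluster at the dedicated cluster via an HTTP exporter, roughly like this; the exporter name and host are placeholders:

    marvel.agent.exporters:
      my_monitoring_cluster:
        type: http
        host: ["http://monitoring-es:9200"]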
Oh, I do have all the path.* settings specified. I built my own jars with added debugging output to verify that ES does make use of path.*.
FYI, it is not Kibana that decides in which cluster to create the .kibana index. It is ES that determines that, via the tribe.on_conflict: prefer_<name> setting. See my blog, http://blog.tinle.org/?p=490
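For reference, that setting goes on the tribe node itself; as I understand it, the value is prefer_ followed by the name of the tribe block you want to win when the same index name exists in more than one downstream cluster (dc1/dc2 here are the placeholder names from earlier):

    tribe:
      on_conflict: prefer_dc1   # on index name conflicts, resolve to the dc1 tribe
      dc1:
        cluster.name: cluster-dc1
      dc2:
        cluster.name: cluster-dc2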
My clusters are all working fine now using ES 2.2.0 with a tribe node. The next thing is to test whether the tribe node has been fixed to work with more than a few dozen downstream clusters. At the moment, the ES 1.x series tribe node falls over with an OOM after a short time once it is connected to more than a certain number of clusters.