Tribe node configuration issues

Hi all,

I'm currently running into some issues with a tribe node configuration on Elasticsearch 1.5.2. I have two existing Elasticsearch 1.5.2 clusters that I'd like to cross-query using a tribe node.

I set up another independent machine running only Elasticsearch 1.5.2, with the following config (names have been changed to protect the children):

    cluster.name: poc-tribe
    node.name: "poc-tribe-node"
    node.master: false
    node.data: false
    path.logs: /var/log/elasticsearch
    network.host: 10.188.94.155
    discovery.zen.ping.multicast.enabled: false
    tribe:
      summary:
        cluster.name: es_poc_site
        discovery:
          zen:
            ping:
              multicast:
                enabled: false
              unicast:
                hosts:
                  - es-mae-site-639807.domain.com
      detail:
        cluster.name: es_poc_err
        discovery:
          zen:
            ping:
              multicast:
                enabled: false
              unicast:
                hosts:
                  - es-poc-err-564481.domain.com
      blocks:
        write: false
        metadata: false
        indices:
          write: traffic*, details*
      on_conflict: prefer_summary


When I start the elasticsearch tribe node, I see the following in the logfile:

    [2016-03-17 10:44:59,071][INFO ][node ] [poc-tribe-node] version[1.5.2], pid[19479], build[62ff986/2015-04-27T09:21:06Z]
    [2016-03-17 10:44:59,071][INFO ][node ] [poc-tribe-node] initializing ...
    [2016-03-17 10:44:59,074][INFO ][plugins ] [poc-tribe-node] loaded [], sites []
    [2016-03-17 10:45:01,137][INFO ][node ] [poc-tribe-node/detail] version[1.5.2], pid[19479], build[62ff986/2015-04-27T09:21:06Z]
    [2016-03-17 10:45:01,137][INFO ][node ] [poc-tribe-node/detail] initializing ...
    [2016-03-17 10:45:01,138][INFO ][plugins ] [poc-tribe-node/detail] loaded [], sites []
    [2016-03-17 10:45:01,912][INFO ][node ] [poc-tribe-node/detail] initialized
    [2016-03-17 10:45:01,912][INFO ][node ] [poc-tribe-node/summary] version[1.5.2], pid[19479], build[62ff986/2015-04-27T09:21:06Z]
    [2016-03-17 10:45:01,913][INFO ][node ] [poc-tribe-node/summary] initializing ...
    [2016-03-17 10:45:01,913][INFO ][plugins ] [poc-tribe-node/summary] loaded [], sites []
    [2016-03-17 10:45:02,632][INFO ][node ] [poc-tribe-node/summary] initialized
    [2016-03-17 10:45:02,645][INFO ][node ] [poc-tribe-node] initialized
    [2016-03-17 10:45:02,645][INFO ][node ] [poc-tribe-node] starting ...
    [2016-03-17 10:45:02,708][INFO ][transport ] [poc-tribe-node] bound_address {inet[/10.188.94.155:9300]}, publish_address {inet[/10.188.94.155:9300]}
    [2016-03-17 10:45:02,717][INFO ][discovery ] [poc-tribe-node] poc-tribe/7tfOYJF3SISSlPWKPsSPow
    [2016-03-17 10:45:02,717][WARN ][discovery ] [poc-tribe-node] waited for 0s and no initial state was set by the discovery
    [2016-03-17 10:45:02,723][INFO ][http ] [poc-tribe-node] bound_address {inet[/10.188.94.155:9200]}, publish_address {inet[/10.188.94.155:9200]}
    [2016-03-17 10:45:02,723][INFO ][node ] [poc-tribe-node/detail] starting ...
    [2016-03-17 10:45:02,738][INFO ][transport ] [poc-tribe-node/detail] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/10.188.94.155:9301]}
    [2016-03-17 10:45:02,743][INFO ][discovery ] [poc-tribe-node/detail] es_poc_err/HKMuEmDaRYa4N_CJ-uei-w
    [2016-03-17 10:45:32,743][WARN ][discovery ] [poc-tribe-node/detail] waited for 30s and no initial state was set by the discovery
    [2016-03-17 10:45:32,743][INFO ][node ] [poc-tribe-node/detail] started
    [2016-03-17 10:45:32,743][INFO ][node ] [poc-tribe-node/summary] starting ...
    [2016-03-17 10:45:32,779][INFO ][transport ] [poc-tribe-node/summary] bound_address {inet[/0:0:0:0:0:0:0:0:9302]}, publish_address {inet[/10.188.94.155:9302]}
    [2016-03-17 10:45:32,780][INFO ][discovery ] [poc-tribe-node/summary] es_poc_site/uQ7uVP3RTIe4Vz4TfsCSLA
    [2016-03-17 10:46:02,780][WARN ][discovery ] [poc-tribe-node/summary] waited for 30s and no initial state was set by the discovery
    [2016-03-17 10:46:02,780][INFO ][node ] [poc-tribe-node/summary] started
    [2016-03-17 10:46:02,780][INFO ][node ] [poc-tribe-node] started
    [2016-03-17 10:46:24,018][DEBUG][action.admin.indices.create] [poc-tribe-node] no known master node, scheduling a retry
    [2016-03-17 10:46:54,020][DEBUG][action.admin.indices.create] [poc-tribe-node] observer: timeout notification from cluster service. timeout setting [30s], time since start [30s]

When I attempt to create the .kibana index on the tribe elasticsearch node, I get:

    user@poc-tribe-348196:~$ curl -XPUT '10.188.94.155:9200/.kibana' -d '{ "index.mapper.dynamic": true }' -v
    * Hostname was NOT found in DNS cache
    *   Trying 10.188.94.155...
    * Connected to 10.188.94.155 (10.188.94.155) port 9200 (#0)
    > PUT /.kibana HTTP/1.1
    > User-Agent: curl/7.35.0
    > Host: 10.188.94.155:9200
    > Accept: */*
    > Content-Length: 32
    > Content-Type: application/x-www-form-urlencoded
    >
    * upload completely sent off: 32 out of 32 bytes
    < HTTP/1.1 503 Service Unavailable
    < Content-Type: application/json; charset=UTF-8
    < Content-Length: 71
    <
    * Connection #0 to host 10.188.94.155 left intact
    {"error":"MasterNotDiscoveredException[waited for [30s]]","status":503}


However, if I query cluster health, I get:

    user@poc-tribe-348196:~$ curl -XGET 10.188.94.155:9200/_cluster/health?pretty
    {
      "cluster_name" : "poc-tribe",
      "status" : "green",
      "timed_out" : false,
      "number_of_nodes" : 1,
      "number_of_data_nodes" : 0,
      "active_primary_shards" : 0,
      "active_shards" : 0,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0,
      "number_of_pending_tasks" : 0
    }

I can also query both of the main elasticsearch clusters that I'd like to federate, with similar results. I'm stuck as to what to do next. I've pored over the Tribe Node documentation for 1.5.x, but I'm not seeing any discrepancy between how I've configured the tribe node elasticsearch instance and what's in the (rather sparse) documentation.

If anyone could point me to some additional documentation, or a better way of troubleshooting this, I'd appreciate it.
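
In case it helps anyone suggest where to look, this is the kind of sanity check I could run from the tribe machine (a rough sketch only; it assumes `nc` and `curl` are available there, and reuses the hostnames from my config above). Since zen unicast discovery goes over the transport port rather than HTTP, I'd want to confirm that each unicast host answers on 9300 and really is an elasticsearch node:

    for h in es-mae-site-639807.domain.com es-poc-err-564481.domain.com; do
      nc -zv "$h" 9300           # default transport port, which zen unicast discovery uses
      curl -s "http://$h:9200/"  # should return elasticsearch's JSON banner, not something else
    done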

Regarding the issue you have with Kibana: you can't create the .kibana index directly through the tribe node, because it's a tribe node :slight_smile: sitting in a cluster that has no master node and no data nodes. Yes, the tribe node is connected to two clusters in this case, but it doesn't know which of those clusters the .kibana index should be written to, if your assumption is that it should write it to one of them. I don't think it works that way. I'm most familiar with ES v1.7.x and v2.1.x, but from the v1.5.x link you included in your post, the behavior looks very similar to the later versions.

There is a discussion on this board related to this issue; the upshot is that you'll need to create the .kibana index manually on one of the clusters that the tribe node is connected to.
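
As a rough sketch only (it reuses the mapping body and the summary cluster's hostname from your own post; point it at whichever cluster you actually want Kibana to store its index in):

    # Create the .kibana index on one of the underlying clusters directly,
    # not via the tribe node, which has no master to accept index creation.
    curl -XPUT 'http://es-mae-site-639807.domain.com:9200/.kibana' -d '{ "index.mapper.dynamic": true }'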

I actually had the tribe elasticsearch node pointed at my Kibana instances rather than at the elasticsearch master nodes. Once I changed that config on the tribe node, everything worked beautifully. It was just me overlooking the fact that I needed to use the elasticsearch nodes, not Kibana.
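
For anyone who runs into the same thing, the change was only in the unicast host lists; a sketch of the corrected part of the tribe config looks roughly like the following, with placeholder hostnames standing in for actual elasticsearch nodes of each cluster (I'm not listing the real ones here):

    tribe:
      summary:
        cluster.name: es_poc_site
        discovery.zen.ping.unicast.hosts:
          - es-summary-node-1.example.com   # an elasticsearch node, not a kibana host
      detail:
        cluster.name: es_poc_err
        discovery.zen.ping.unicast.hosts:
          - es-detail-node-1.example.com    # an elasticsearch node, not a kibana host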

@sfunk:
How did you get rid of the following:

[2016-03-17 10:46:24,018][DEBUG][action.admin.indices.create] [poc-tribe-node] no known master node, scheduling a retry