Kibana on tribe node

Hello,

I'm trying to set up Kibana on a tribe node so that it works across multiple ES clusters. We're running alpha5.

I understand there's a limitation on tribe nodes that prevents any index creation through them, and we have followed the workaround from GitHub issue #3114.

After implementing the workaround by pointing Kibana first at the master node, I managed to get the .kibana index created, and I can see the index just fine from the tribe node. I have also created an index pattern, a visualization, a dashboard, and a saved search. There is no problem whatsoever in Kibana while it points at the master node.
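For reference, the switch is just a matter of changing elasticsearch.url in kibana.yml, roughly like this (the hostnames are placeholders for our actual nodes):

# kibana.yml (hostnames are placeholders)
# step 1: point Kibana at a master-eligible node so .kibana can be created
elasticsearch.url: "http://master-node:9200"
# step 2: once .kibana exists, repoint Kibana at the tribe node
#elasticsearch.url: "http://tribe-node:9200"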

The problem is that after repointing Kibana to the tribe node, I still cannot log in to Kibana, and the following error messages show up in the kibana.stdout log:

{"type":"log","@timestamp":"2016-09-15T06:22:00Z","tags":["status","plugin:xpack_main@5.0.0-alpha5-SNAPSHOT","error"],"pid":15467,"state":"red","message":"Status changed from red to red - Request Timeout after 30000ms","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2016-09-15T06:22:00Z","tags":["status","plugin:graph@5.0.0-alpha5-SNAPSHOT","error"],"pid":15467,"state":"red","message":"Status changed from red to red - Request Timeout after 30000ms","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2016-09-15T06:22:00Z","tags":["status","plugin:reporting@5.0.0-alpha5-SNAPSHOT","error"],"pid":15467,"state":"red","message":"Status changed from red to red - Request Timeout after 30000ms","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2016-09-15T06:22:00Z","tags":["status","plugin:security@5.0.0-alpha5-SNAPSHOT","error"],"pid":15467,"state":"red","message":"Status changed from red to red - Request Timeout after 30000ms","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2016-09-15T06:22:10Z","tags":["error","monitoring-ui"],"pid":15467,"level":"error","message":"Request Timeout after 30000ms","error":{"message":"Request Timeout after 30000ms","name":"Error","stack":"Error: Request Timeout after 30000ms\n    at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:336:15\n    at [object Object].<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:365:7)\n    at Timer.listOnTimeout (timers.js:92:15)"}}

I'm not really sure whether the error has anything to do with the tribe node or whether it's related to something else entirely.

I'd appreciate any input here.

Did you install x-pack on the tribe node too?

Yes, I did. Are we not supposed to?

# ./kibana-plugin list
timelion@5.0.0-alpha5
x-pack@5.0.0-alpha5-SNAPSHOT

I found a couple more error messages in the ES log which seem to indicate that it's trying to create some indices:

[2016-09-19 16:58:33,210][DEBUG][action.admin.indices.create] [dcmarhel6d] timed out while retrying [indices:admin/create] after failure (timeout [1m])
[2016-09-19 16:58:33,210][DEBUG][action.admin.indices.create] [dcmarhel6d] timed out while retrying [indices:admin/create] after failure (timeout [1m])
[2016-09-19 16:58:33,239][INFO ][xpack.security.audit.logfile] [transport] [access_granted] origin_type=[rest], origin_address=[127.0.0.1], principal=[dcma.appservice], action=[indices:data/write/bulk]
[2016-09-19 16:58:33,239][INFO ][xpack.security.audit.logfile] [transport] [access_granted] origin_type=[rest], origin_address=[127.0.0.1], principal=[dcma.appservice], action=[indices:admin/create], indices=[_data]
[2016-09-19 16:58:33,240][DEBUG][action.admin.indices.create] [dcmarhel6d] no known master node, scheduling a retry
[2016-09-19 16:58:33,240][INFO ][xpack.security.audit.logfile] [transport] [access_granted] origin_type=[rest], origin_address=[127.0.0.1], principal=[dcma.appservice], action=[indices:admin/create], indices=[_xpack]
[2016-09-19 16:58:33,240][DEBUG][action.admin.indices.create] [dcmarhel6d] no known master node, scheduling a retry
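My reading of those lines is that x-pack on the tribe node is trying to create its own indices, which fails because a tribe node has no master of its own. If that's the cause, one experiment might be to disable the x-pack pieces that write those indices in the tribe node's elasticsearch.yml. The setting below is an assumption on my part, not a confirmed fix:

# elasticsearch.yml on the tribe node (experimental, not a confirmed fix)
xpack.monitoring.enabled: false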

I tried switching the node to be a data node instead, and everything seemed to work fine, but after reverting it back to a tribe node, the same error appeared again.
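For completeness, the tribe configuration I revert to in elasticsearch.yml looks roughly like this (cluster names and hosts are placeholders for our actual clusters):

# elasticsearch.yml (cluster names and hosts are placeholders)
tribe:
  t1:
    cluster.name: cluster_one
    discovery.zen.ping.unicast.hosts: ["host-a:9300"]
  t2:
    cluster.name: cluster_two
    discovery.zen.ping.unicast.hosts: ["host-b:9300"]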