Kibana: Kibana service started but Kibana does not launch (multiple Kibana instances configuration)

Hi,

I'm learning how to deploy the Elastic Stack on Docker Swarm in global mode. My issue is that the Kibana service starts, but Kibana itself never launches (it is not reachable).

P.S. I have followed this guidance.

My global configuration is:

  • 1 Elasticsearch cluster (3 Elasticsearch nodes with the roles "cdhimstw" and 3 Elasticsearch coordinating-only nodes).
  • 3 Kibana instances.
  • Each Kibana instance is connected to one Elasticsearch coordinating node.
  • Elasticsearch and Kibana are both version 7.10.
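
For context, here is a minimal sketch of how the Kibana service is deployed in the stack; the host path and network name are illustrative, not my exact file:

version: "3.8"

services:
  loganalyticskibana:
    image: docker.elastic.co/kibana/kibana:7.10.2
    deploy:
      # Global mode: one Kibana task on every node in the swarm.
      mode: global
    ports:
      - "5601:5601"
    volumes:
      # Illustrative host path; the kibana.yml below is mounted into each task.
      - /opt/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    networks:
      - elastic

networks:
  elastic:
    driver: overlay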

My Kibana configuration is:

server.port: 5601
server.name: "kibana.${HOSTNAME}"
elasticsearch.hosts: ["http://elasticsearch-escon.${HOSTNAME}:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "elastic"
elasticsearch.requestTimeout: 300000
elasticsearch.shardTimeout: 300000
xpack.apm.enabled: false
xpack.security.enabled: true
xpack.security.encryptionKey: "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
xpack.encryptedSavedObjects.encryptionKey: "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
xpack.reporting.enabled: true
xpack.reporting.encryptionKey: "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"

The logs from Kibana instances 1, 2, and 3 are pretty much the same, so I'm posting the logs from instance 1:

loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:31Z","tags":["info","plugins","taskManager","taskManager"],"pid":7,"message":"TaskManager is identified by the Kibana UUID: e85a029e-2599-4a47-a200-521488582df5"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:32Z","tags":["info","plugins","crossClusterReplication"],"pid":7,"message":"Your basic license does not support crossClusterReplication. Please upgrade your license."}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:32Z","tags":["info","plugins","watcher"],"pid":7,"message":"Your basic license does not support watcher. Please upgrade your license."}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:32Z","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":7,"message":"Starting monitoring stats collection"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:42Z","tags":["listening","info"],"pid":7,"message":"Server running at http://localhost:5601"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:43Z","tags":["info","http","server","Kibana"],"pid":7,"message":"http server running at http://localhost:5601"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:45Z","tags":["warning","plugins","reporting"],"pid":7,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:47Z","tags":["error","elasticsearch","data"],"pid":7,"message":"[version_conflict_engine_exception]: [canvas-workpad-template:workpad-template-061d7868-2b4e-4dc8-8bf7-3772b52926e5]: version conflict, document already exists (current version [1])"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:17:48Z","tags":["error","elasticsearch","data"],"pid":7,"message":"[version_conflict_engine_exception]: [config:7.10.2]: version conflict, document already exists (current version [1])"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:18:04Z","tags":["warning","plugins","usageCollection","collector-set"],"pid":7,"message":"{ Error: [search_phase_execution_exception] all shards failed\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/lodash/lodash.js:4949:19)\n at IncomingMessage.emit (events.js:203:15)\n at endReadableNT (_stream_readable.js:1145:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 503,\n displayName: 'ServiceUnavailable',\n message: '[search_phase_execution_exception] all shards failed',\n path: '/.monitoring-kibana-6-%2C.monitoring-kibana-7-/_search',\n query:\n { size: 0,\n ignore_unavailable: true,\n filter_path: 'aggregations.uuids.buckets' },\n body:\n { error:\n { root_cause: ,\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: },\n status: 503 },\n statusCode: 503,\n response:\n '{"error":{"root_cause":,"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":},"status":503}',\n toString: [Function],\n toJSON: [Function] }"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:18:04Z","tags":["warning","plugins","usageCollection","collector-set"],"pid":7,"message":"Unable to fetch data from monitoring collector"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:18:10Z","tags":["error","elasticsearch","data"],"pid":7,"message":"[version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [4])"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:18:10Z","tags":["error","elasticsearch","data"],"pid":7,"message":"[version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [4])"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:18:10Z","tags":["error","elasticsearch","data"],"pid":7,"message":"[version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [4])"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:18:10Z","tags":["error","elasticsearch","data"],"pid":7,"message":"[version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [4])"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:18:10Z","tags":["error","elasticsearch","data"],"pid":7,"message":"[version_conflict_engine_exception]: [config:7.10.2]: version conflict, document already exists (current version [1])"}
loganalyticsswarm2_loganalyticskibana.0.xilut484wv9l@loganalytics1 | {"type":"log","@timestamp":"2021-01-24T05:18:16Z","tags":["error","elasticsearch","data"],"pid":7,"message":"[version_conflict_engine_exception]: [space:default]: version conflict, document already exists (current version [1])"}

Any suggestions or solutions for this issue?

Regards,
Ade

What is the current state of Elasticsearch?

Health status is green and all shards are started.

For the love of God. What I had missed was that Kibana's listening address was left at its default. All I needed to do was add server.host: "0.0.0.0" to Kibana's configuration and boom, it works as I planned. Thank you for your response!
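
For anyone hitting the same thing, the only change needed in the kibana.yml posted above is the listening address; everything else stays as posted:

# Bind Kibana to all interfaces instead of the default localhost-only address,
# so it is reachable from outside the container.
server.host: "0.0.0.0"
server.port: 5601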
