Failed to start Kibana after adding another host to elasticsearch.hosts

Hello,

I am using CentOS 7, Kibana 7.6, and Elasticsearch 7.6.

I have a problem with my Kibana: it won't start after I added one more host to elasticsearch.hosts in the Kibana config. Before that I had only two Elasticsearch hosts in the config, and I can't find any specific error in my configuration.

However, after I rolled back the configuration and removed the new Elasticsearch host, it works fine again.

Does anyone know what happened, or can you point me to a reference for this?

Here is the error from systemctl status kibana:

● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2020-04-13 22:25:53 WIB; 11h ago
Process: 4798 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 4798 (code=exited, status=1/FAILURE)

Apr 13 22:25:50 monitoring systemd[1]: Unit kibana.service entered failed state.
Apr 13 22:25:50 monitoring systemd[1]: kibana.service failed.
Apr 13 22:25:53 monitoring systemd[1]: kibana.service holdoff time over, scheduling restart.
Apr 13 22:25:53 monitoring systemd[1]: Stopped Kibana.
Apr 13 22:25:53 monitoring systemd[1]: start request repeated too quickly for kibana.service
Apr 13 22:25:53 monitoring systemd[1]: Failed to start Kibana.
Apr 13 22:25:53 monitoring systemd[1]: Unit kibana.service entered failed state.
Apr 13 22:25:53 monitoring systemd[1]: kibana.service failed.

Here is my kibana.yml:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "monitoring"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
server.name: "ASDP-MONITORING"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://100.XXX.XXX.XXX:9200", "http://100.XXX.XXX.XXX:9200", "http://100.XXX.XXX.XXX:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
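
For reference, a quick way to sanity-check the three hosts listed in elasticsearch.hosts above (the IPs below are placeholders for the redacted addresses) is to confirm that each one answers on port 9200 and reports the same cluster, for example:

# Placeholder IPs; substitute the three hosts from elasticsearch.hosts.
# Every host should respond and report the same cluster_name and cluster_uuid.
for host in 100.xxx.xxx.101 100.xxx.xxx.102 100.xxx.xxx.103; do
  curl -s "http://$host:9200/?pretty" | grep -E '"cluster_name"|"cluster_uuid"'
done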

Here is the log from journalctl -fu kibana.service:

-- Logs begin at Mon 2020-04-13 18:23:51 WIB. --
Apr 13 22:25:49 monitoring kibana[4798]: FATAL  [index_not_found_exception] no such index [.kibana_task_manager], with { resource.type="index_or_alias" & resource.id=".kibana_task_manager" & index_uuid="_na_" & index=".kibana_task_manager" } :: {"path":"/.kibana_task_manager/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"graph-workspace\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.graph-workspace\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"space\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.space\":\"6.6.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"map\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.map\":\"7.6.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"canvas-workpad\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.canvas-workpad\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"task\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.task\":\"7.6.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"7.6.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"7.4.2\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"dashboard\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.dashboard\":\"7.3.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"search\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.search\":\"7.4.0\"}}}}]}}]}}}","statusCode":404,"response":"{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [.kibana_task_manager]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\".kibana_task_manager\",\"index_uuid\":\"_na_\",\"index\":\".kibana_task_manager\"}],\"type\":\"index_not_found_exception\",\"reason\":\"no such index [.kibana_task_manager]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\".kibana_task_manager\",\"index_uuid\":\"_na_\",\"index\":\".kibana_task_manager\"},\"status\":404}"}
Apr 13 22:25:50 monitoring systemd[1]: kibana.service: main process exited, code=exited, status=1/FAILURE
Apr 13 22:25:50 monitoring systemd[1]: Unit kibana.service entered failed state.
Apr 13 22:25:50 monitoring systemd[1]: kibana.service failed.
Apr 13 22:25:53 monitoring systemd[1]: kibana.service holdoff time over, scheduling restart.
Apr 13 22:25:53 monitoring systemd[1]: Stopped Kibana.
Apr 13 22:25:53 monitoring systemd[1]: start request repeated too quickly for kibana.service
Apr 13 22:25:53 monitoring systemd[1]: Failed to start Kibana.
Apr 13 22:25:53 monitoring systemd[1]: Unit kibana.service entered failed state.
Apr 13 22:25:53 monitoring systemd[1]: kibana.service failed.
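
The FATAL line above complains that the .kibana_task_manager index cannot be found. One way to check which Kibana system indices and aliases actually exist on the cluster (the host below is a placeholder) is something like:

# Placeholder host; point this at any node from elasticsearch.hosts.
curl -s "http://100.xxx.xxx.xxx:9200/_cat/indices/.kibana*?v"
curl -s "http://100.xxx.xxx.xxx:9200/_cat/aliases/.kibana*?v"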


Have you set discovery.seed_hosts when adding a node to a cluster running on multiple machines, so that the new node can discover the rest of its cluster? (https://www.elastic.co/guide/en/elasticsearch/reference/current/discovery-settings.html#unicast.hosts)

When you start a brand new Elasticsearch cluster for the very first time, there is a cluster bootstrapping step, which determines the set of master-eligible nodes whose votes are counted in the very first election. In development mode, with no discovery settings configured, this step is automatically performed by the nodes themselves. As this auto-bootstrapping is inherently unsafe, when you start a brand new cluster in production mode, you must explicitly list the master-eligible nodes whose votes should be counted in the very first election. This list is set using the cluster.initial_master_nodes setting. You should not use this setting when restarting a cluster or adding a new node to an existing cluster.

I think this is the issue you are facing. Let us know if not, and we can debug further.
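
As a quick check that discovery and bootstrapping actually worked, you could ask any node which nodes it currently sees (the host below is a placeholder); all of your nodes should be listed, with one marked as the elected master:

# Placeholder host; run against any Elasticsearch node in the cluster.
# The master column shows * next to the elected master node.
curl -s "http://100.xxx.xxx.xxx:9200/_cat/nodes?v&h=ip,node.role,master,name"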

Thanks
Rashmi

Thanks for your response, Rashmi.

Yes, of course, I have already set up discovery.seed_hosts and added cluster.initial_master_nodes in elasticsearch.yml.

Here is my elasticsearch.yml for your information:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: ASDP-MONITORING
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: monitoring
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 100.xxx.xxx.xxx
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["100.xxx.xx.xxx", "100.xxx.xxx.xxx", "100.xxx.xxx.xxx"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["100.xxx.xxx.xxx", "100.xxx.xxx.xxx"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

As I said before, I did some checks to make sure whether my configuration works or not: I rolled back to my old configuration, just deleting the new host from elasticsearch.yml and kibana.yml, and it works fine, with no errors and Kibana showing up in the browser. Do you recognize this problem?

Oh, and some logs show up after I restart Kibana with systemctl restart kibana, but these are only temporary logs from Kibana; after I wait 5-10 minutes, the logs disappear.

Here are the logs from systemctl status kibana.service:

Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["info","plugins","bfetch"],"pid":7196,"message":"Setting up plugin"}
Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["info","savedobjects-service"],"pid":7196,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["info","savedobjects-service"],"pid":7196,"message":"Starting saved objects migrations"}
Apr 14 13:59:33 monitoring kibana[7196]: Could not create APM Agent configuration: [resource_already_exists_exception] index [.apm-agent-configuration/UpEDn5W2R6GS2ZEhdqFWJw] already exists, with { index_uuid="UpEDn5W2R6GS2ZEhdqFWJw" & index=".apm-agent-configuration" }
Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["info","savedobjects-service"],"pid":7196,"message":"Creating index .kibana_task_manager_2."}
Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["warning","savedobjects-service"],"pid":7196,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_2/3IQS0ZeiQSSRe-EhoF-SJQ] already exists, with { index_uuid=\"3IQS0ZeiQSSRe-EhoF-SJQ\" & index=\".kibana_task_manager_2\" }"}
Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["warning","savedobjects-service"],"pid":7196,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_2 and restarting Kibana."}
Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["info","savedobjects-service"],"pid":7196,"message":"Creating index .kibana_1."}
Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["warning","savedobjects-service"],"pid":7196,"message":"Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_1/PyCKWCr-QLGBoDAJtXIEfw] already exists, with { index_uuid=\"PyCKWCr-QLGBoDAJtXIEfw\" & index=\".kibana_1\" }"}
Apr 14 13:59:33 monitoring kibana[7196]: {"type":"log","@timestamp":"2020-04-14T06:59:33Z","tags":["warning","savedobjects-service"],"pid":7196,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana."}
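
If no other Kibana instance really is running a migration, the warning above suggests getting past it by deleting the half-created migration index and restarting Kibana. A sketch of that (placeholder host, and double-check the index name against your own logs first) would be:

# Placeholder host; only run this if no other Kibana instance is migrating.
curl -s -X DELETE "http://100.xxx.xxx.xxx:9200/.kibana_task_manager_2"
sudo systemctl restart kibana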

@jbudz - what could be the issue here?

Thanks
Rashmi
