I have ElasticSearch and Kibana 6.2.3 installed on Ubuntu 16.04
ElasticSearch is working, I can see my indexes and data via the ReST API.
When I try to start Kibana as a service, I get the following entry repeated over and over again (with a different pid each time) as the service restarts every couple of seconds:
{"type":"log","@timestamp":"2018-04-23T19:45:29Z","tags":["status","plugin:kibana@6.2.3","info"],"pid":6855,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
If I start Kibana directly using:
/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml
it starts fine.
It's creating and updating the log file in /var/log/kibana/kibana.log, so it's not permissions on that file or directory, as I've seen suggested elsewhere. What else could it be?
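For reference, the restart loop itself can be watched from the systemd journal; these are standard systemd commands, nothing Kibana-specific:
# Show the most recent entries for the unit, including exit codes
sudo journalctl -u kibana.service --no-pager -n 50
# Or follow the journal live while the service cycles
sudo journalctl -u kibana.service -f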
In my Mac installation, that's where the config directory is by default. It must be finding a config file somewhere, because when I delete that directory and boot up Kibana 6.2.3, it fails immediately.
The question is: where is it picking up your kibana.yml? Assuming it's different from the config you point to directly, then something in that file is probably breaking things.
Do you know of any other kibana.yml files on your system?
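One quick way to check (assuming a standard filesystem layout) is something like:
# Look for any kibana.yml anywhere on disk
sudo find / -name kibana.yml -type f 2>/dev/null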
Here's my /etc/kibana/kibana.yml:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "https://localhost:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "admin"
#elasticsearch.password: "hidden"
elasticsearch.username: "kibanaserver"
elasticsearch.password: "hidden"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: true
#server.ssl.certificate: /etc/elasticsearch/kirk-key.pem
#server.ssl.key: /etc/elasticsearch/kirk.pem
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
elasticsearch.ssl.certificate: /etc/elasticsearch/kirk.pem
elasticsearch.ssl.key: /etc/elasticsearch/kirk-key.pem
searchguard.allow_client_certificates: true
elasticsearch.requestHeadersWhitelist: [ "Authorization", "sgtenant" ]
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/etc/elasticsearch/root-ca.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
elasticsearch.ssl.verificationMode: none
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you to specify a file where Kibana stores log output.
logging.dest: /var/log/kibana/kibana.log
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: true
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
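Worth noting: since logging.dest points at /var/log/kibana/kibana.log, the actual crash reason under systemd may only show up in that file rather than in the journal. Tailing it while the service cycles should catch it, e.g.:
# In one terminal, watch the Kibana log
sudo tail -f /var/log/kibana/kibana.log
# In another, restart the service and watch for errors
sudo systemctl restart kibana.service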
And here's my /etc/elasticsearch/elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# X-Pack
#xpack.security.enabled: false
#thread_pool.search.type: fixed
thread_pool.search.size: 200
thread_pool.search.queue_size: 1000
network.bind_host: 0.0.0.0
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: leadent-log-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
node.master: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
transport.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["46.101.57.252", "206.189.22.210"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
######## Start Search Guard Demo Configuration ########
# WARNING: revise all the lines below before you go into production
searchguard.ssl.transport.pemcert_filepath: esnode.pem
searchguard.ssl.transport.pemkey_filepath: esnode-key.pem
searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: esnode.pem
searchguard.ssl.http.pemkey_filepath: esnode-key.pem
searchguard.ssl.http.pemtrustedcas_filepath: root-ca.pem
searchguard.allow_unsafe_democertificates: true
searchguard.allow_default_init_sgindex: true
searchguard.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test, C=de
searchguard.audit.type: internal_elasticsearch
searchguard.enable_snapshot_restore_privilege: true
searchguard.check_snapshot_restore_write_privileges: true
searchguard.restapi.roles_enabled: ["sg_all_access"]
discovery.zen.minimum_master_nodes: 1
node.max_local_storage_nodes: 3
######## End Search Guard Demo Configuration ########
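Given that Search Guard has HTTPS enabled on 9200 and kibana.yml points at https://localhost:9200, it may be worth confirming that the kibanaserver user can reach Elasticsearch the same way Kibana would (the password below is a placeholder for whatever is actually in kibana.yml):
# -k skips certificate verification, matching elasticsearch.ssl.verificationMode: none
curl -k -u kibanaserver:<password> https://localhost:9200/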
I followed the instructions here, except I specified 6.2.3 since that's my ES version: https://www.elastic.co/guide/en/kibana/current/deb.html
i.e.
sudo apt-get update && sudo apt-get install kibana=6.2.3
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
sudo systemctl start kibana.service
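To double-check the result of those steps, something like this should confirm the installed version and that the unit is enabled:
# Confirm the package version that actually got installed
dpkg -l kibana
# Confirm systemd has the unit enabled
systemctl is-enabled kibana.service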
However, I assume /etc/systemd/system/kibana.service is the interesting one:
[Unit]
Description=Kibana
[Service]
Type=simple
User=kibana
Group=kibana
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/kibana
EnvironmentFile=-/etc/sysconfig/kibana
ExecStart=/usr/share/kibana/bin/kibana "-c /etc/kibana/kibana.yml"
Restart=always
WorkingDirectory=/
[Install]
WantedBy=multi-user.target
In /etc/systemd/system/multi-user.target.wants/kibana.service I have:
[Unit]
Description=Kibana
[Service]
Type=simple
User=kibana
Group=kibana
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/kibana
EnvironmentFile=-/etc/sysconfig/kibana
ExecStart=/usr/share/kibana/bin/kibana "-c /etc/kibana/kibana.yml"
Restart=always
WorkingDirectory=/
[Install]
WantedBy=multi-user.target
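For what it's worth, the multi-user.target.wants entry is normally just a symlink created by systemctl enable, so the two files showing the same content is expected. That can be verified with:
# Show the unit file(s) systemd is actually using, with their paths
systemctl cat kibana.service
# The wants entry should be a symlink back to the real unit file
ls -l /etc/systemd/system/multi-user.target.wants/kibana.service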
Output of systemctl status kibana.service is:
kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-04-24 15:23:06 UTC; 4s ago
Main PID: 27763 (node)
Tasks: 10
Memory: 112.7M
CPU: 4.637s
CGroup: /system.slice/kibana.service
└─27763 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
Apr 24 15:23:06 elasticsearchlarge systemd[1]: Started Kibana.
That's indeed the one being used on my system as well. Anyhow, the content is the same for both, so I guess that doesn't really matter.
Looking at your file, I see that the user being used to run the service is 'kibana'. Does that user exist, and does it have permissions on /usr/share/kibana/bin/ and /etc/kibana/?
When you start it manually, are you using the kibana user?
I see it's running, but I guess that's only until the next restart? You might want to remove Restart=always from your kibana.service file, run sudo systemctl daemon-reload, and then try starting the service again. Maybe you'll get more info if it doesn't try to restart immediately after it crashes.
Alternatively, you could add a delay by adding, for example, RestartSec=60 to the kibana.service file, so that systemd waits 60 seconds before trying to restart the service. Again, don't forget to run daemon-reload after the modification, or you might end up wondering why your changes aren't applied (been there before!)
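A drop-in override keeps the packaged unit file intact; a sketch of that approach (the values are just examples):
# Opens an editor for /etc/systemd/system/kibana.service.d/override.conf
sudo systemctl edit kibana.service
# Then add, for example:
#   [Service]
#   Restart=no
#   RestartSec=60
# ...and reload so the change takes effect:
sudo systemctl daemon-reload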
I've changed the user in the kibana.service file to root (not ideal, I know) to see what would happen. The service starts! So it does appear to be a permissions thing. Are you able to provide a definitive guide on how to set the permissions for the Kibana user?
I installed using the .tar.gz. The elastic user who runs the process owns the entire install folder:
$ cd /elasticsearch/kibana-6.0.0-linux-x86_64/
$ ll
total 856
drwxr-xr-x 2 elastic elastic 4096 Nov 10 19:50 bin
drwxrwxr-x 2 elastic elastic 4096 Feb 21 08:19 config
drwxrwxr-x 3 elastic elastic 4096 Feb 16 12:40 data
-rw-r--r-- 1 root root 0 Dec 7 18:14 kibana.pid
-rw-rw-r-- 1 elastic elastic 562 Nov 10 19:50 LICENSE.txt
drwxrwxr-x 6 elastic elastic 4096 Nov 10 19:50 node
drwxrwxr-x 620 elastic elastic 20480 Nov 10 19:50 node_modules
-rw-rw-r-- 1 elastic elastic 799543 Nov 10 19:50 NOTICE.txt
drwxrwxr-x 3 elastic elastic 4096 Nov 10 19:50 optimize
-rw-rw-r-- 1 elastic elastic 721 Nov 10 19:50 package.json
drwxrwxr-x 3 elastic elastic 4096 Feb 16 12:30 plugins
-rw-rw-r-- 1 elastic elastic 4654 Nov 10 19:50 README.txt
drwxr-xr-x 14 elastic elastic 4096 Nov 10 19:50 src
drwxrwxr-x 5 elastic elastic 4096 Nov 10 19:50 ui_framework
drwxr-xr-x 2 elastic elastic 4096 Nov 10 19:50 webpackShims
I'm not sure what is recommended or required for RPM or APT installations though.
I do recall, however, that I had similar issues after installing Logstash with RPM. If I remember correctly, I solved it by recursively changing the ownership of the entire /usr/share/logstash folder to the logstash user. I'm not sure it's the ideal approach, but doing something similar could probably solve the issue here as well.
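In concrete terms, the equivalent here would be something like the following (the /var/lib/kibana path is an assumption based on the usual deb layout; adjust to whatever actually exists on your box):
# Give the kibana user ownership of the install, log and data directories
sudo chown -R kibana:kibana /usr/share/kibana
sudo chown -R kibana:kibana /var/log/kibana
sudo chown -R kibana:kibana /var/lib/kibana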
Hi Jonas - that's a good shout; unfortunately it didn't help. I tried the equivalent recursively for the kibana user on /usr/share/kibana. Maybe there's another folder that needs the same treatment.
The only other folder I can think of would be /etc/kibana, but if I compare with my Logstash setup, the /etc/logstash folder is simply owned by root.
What happens when you try to run manually with the kibana user without sudo?
Starting manually as the kibana user also fails with the same errors - i.e. running:
sudo -H -u kibana bash -c '/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml'
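Since the same failure shows up for the kibana user but not for root, walking the permissions along each path Kibana touches might pinpoint the blocker; namei from util-linux lists the owner and mode of every path component (the optimize directory is included because Kibana writes its bundles there, a common culprit):
# -l prints owner/group/mode for every component of the path
namei -l /etc/kibana/kibana.yml
namei -l /var/log/kibana/kibana.log
namei -l /usr/share/kibana/optimize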