Kibana port 5601 connection refused: kibana keeps restarting

I've tried the suggestions from several related posts on this issue. I am running version 6.2.3 on Linux. When I first bring up Kibana via systemctl and the /etc/systemd/system/kibana.service file, Kibana appears to gobble up a lot of CPU and keeps restarting (getting a new PID each time), so when I try to connect I get connection refused. I see no errors in any Kibana-related log. If I instead bring up Kibana manually using the same command from the service file, all is well. And if I then kill the manually started process and start Kibana with systemctl, I am able to connect. I'm running Kibana as root, but I've also tried running with User=kibana, and I've tried various settings in kibana.yml as well, using different values for server.host. I'm stumped as to what is going on the first time I start Kibana with systemctl.

Currently kibana.yml looks like:

server.host: "0.0.0.0"
server.port: 5601
elasticsearch.url: "http://localhost:9200"
kibana.index: ".kibana"
kibana.defaultAppId: "dashboard/fab81920-270c-11e8-a0e8-19e1f61eef50"
logging.dest: /var/log/cazena/kibana/kibana.log
logging.verbose: true

The service file looks like:

[Unit]
Description=Kibana
 
[Service]
User=root
Group=root
EnvironmentFile=-/etc/default/kibana
EnvironmentFile=-/etc/sysconfig/kibana
ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml --verbose -l /var/log/cazena/kibana/kibana.log
Restart=always
WorkingDirectory=/

[Install]
WantedBy=multi-user.target
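
For anyone else chasing a loop like this, here is a rough sketch of how I watched it from the systemd side (assuming the unit is named kibana.service, as above). Since the process is being killed from outside Kibana, the evidence tends to show up in the journal rather than in Kibana's own log, which would explain the empty logs:

systemctl status kibana                          # current state, main PID, and recent log lines
journalctl -u kibana -f                          # follow unit-level messages; OOM kills appear here, not in kibana.log
watch -n 1 'systemctl show kibana -p MainPID'    # confirm the PID keeps changing on each restart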

I believe I've figured this out.

The version of the service file I posted above was one I had changed multiple times while debugging, and it is missing a parameter we add at system configuration time (so I actually posted an incorrect copy of the file that was in use when the problem was happening).

The parameter is:

MemoryLimit=800M

and it was actually this parameter that was causing Kibana to restart. Apparently Kibana needs quite a bit of memory at start-up for its first-time configuration; it was running out of memory and being killed, so it restarted constantly, hence the failure to connect. When I started Kibana manually there was no memory limit, which is why it worked.
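If you need to keep a cap in place, one option is a drop-in override rather than editing the unit file itself. A minimal sketch, assuming the unit is named kibana.service and that 2G is enough headroom for the first-run optimization (both are assumptions; tune for your system):

sudo mkdir -p /etc/systemd/system/kibana.service.d
sudo tee /etc/systemd/system/kibana.service.d/memory.conf <<'EOF'
[Service]
# Headroom for Kibana's first-run optimize step; 800M was too little here (2G is an assumed value).
MemoryLimit=2G
EOF
sudo systemctl daemon-reload
sudo systemctl restart kibana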

Ultimately I was able to run Kibana as user/group kibana (running as root vs kibana had no bearing on the issue).

Glad you figured it out and re-posted here with what you found. It's useful for the community.

Thanks
Rashmi

I am having the same issue, except no matter how I start Kibana it just continues to restart. Here is a copy of my kibana.yml:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "10.99.99.100"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
server.name: "dulsbx002-esClient01"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "https://dulsbx002-esclient01:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "elastic"
elasticsearch.password: "1qaz2wsx!QAZ@WSX"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.certificate: ./dulsbx002-esclient01.crt
server.ssl.key: ./dulsbx002-esclient01.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
elasticsearch.ssl.certificate: ./dulsbx002-esclient01.crt
elasticsearch.ssl.key: ./dulsbx002-esclient01.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: [ "./ca.crt" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
elasticsearch.ssl.verificationMode: certificate

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send no client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
logging.dest: /var/log/kibana/kibana.log

# Set the value of this setting to true to suppress all logging output.
logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
logging.verbose: true

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"

A couple of things to check with your installation. I had these issues myself, and they may have to do with the complex way we build and update our VMs. You don't specify which version of Kibana you are using, but my comments pertain to 6.2.3, which I recently upgraded to from a very old version of Kibana (4.4.2).

  1. Verify that the 'kibana' user/group was created by the yum install or update. I was finding that, at least on some of our systems, the yum command was not reliably creating the user/group.
  2. Verify that /usr/share/kibana/optimize and /usr/share/kibana/plugins are owned by user/group 'kibana'. (See the sketch after this list for quick commands to check both.)
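
A quick way to check both points, assuming the standard RPM install paths (adjust for your layout):

rpm -q kibana                                                   # confirm which version is installed
id kibana                                                       # verify the 'kibana' user and group exist
ls -ld /usr/share/kibana/optimize /usr/share/kibana/plugins     # check ownership of the directories Kibana writes to
# If ownership is wrong, fix it:
sudo chown -R kibana:kibana /usr/share/kibana/optimize /usr/share/kibana/plugins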

Thank you for the reply. Kibana was working until I added the SSL configuration. Elasticsearch works with SSL, and Kibana is installed on my client servers. I hope that narrows down what might be wrong.
I did check for the user and group, and they are there.
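
Since the trouble started when SSL was added, a quick sanity check is whether the Kibana host can actually reach Elasticsearch over HTTPS with the same CA and credentials configured in the kibana.yml above. A minimal sketch, run from Kibana's config directory so the relative ./ca.crt path resolves (hostname and user are taken from your config):

curl --cacert ./ca.crt -u elastic 'https://dulsbx002-esclient01:9200/_cluster/health?pretty'

If that fails with a certificate or hostname error, Kibana is most likely failing its startup connection to Elasticsearch, and systemd is restarting it in a loop.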
