Unable To Access Kibana After Enabling Security

Hello,

I am currently trying to set up detection and monitoring for my self-hosted Elastic Stack. I have been following the guidelines found in this tutorial: Detections prerequisites and requirements | Elastic Security Solution [7.13] | Elastic

I am able to start Elasticsearch and view the cluster data by going to https://localhost:9200. When I try to do the same for Kibana, https://localhost:5601, I get an "unable to connect" error from Firefox. This led me to believe that Kibana wasn't started, but when I run the command service kibana status I get the following output:

● kibana.service - Kibana
     Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor pres>
     Active: active (running) since Wed 2021-07-21 16:45:37 MDT; 3s ago
       Docs: https://www.elastic.co
   Main PID: 21542 (node)
      Tasks: 14 (limit: 9483)
     Memory: 112.5M
     CGroup: /system.slice/kibana.service
             ├─21542 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/>
             └─21554 /usr/share/kibana/node/bin/node --preserve-symlinks-main >

My .yml files can be seen below:

elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application

xpack.security.enabled: true

xpack.security.transport.ssl.enabled: false
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /etc/elasticsearch/http.p12
xpack.security.http.ssl.truststore.path: /etc/elasticsearch/http.p12

discovery.type: single-node
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
network.host: 0.0.0.0
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: [127.0.0.1]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true                                                           

kibana.yml:

# Kibana is served by a back end server. This setting specifies the port to use.
# server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#sperver.host: 0.0.0.0

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://localhost:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana_system"
elasticsearch.password: "malware"
elasticsearch.ssl.certificateAuthorities: ["/usr/share/elasticsearch/kibana/elasticsearch-ca.pem"]

server.ssl.keystore.path: "/etc/kibana/kibana-server.p12"
server.ssl.enabled: true

xpack.encryptedSavedObjects.encryptionKey: 'fhjskloppd678ehkdfdlliverpoolfcr'

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

I know this is just a lack of understanding on my part and probably a simple fix, but regardless, any help would be greatly appreciated. If more information is needed or I need to answer any questions, please let me know.

Thanks,
Jared

Kibana is bound only to localhost (note you also have a typo in that line: sperver.host), but you indicate you are trying to access it from localhost, so that may not be the issue. If you do need remote access, try:

server.host: 0.0.0.0

What do the Kibana logs show?
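
If you're not sure where they end up: with a systemd package install the output usually goes to the journal unless logging.dest points at a file, so something like this (unit name assumed from your service status) should show the most recent entries:

journalctl -u kibana.service --since "10 minutes ago"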

Thank you for the response, stephenb, I really appreciate the time. I am trying to access it from localhost, which is why I have that line commented out, but I will fix the typo regardless. In terms of logs, here is what I found. After starting Kibana, if I use the command journalctl -u kibana.service
I get this output:

Jul 22 09:44:02 malware-VirtualBox systemd[1]: Started Kibana.

When I check the log files at the path /var/log/kibana, it doesn't seem like the logs are updating at all when I start and stop Kibana. Am I checking the right file? Just in case I am missing something, I will post the logs from this file, but it looks like these are logs from a previous day rather than today.

{"type":"log","@timestamp":"2021-07-20T16:29:40-06:00","tags":["info","savedobjects-service"],"pid":3132,"message":"[.kibana_task_manager] INIT -> INIT. took: 30131ms."}
{"type":"log","@timestamp":"2021-07-20T16:32:16-06:00","tags":["warning","environment"],"pid":3684,"process":{"pid":3684,"path":"/run/kibana/kibana.pid"},"message":"pid file already exists at /run/kibana/kibana.pid"}
{"type":"log","@timestamp":"2021-07-20T16:32:34-06:00","tags":["info","plugins-service"],"pid":3684,"message":"Plugin \"timelines\" is disabled."}
{"type":"log","@timestamp":"2021-07-20T16:32:35-06:00","tags":["warning","config","deprecation"],"pid":3684,"message":"\"logging.dest\" has been deprecated and will be removed in 8.0. To set the destination moving forward, you can use the \"console\" appender in your logging configuration or define a custom one. For more details, see https://github.com/elastic/kibana/blob/master/src/core/server/logging/README.mdx"}
{"type":"log","@timestamp":"2021-07-20T16:32:35-06:00","tags":["warning","config","deprecation"],"pid":3684,"message":"plugins.scanDirs is deprecated and is no longer used"}
{"type":"log","@timestamp":"2021-07-20T16:32:35-06:00","tags":["warning","config","deprecation"],"pid":3684,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""}
{"type":"log","@timestamp":"2021-07-20T16:32:36-06:00","tags":["info","plugins-system"],"pid":3684,"message":"Setting up [106] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,translations,licenseApiGuard,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,advancedSettings,savedObjects,visualizations,visTypeVislib,visTypeVega,visTypeTimelion,features,licenseManagement,watcher,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeMarkdown,visTypeXy,tileMap,regionMap,presentationUtil,canvas,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,indexPatternManagement,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,lens,reporting,lists,encryptedSavedObjects,dataEnhanced,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,fileUpload,maps,fileDataVisualizer,eventLog,actions,alerting,triggersActionsUi,stackAlerts,ruleRegistry,observability,osquery,ml,securitySolution,cases,infra,monitoring,logstash,apm,uptime]"}
{"type":"log","@timestamp":"2021-07-20T16:32:36-06:00","tags":["info","plugins","taskManager"],"pid":3684,"message":"TaskManager is identified by the Kibana UUID: 850252d9-55cf-4f28-b477-c87e4aef67b4"}
{"type":"log","@timestamp":"2021-07-20T16:32:38-06:00","tags":["warning","plugins","security","config"],"pid":3684,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-07-20T16:32:38-06:00","tags":["warning","plugins","security","config"],"pid":3684,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2021-07-20T16:32:38-06:00","tags":["warning","plugins","reporting","config"],"pid":3684,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-07-20T16:32:38-06:00","tags":["info","plugins","reporting","config"],"pid":3684,"message":"Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox."}
{"type":"log","@timestamp":"2021-07-20T16:32:38-06:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":3684,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-07-20T16:32:38-06:00","tags":["warning","plugins","actions","actions"],"pid":3684,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-07-20T16:32:38-06:00","tags":["warning","plugins","alerting","plugins","alerting"],"pid":3684,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-07-20T16:32:41-06:00","tags":["info","plugins","monitoring","monitoring"],"pid":3684,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2021-07-20T16:32:42-06:00","tags":["info","savedobjects-service"],"pid":3684,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2021-07-20T16:32:44-06:00","tags":["error","savedobjects-service"],"pid":3684,"message":"Unable to retrieve version information from Elasticsearch nodes."}
{"type":"log","@timestamp":"2021-07-20T16:43:04-06:00","tags":["info","plugins-system"],"pid":3684,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2021-07-20T16:43:04-06:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":3684,"message":"Monitoring stats collection is stopped"}
{"type":"log","@timestamp":"2021-07-20T16:43:34-06:00","tags":["warning","plugins-system"],"pid":3684,"message":"\"eventLog\" plugin didn't stop in 30sec., move on to the next."}

Thanks again for the help,
Jared

Yup, those logs look dated... and I was looking for the last lines from when you tried to access Kibana.

What do you get when you run this from the command line?

curl https://localhost:5601/api/spaces/space
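
If Kibana is serving HTTPS with a self-signed certificate, you may need to tell curl to skip verification and to pass credentials, e.g. something along these lines (the user name is just an example):

curl -k -u elastic https://localhost:5601/api/spaces/space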

I may be missing something, but when I checked the log files those were the last lines in them. There don't seem to be any up-to-date logs in that specific file.

When I curl that I get the following response:

curl: (7) Failed to connect to localhost port 5601: Connection refused

This response made me assume that Kibana wasn't running at all, but again, when I run the command service kibana status I get the following output:

malware@malware-VirtualBox:~$ service kibana status
● kibana.service - Kibana
     Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor pres>
     Active: active (running) since Thu 2021-07-22 10:08:46 MDT; 955ms ago
       Docs: https://www.elastic.co
   Main PID: 3900 (node)
      Tasks: 14 (limit: 9483)
     Memory: 46.4M
     CGroup: /system.slice/kibana.service
             ├─3900 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/b>
             └─3912 /usr/share/kibana/node/bin/node --preserve-symlinks-main ->

I don't know if this is important, but I'll mention it anyway. When I first started setting up my Elastic Stack I was running Kibana 7.13.2 from a tar.gz package. I am now running Kibana 7.13.3 from the Ubuntu package. My Elasticsearch is still 7.13.2. Could this be the problem?

The version may not be an issue, but there is a recommended upgrade order, and Kibana should not be "ahead" of Elasticsearch. See Here

Those logs are probably from the tar.gz install... which might be an easier method to debug (for a novice)... then change over to the package...

To me it looks like Kibana is not actually listening on that port.

lsof -i :5601
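
If lsof isn't installed, ss should show much the same thing, e.g.:

sudo ss -tlnp | grep 5601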

If you keep watching journalctl -u kibana.service, do you get any more logs?
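
For example, something like this keeps following new entries as they arrive, so you can watch what happens when the service restarts:

journalctl -u kibana.service -f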

For you... I would perhaps try the tar.gz first... get it working, then try the package.

And of course it could always be a cert issue :slight_smile:

OK, now we can get somewhere. I just checked the journal, and when it said Kibana was started I thought it was good, but like you stated, there are more logs that show up afterward. Interesting that the service status would show it was active. Here they are:

Jul 22 10:31:17 malware-VirtualBox kibana[4221]:  FATAL  Error: EACCES: permission denied, open '/etc/kibana/kibana-server.p12'
Jul 22 10:31:17 malware-VirtualBox systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Jul 22 10:31:17 malware-VirtualBox systemd[1]: kibana.service: Failed with result 'exit-code'.
Jul 22 10:31:20 malware-VirtualBox systemd[1]: kibana.service: Scheduled restart job, restart counter is at 6.
Jul 22 10:31:20 malware-VirtualBox systemd[1]: Stopped Kibana.
Jul 22 10:31:20 malware-VirtualBox systemd[1]: kibana.service: Start request repeated too quickly.
Jul 22 10:31:20 malware-VirtualBox systemd[1]: kibana.service: Failed with result 'exit-code'.
Jul 22 10:31:20 malware-VirtualBox systemd[1]: Failed to start Kibana.

What would be causing an error like this? I have checked the keystore, and to my knowledge the password is correct. I also have the file in the Kibana folder.

Thanks a ton for sticking with me,
Jared

Jul 22 10:31:17 malware-VirtualBox kibana[4221]: FATAL Error: EACCES: permission denied, open '/etc/kibana/kibana-server.p12'

Remember that when it starts as a service, Kibana runs as the kibana user, so I suspect the kibana user does not have access to that file.

What does this show?
ls -l /etc/kibana/kibana-server.p12
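
You can also double-check which user and group the service runs as; assuming the unit file path shown in your status output, something like:

grep -E '^(User|Group)=' /etc/systemd/system/kibana.service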

This is the response I get:

-rw------- 1 root kibana 3654 Jul 21 16:38 /etc/kibana/kibana-server.p12

Is it an issue with users in Ubuntu or users in Kibana? Just trying to understand so that a similar problem doesn't come up again.

Edit: After re-reading your above post I understand now, and this question can be omitted. The result of the command makes me think that the correct user is set to kibana?

Nope... that says only the root user can read it.

try..

sudo chmod 644 /etc/kibana/kibana-server.p12

After running that command I get this:

-rw-rw-r-- 1 root kibana 3654 Jul 21 16:38 /etc/kibana/kibana-server.p12

Edit: Could you explain what the command does? Did it do what was intended? It looks like root is still the one who can access it, but I don't really understand the output from that command.

I had a typo; it should have been 644 (see above). 664 is OK too... but 644 is better.

Any other files you added in /etc/kibana need to have the correct permissions as well.
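
A quick way to spot anything else the kibana user won't be able to read, for example:

sudo ls -l /etc/kibana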

Now try to start Kibana again...

This is basic Unix permissions stuff; you probably need to read up on it if you are going to work in Unix. It's core to files and file permissions.

If you run with 644, it means the user can read/write, the group can read, and the world can read... with the original permissions, the kibana user/group could not read the file.

-rw-r--r-- 1 root kibana 3654 Jul 21 16:38 /etc/kibana/kibana-server.p12
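
If you would rather not make the keystore world-readable, a slightly tighter alternative (assuming the file stays owned by the kibana group, as your ls output shows) is:

sudo chmod 640 /etc/kibana/kibana-server.p12

640 means owner read/write, group read, and no access for anyone else, which should still be enough for the kibana service to read it.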

Thanks a lot for the help! I have run into another issue, but I am going to try and troubleshoot it before I ask.

Now I am able to get to the website, but it is stuck on "Kibana server is not ready yet". I am assuming that means my setup is too slow and it's just taking a long time (more than 10 minutes), or that it is because the versions aren't lining up. I also wanted to ask: what would you recommend for a production Elastic Stack when it comes to updating to the latest version? What is the best way to upgrade without losing data and with the least pain? I am using the Ubuntu packages for all parts of my stack.

There are many topics on "Kibana server is not ready yet"; it is usually a config issue, not slow loading...
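
One quick check is whether the credentials and CA you put in kibana.yml can actually reach Elasticsearch on their own; for example, reusing the values from your config (adjust if yours differ):

curl --cacert /usr/share/elasticsearch/kibana/elasticsearch-ca.pem -u kibana_system https://localhost:9200

If that fails, the problem is on the Elasticsearch/credentials side rather than in Kibana itself.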

There is a lot of free training available on the Elastic website; perhaps take advantage of it.

With respect to upgrading etc... I provided the link above.

Really the best experience is Elastic Cloud (Hosted Elasticsearch)

Otherwise there are many approaches depending on your skill set

Bare Metal, Docker, Kubernetes (ECK)

Hey stephenb, having our stack hosted in the cloud isn't an option for us. Would you be able to help me debug the issues with the error "Kibana server is not ready yet"? You say it's a config issue, so should I be reviewing my .yml files to see if anything seems out of the ordinary? What steps would you recommend taking?

Please open another thread on the "Kibana server is not ready yet" issue.
Provide the config and the logs; perhaps there are others that can help as well.
(I am just a volunteer with a BIG day job :slight_smile: )

Did you search on it?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.