We've recently upgraded our Kibana and ES from 6.3 to 7.5.1
Currently, in our dev environment, we have several Kibana instances running against the same ES cluster (2 nodes).
The initial Kibana user was created using curl. This user has full privileges (kibana_user role). Using this user I am able to log in to Kibana and work with it without any issues. Creating a user from the Kibana User Management page also works as expected: I see that the user is created successfully without any errors. But when I log out and try to log in with the new user, I get the following error on the screen:
"Multiple versions of Kibana are running against the same Elasticsearch cluster, unable to authorize user"
I checked the Kibana and ES logs, but couldn't find anything relevant.
I made sure that each Kibana instance has a unique server name in its kibana.yml
Also, all Kibana instances are using the same elasticsearch.username and elasticsearch.password in their kibana.yml files.
I think it is important to mention that we migrated indices from the ES 6.3 cluster to the ES 7.5.1 cluster using the remote reindex mechanism, and only after that did we enable the security features.
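For reference, the migration was done per index with reindex-from-remote, along these lines (hosts and index names are placeholders; the old cluster also has to be whitelisted via reindex.remote.whitelist on the new cluster's nodes):

curl -X POST "http://<new-es-ip>:9200/_reindex" \
  -H 'Content-Type: application/json' \
  -d '{
    "source": {
      "remote": { "host": "http://<old-es-ip>:9200" },
      "index": "<index-name>"
    },
    "dest": { "index": "<index-name>" }
  }'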
Actually, I don't have xpack.security.encryptionKey configured at all. As for the other settings, I've double-checked them and everything seems to be configured correctly. I can now reproduce the issue consistently. Against the ES cluster (2 nodes) we have 4 Kibana instances connected, which were installed from snapshot builds. In addition, we have local Kibana development environments that use Kibana's repo as-is. Whenever I start one of the local Kibana environments, the issue starts reproducing.
If I stop the local instance and then restart all the snapshot Kibana instances, the issue doesn't occur at all.
Below is my kibana.dev.yml, which is used in the local Kibana environment:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 56077
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: '127.0.0.1'
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
server.name: 'peter'
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ['http://<es-ip>:9200']
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: 'kibana'
elasticsearch.password: '*****'
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you to specify a file where Kibana stores log output.
logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
logging.verbose: true
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en (default), Chinese - zh-CN.
#i18n.locale: "en"
#disable newsfeed in kibana
newsfeed.enabled: false
#XPack related settings
xpack.graph.enabled: false
xpack.logstash.enabled: false
xpack.infra.enabled: false
xpack.ml.enabled: false
xpack.siem.enabled: false
xpack.uptime.enabled: false
#xpack.monitoring.enabled: false
## Session timeout: 1800000 ms (30 minutes). In Kibana 8 this becomes xpack.security.session.idleTimeout.
#xpack.security.sessionTimeout: 1800000
To recap, the error I see is: Multiple versions of Kibana are running against the same Elasticsearch cluster, unable to authorize user
I'm not sure whether it is related to running multiple instances of Kibana against the same ES cluster.
IMPORTANT UPDATE
I was able to overcome the issue by changing the Kibana index from .kibana to .kibana_devlocal in kibana.dev.yml.
So the snapshot Kibana instances use the .kibana index and the dev environments use .kibana_devlocal.
Although this resolves my issue, I still need to clone the .kibana index once in a while.
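The clone itself can be done with the reindex API; a minimal sketch with the index names from my setup:

curl -X POST "http://<es-ip>:9200/_reindex" \
  -H 'Content-Type: application/json' \
  -d '{
    "source": { "index": ".kibana" },
    "dest": { "index": ".kibana_devlocal" }
  }'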
I would appreciate any idea about the root cause.
I think setting an xpack.security.encryptionKey would definitely help with the authorization checks done between Kibana instances. It's strange that pointing the Kibana index at something other than the .kibana alias solved the issue. Do you have other Kibana environments using the same Elasticsearch cluster?
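For example, something like this in every instance's kibana.yml (the value below is a placeholder; it must be identical across all instances and at least 32 characters long):

xpack.security.encryptionKey: '<the-same-string-of-32-or-more-characters-on-every-instance>'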
Thanks for your reply.
I will try your suggestion.
So we have 5 Kibana instances on the same host (each instance has its own port, of course).
These instances were installed from snapshot builds.
In addition, we have 3 local Kibana dev environments (localhost). These environments were built and started in dev mode (using yarn kbn bootstrap and yarn start, as shown below) as stated in the Kibana development guide.
These 8 instances are running against the same ES cluster (2 nodes).
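For each local environment that means, roughly (run from a checkout of the Kibana repo; in dev mode Kibana picks up config/kibana.dev.yml automatically):

yarn kbn bootstrap   # install dependencies and bootstrap the packages
yarn start           # start Kibana in dev mode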
I have set xpack.security.encryptionKey to the same value for all the instances (local and snapshot) mentioned above. The value is a 32-character string.
I've also reverted the kibana.dev.yml of the 3 local instances to use the .kibana index instead of .kibana_devlocal.
Unfortunately, the issue reproduced again.
It is great that I at least have a workaround (changing the Kibana index for the 3 local dev environments), but it requires us to manually update the .kibana_devlocal content from the .kibana index once in a while.
This is problematic, so I would really appreciate help with at least one of the issues/challenges stated above.
Is it possible you are running different versions of Kibana against the same Elasticsearch cluster? You would be getting that error because this constraint started being enforced in 6.4; it wasn't supported before that either, just not checked.
If you really need this, you'd have to do as mentioned and change kibana.yml to use different indices for the different Kibana versions.
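A minimal sketch of what that could look like (the index names are just examples):

# kibana.yml of the instances on one version
kibana.index: '.kibana'

# kibana.yml (or kibana.dev.yml) of the instances on the other version
kibana.index: '.kibana_devlocal'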