Kibana login page (Access control)

I have Elasticsearch (Version 1.30.0) and Kibana (Version 3.2.3) on Rancher. I am trying to enable the login page for security purposes. I have added the x-pack configs in Kibana. Also I tied enabling ( true) in both elasticsearch and Kibana, but it wont come up after adding. I also tried elasticsearch.username: "kibana", elasticsearch.password: "kibanapassword" in Kibana.yaml but it seems it doesn't pick it up as if I put random entry still comes up. Can you please let me know what I am missing?


I believe you will have to set up passwords for the built-in system users first. That will then enforce security on your Kibana login and enable you to configure more users, roles, etc. going forward -

Command - "elasticsearch-setup-passwords interactive"
Location - the Elasticsearch bin directory
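As a sketch of the settings that go with this (the values below are illustrative; "kibanapassword" is a placeholder, and the xpack.security setting requires the default, non-OSS distribution):

```yaml
# elasticsearch.yml - enable security (default distribution only, not OSS)
xpack.security.enabled: true

# kibana.yml - credentials Kibana uses to connect to Elasticsearch;
# use the password you set for the built-in "kibana" user when running
# elasticsearch-setup-passwords interactive ("kibanapassword" is a placeholder)
elasticsearch.username: "kibana"
elasticsearch.password: "kibanapassword"
```

After restarting both services, Kibana should show the login page and reject incorrect credentials.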

Use this link to get started -



Neither of those versions of Kibana or Elasticsearch exists, so please take a closer look at exactly what you are using. Without knowing the versions it is hard to help with this.


I have them on Rancher and that is the version it is showing; you can find it in the attachment.

Also, I think these are the correct versions: Elasticsearch 6.0 and Kibana 7.

Elasticsearch and Kibana need to be of the same version. That combination will not work.

Both of them are 6.7.0. Can you please tell me how to do it now? Thanks

Security is only part of the free Basic license from versions 6.8 and 7.1 onwards, so 6.7 will not have it. You also need to ensure you are using the default distribution and not the OSS one.
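One way to tell which distribution a node is running is the root endpoint: the JSON returned by GET elasticsearch_node:9200 includes a build_flavor field, which is "oss" for the OSS build and "default" for the distribution that ships security. A minimal sketch of checking it (the sample response below is illustrative, not taken from this thread):

```python
import json

# Abbreviated sample of the JSON that GET http://<elasticsearch_node>:9200/
# returns (values here are illustrative, not from this thread).
sample = json.loads("""
{
  "name": "elasticsearch-client",
  "cluster_name": "elasticsearch",
  "version": {
    "number": "7.1.1",
    "build_flavor": "oss"
  }
}
""")

def has_security_available(info: dict) -> bool:
    # X-Pack security ships only with the "default" build flavor,
    # never with the OSS build.
    return info["version"]["build_flavor"] == "default"

print(has_security_available(sample))  # an OSS build prints False
```

The same check can of course be done by eye: just look at version.build_flavor in the response.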

I have them on Rancher; any version (7.1.1 and 7.2.0) that I try fails for Elasticsearch. I can somehow update Kibana, but Elasticsearch fails. Any ideas, please?

What is the output if you go to elasticsearch_node:9200? I suspect you are using the OSS distribution.

Yes, I am using OSS.
I tried from Apps; the Elasticsearch upgrade didn't work. Below is from upgrading the workloads; the elasticsearch-client upgrade failed as well.
In the [sysctl] log: vm.max_map_count = 262144 (shows terminated)
In the Elasticsearch log (shows not ready):

```
7/18/2019 1:31:37 PM "at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout( [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:37 PM "at org.elasticsearch.cluster.service.ClusterApplierService$ [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:37 PM "at org.elasticsearch.common.util.concurrent.ThreadContext$ [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:37 PM "at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]",
7/18/2019 1:31:37 PM "at java.util.concurrent.ThreadPoolExecutor$ [?:?]",
7/18/2019 1:31:37 PM "at [?:?]"] }
7/18/2019 1:31:41 PM {"type": "server", "timestamp": "2019-07-18T17:31:41,405+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "": "elasticsearch", "": "elasticsearch-client-6d6ffbcc4c-4lt6d", "message": "master not discovered yet: have discovered ; discovery will continue using [,,] from hosts providers and [{elasticsearch-client-6d6ffbcc4c-4lt6d}{IBxQXRF5QIqCFi7ftdU9Qg}{OHXriPA5RD6YE0rV6tUXIg}{}{}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
7/18/2019 1:31:47 PM {"type": "server", "timestamp": "2019-07-18T17:31:47,299+0000", "level": "DEBUG", "component": "o.e.a.a.c.h.TransportClusterHealthAction", "": "elasticsearch", "": "elasticsearch-client-6d6ffbcc4c-4lt6d", "message": "no known master node, scheduling a retry" }
7/18/2019 1:31:47 PM {"type": "server", "timestamp": "2019-07-18T17:31:47,299+0000", "level": "DEBUG", "component": "o.e.a.a.c.h.TransportClusterHealthAction", "": "elasticsearch", "": "elasticsearch-client-6d6ffbcc4c-4lt6d", "message": "timed out while retrying [cluster:monitor/health] after failure (timeout [30s])" }
7/18/2019 1:31:47 PM {"type": "server", "timestamp": "2019-07-18T17:31:47,300+0000", "level": "WARN", "component": "r.suppressed", "": "elasticsearch", "": "elasticsearch-client-6d6ffbcc4c-4lt6d", "message": "path: /_cluster/health, params: {}" ,
7/18/2019 1:31:47 PM "stacktrace": ["org.elasticsearch.discovery.MasterNotDiscoveredException: null",
7/18/2019 1:31:47 PM "at$AsyncSingleAction$4.onTimeout( [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:47 PM "at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout( [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:47 PM "at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout( [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:47 PM "at org.elasticsearch.cluster.service.ClusterApplierService$ [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:47 PM "at org.elasticsearch.common.util.concurrent.ThreadContext$ [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:47 PM "at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]",
7/18/2019 1:31:47 PM "at java.util.concurrent.ThreadPoolExecutor$ [?:?]",
7/18/2019 1:31:47 PM "at [?:?]"] }
7/18/2019 1:31:51 PM {"type": "server", "timestamp": "2019-07-18T17:31:51,405+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "": "elasticsearch", "": "elasticsearch-client-6d6ffbcc4c-4lt6d", "message": "master not discovered yet: have discovered ; discovery will continue using [,,] from hosts providers and [{elasticsearch-client-6d6ffbcc4c-4lt6d}{IBxQXRF5QIqCFi7ftdU9Qg}{OHXriPA5RD6YE0rV6tUXIg}{}{}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
7/18/2019 1:31:57 PM {"type": "server", "timestamp": "2019-07-18T17:31:57,299+0000", "level": "DEBUG", "component": "o.e.a.a.c.h.TransportClusterHealthAction", "": "elasticsearch", "": "elasticsearch-client-6d6ffbcc4c-4lt6d", "message": "no known master node, scheduling a retry" }
7/18/2019 1:31:57 PM {"type": "server", "timestamp": "2019-07-18T17:31:57,302+0000", "level": "DEBUG", "component": "o.e.a.a.c.h.TransportClusterHealthAction", "": "elasticsearch", "": "elasticsearch-client-6d6ffbcc4c-4lt6d", "message": "timed out while retrying [cluster:monitor/health] after failure (timeout [30s])" }
7/18/2019 1:31:57 PM {"type": "server", "timestamp": "2019-07-18T17:31:57,302+0000", "level": "WARN", "component": "r.suppressed", "": "elasticsearch", "": "elasticsearch-client-6d6ffbcc4c-4lt6d", "message": "path: /_cluster/health, params: {}" ,
7/18/2019 1:31:57 PM "stacktrace": ["org.elasticsearch.discovery.MasterNotDiscoveredException: null",
7/18/2019 1:31:57 PM "at$AsyncSingleAction$4.onTimeout( [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:57 PM "at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout( [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:57 PM "at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout( [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:57 PM "at org.elasticsearch.cluster.service.ClusterApplierService$ [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:57 PM "at org.elasticsearch.common.util.concurrent.ThreadContext$ [elasticsearch-7.1.1.jar:7.1.1]",
7/18/2019 1:31:57 PM "at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:?]",
7/18/2019 1:31:57 PM "at java.util.concurrent.ThreadPoolExecutor$ [?:?]",
7/18/2019 1:31:57 PM "at [?:?]"] }
```
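As an aside, the "master not discovered yet" warnings in the log above, together with the empty seed-hosts list, usually mean the 7.x discovery settings are missing: unlike 6.x, Elasticsearch 7.x does not auto-bootstrap a cluster without them. A minimal sketch of what a 7.x node expects (the node names here are hypothetical, not taken from this thread; yours will differ):

```yaml
# elasticsearch.yml - 7.x cluster bootstrapping (node names are examples)
discovery.seed_hosts:
  - elasticsearch-master-0
  - elasticsearch-master-1
cluster.initial_master_nodes:
  - elasticsearch-master-0
  - elasticsearch-master-1
```

cluster.initial_master_nodes is only needed for the very first bootstrap of a brand-new cluster; discovery.seed_hosts is needed on every node.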

Also, I tried cloning, this time passing the appVersion and image.tag, but I get the below error now:


You need to use the default distribution.