Login is currently disabled in the Kibana UI

Hello,

I am using the https://github.com/elastic/stack-docker repository. Following the README.md, I cloned the repository, ran docker-compose up, and the services look good.

When I checked the logs:

root@ubuntu:~# docker logs f9d8
..
[2018-02-12T17:37:46,137][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
[2018-02-12T17:37:48,004][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [aggs-matrix-stats]


[2018-02-12T17:37:50,906][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/121] [Main.cc@128] controller (64 bit): Version 6.1.3 (Build 49803b19919585) Copyright (c) 2018 Elasticsearch BV
[2018-02-12T17:37:51,461][INFO ][o.e.d.DiscoveryModule    ] [jrXdhxD] using discovery type [zen]
[2018-02-12T17:37:53,499][INFO ][o.e.n.Node               ] initialized
[2018-02-12T17:37:53,501][INFO ][o.e.n.Node               ] [jrXdhxD] starting ...
[2018-02-12T17:37:54,071][INFO ][o.e.t.TransportService   ] [jrXdhxD] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2018-02-12T17:37:54,132][WARN ][o.e.b.BootstrapChecks    ] [jrXdhxD] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-02-12T17:37:57,203][INFO ][o.e.c.s.MasterService    ] [jrXdhxD] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true}
[2018-02-12T17:37:57,215][INFO ][o.e.c.s.ClusterApplierService] [jrXdhxD] new_master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-02-12T17:37:57,268][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [jrXdhxD] publish_address {100.98.26.129:9200}, bound_addresses {[::]:9200}
[2018-02-12T17:37:57,269][INFO ][o.e.n.Node               ] [jrXdhxD] started
[2018-02-12T17:37:57,491][INFO ][o.e.g.GatewayService     ] [jrXdhxD] recovered [0] indices into cluster_state
[2018-02-12T17:37:58,990][INFO ][o.e.l.LicenseService     ] [jrXdhxD] license [5c7e79b3-d1e9-4f07-9793-ea3f20ae698e] mode [trial] - valid
[2018-02-12T17:38:04,015][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-es-6-2018.02.12] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]
[2018-02-12T17:38:04,482][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watches] creating index, cause [auto(bulk api)], templates [.watches], shards [1]/[1], mappings [doc]
[2018-02-12T17:38:05,030][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watches/3a2eYZLURqWSE8If4ojDQg] update_mapping [doc]
[2018-02-12T17:39:05,388][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.triggered_watches] creating index, cause [auto(bulk api)], templates [.triggered_watches], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:05,815][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-alerts-6] creating index, cause [auto(bulk api)], templates [.monitoring-alerts], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:05,897][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watcher-history-7-2018.02.12] creating index, cause [auto(bulk api)], templates [.watch-history-7], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:06,195][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.12/mxj9Ak5aQmqvoGPks67j-A] update_mapping [doc]
[2018-02-12T17:39:06,283][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.12/mxj9Ak5aQmqvoGPks67j-A] update_mapping [doc]
[2018-02-13T00:00:04,457][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-es-6-2018.02.13] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]
[2018-02-13T00:00:19,346][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watcher-history-7-2018.02.13] creating index, cause [auto(bulk api)], templates [.watch-history-7], shards [1]/[1], mappings [doc]
[2018-02-13T00:00:19,451][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.13/y1cnVrseR3atoRioU811gw] update_mapping [doc]
[2018-02-13T00:00:30,855][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.13/y1cnVrseR3atoRioU811gw] update_mapping [doc]
[2018-02-13T01:38:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] triggering scheduled [ML] maintenance tasks
[2018-02-13T01:38:00,004][INFO ][o.e.x.m.a.DeleteExpiredDataAction$TransportAction] [jrXdhxD] Deleting expired data
[2018-02-13T01:38:00,081][INFO ][o.e.x.m.a.DeleteExpiredDataAction$TransportAction] [jrXdhxD] Completed deletion of expired data
[2018-02-13T01:38:00,083][INFO ][o.e.x.m.MlDailyMaintenanceService] Successfully completed [ML] maintenance tasks
curl -XPUT 'localhost:9200/idx'
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/idx]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/idx]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}root@ubuntu:~/openusm/logging#

I added the entry below to increase the RAM available to Elasticsearch:


version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:${TAG}
    container_name: elasticsearch
    network_mode: host
    environment: ['http.host=0.0.0.0', 'transport.host=127.0.0.1', 'ELASTIC_PASSWORD=${ELASTIC_PASSWORD}']
    environment:
      ES_JAVA_OPTS: "-Xmx4g -Xms4g"

    ports: ['127.0.0.1:9200:9200']

This looks to me like something to do with X-Pack. I want to use Machine Learning, so the platinum image should be the right choice.

I checked the .env file in the same directory and it does show "changeme" as ELASTIC_PASSWORD. I changed it to a new password, but the issue persists.
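(A quick way to verify what value docker-compose will actually hand to the container is to print the resolved configuration, assuming you run this from the stack-docker checkout:

grep ELASTIC_PASSWORD .env
docker-compose config | grep ELASTIC_PASSWORD

docker-compose config substitutes all variables, so it shows whether the new password is really being picked up.)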

Please format your code using the </> icon as explained in this guide, not the citation button. It will make your post more readable.

Or use markdown style like:

```
CODE
```

Please edit your post.

I wonder if one of the environment entries overwrites the previous one.

I don't understand what problem you're trying to solve.

If security is enabled (and it is) then you need to pass the username to curl. e.g.

curl -uelastic -XPUT 'localhost:9200/idx'

Is that all you're looking for? You've posted a lot of logs, but it's really not clear what you need assistance with.

I have reformatted the logs using markdown style.

All I want is to run the Elastic Stack using Docker via docker-compose.

When I check the services, they are all up and running, but when I try to log in to Kibana, it says it couldn't log in to the Kibana UI.

 curl http://100.98.26.181:9200
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}root@ubuntu:~/stack-docker# ^C
root@ubuntu:~/stack-docker#

My .env file has ELASTIC_PASSWORD in it.

I tried:

curl -uelastic -XPUT 'localhost:9200/idx'
Enter host password for user 'elastic':
{"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

Why is it throwing this error?

@dadoonet I tried putting the environment variable as shown below:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:${TAG}
    container_name: elasticsearch
    network_mode: host
    environment: ['http.host=100.98.26.181', 'transport.host=100.98.26.181', 'ELASTIC_PASSWORD=${ELASTIC_PASSWORD}', 'ES_JAVA_OPTS=${ES_JAVA_OPTS}']
    ports: ['100.98.26.181:9200:9200']

  kibana:
    image: docker.elastic.co/kibana/kibana:${TAG}
    container_name: kibana
    environment:
      - ELASTICSEARCH_USERNAME=kibana
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    network_mode: host
    ports: ['100.98.26.181:5601:5601']
    depends_on: ['elasticsearch']

But still when I try:

root@ubuntu:~/stack-docker# curl -ukibana -XPUT 'localhost:9200/idx'
Enter host password for user 'kibana':
{"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [kibana]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"failed to authenticate user [kibana]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}root@ubuntu:~/stack-docker#

Am I missing anything?

I'm unsure about this:

    environment: ['http.host=0.0.0.0', 'transport.host=127.0.0.1', 'ELASTIC_PASSWORD=${ELASTIC_PASSWORD}']
    environment:
      ES_JAVA_OPTS: "-Xmx4g -Xms4g"

I'd try instead:

    environment: ['http.host=0.0.0.0', 'transport.host=127.0.0.1', 'ELASTIC_PASSWORD=${ELASTIC_PASSWORD}', 'ES_JAVA_OPTS=-Xmx4g -Xms4g']

Or the other way around.

@dadoonet I added the entry as you suggested.

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:${TAG}
    container_name: elasticsearch
    network_mode: host
    environment: ['http.host=100.98.26.181', 'transport.host=100.98.26.181', 'ELASTIC_PASSWORD=${ELASTIC_PASSWORD}', 'ES_JAVA_OPTS="-Xmx4g -Xms4g"']
    ports: ['100.98.26.181:9200:9200']

  kibana:
    image: docker.elastic.co/kibana/kibana:${TAG}
    container_name: kibana
    environment:
      - ELASTICSEARCH_USERNAME=kibana
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    network_mode: host

I restarted my services with docker-compose:

docker-compose restart
Restarting metricbeat       ... done
Restarting apm_server       ... done
Restarting filebeat         ... done
Restarting heartbeat        ... done
Restarting setup_packetbeat ... done
Restarting setup_metricbeat ... done
Restarting setup_filebeat   ... done
Restarting setup_auditbeat  ... done
Restarting setup_apm_server ... done
Restarting setup_heartbeat  ... done
Restarting logstash         ... done
Restarting auditbeat        ... done
Restarting setup_kibana     ... done
Restarting packetbeat       ... done
Restarting kibana           ... done
Restarting setup_logstash   ... done
Restarting elasticsearch    ... done
root@ubuntu:~/stack-docker#

But the same issue:

root@ubuntu:~/stack-docker# curl -ukibana -XPUT 'localhost:9200/idx'
Enter host password for user 'kibana':
{"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [kibana]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"failed to authenticate user [kibana]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}root@ubuntu:~/stack-docker#

Is kibana the right username here? I am just going by these lines:

  kibana:
    image: docker.elastic.co/kibana/kibana:${TAG}
    container_name: kibana
    environment:
      - ELASTICSEARCH_USERNAME=kibana
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
    network_mode: host

FYI here is the docker-compose file I'm using on my end:

---
version: '3'
services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:$ELASTIC_VERSION
    environment:
      - bootstrap.memory_lock=true
      - ELASTIC_PASSWORD=changeme
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - cluster.routing.allocation.disk.threshold_enabled=false
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
    networks: ['stack']

  kibana:
    image: docker.elastic.co/kibana/kibana:$ELASTIC_VERSION
    links:
      - elasticsearch
    ports:
      - 5601:5601
    networks: ['stack']
    depends_on:
      - elasticsearch

networks:
  stack: {}

We seem to be running into an XY problem here.

The problem we need to focus on is why Kibana can't connect to Elasticsearch. All the other things you're trying to do might give us information to help solve that, but they aren't the problem - the problem is why Kibana can't connect to ES.

Note: I also don't recommend stack-docker as the simplest way to get up and running. Yes, it's nice to have a single docker-compose that sets everything up, but it's also hard to diagnose and debug, especially if you're not well versed in all the underlying tools.
It's not one of the recommended installation methods in the documentation, and while it should work, there aren't a lot of experts to help sort out problems.
Feel free to keep using it, and we'll keep trying to help, but it's not what we recommend you start with when trying out the stack.

The problem here seems to be the value of ELASTIC_PASSWORD.
What we need to do as a first step is get this to work:

curl -uelastic:${ELASTIC_PASSWORD} "localhost:9200/"

Nothing else is going to work until that basic piece is working.

Environment handling in docker-compose can produce surprising results if you have the same variable defined in multiple places. Is it possible that your shell has a value for ELASTIC_PASSWORD? That would take precedence over the .env file.
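A quick check, assuming a Bourne-style shell:

echo "$ELASTIC_PASSWORD"

If that prints a value, the shell variable wins over whatever is in .env.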

curl -uelastic:test123 "100.98.26.181:9200/"
{"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

May I know how one sets up the username and password?
I mean, what does this authenticate against? All I have been doing is putting the password in the .env file via the ELASTIC_PASSWORD variable, and when I try passing it directly as shown above, it just fails with a 401 error.

@dadoonet I will try your entries and see if that works.

One question - I need the Machine Learning feature of the Elastic Stack. Will the entry below be required:

xpack.security.enabled=true

I am okay with trying out the 30-day trial license. May I know what entry I would need?
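(For what it's worth, the Elasticsearch log earlier in the thread already shows "license ... mode [trial] - valid", so the platinum image appears to start a trial automatically. Once authentication works, the current license can be checked with the 6.x license API:

curl -uelastic:${ELASTIC_PASSWORD} 'localhost:9200/_xpack/license'

)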

This is what my docker-compose file looks like:

https://pastebin.com/xL2B6DTT

Oops. It sounds like I disabled security in this docker-compose file, so the password is not used... I shared a bad example with you. Sorry.

Anyway.

I need the Machine Learning feature of the Elastic Stack. Will the entry below be required:

You don't need security to test machine learning.

@dadoonet Can you try my docker-compose once and see how it works in your environment? Except for network_mode: host, my docker-compose looks like yours, but the curl command still fails.

curl -uelastic:test123 "100.98.26.181:9200/"
curl: (7) Failed to connect to 100.98.26.181 port 9200: Connection refused
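(Connection refused, unlike the earlier 401s, means nothing is listening on that address at all. A quick triage, assuming the container is named elasticsearch as in the compose file:

docker ps
docker logs --tail 50 elasticsearch

docker ps shows whether the container is still running, and the log tail usually explains why it exited if it is not.)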

heartbeat           | 2018/02/13 09:22:22.351867 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:22:27.351891 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:22:37.351799 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:22:42.351878 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:22:47.351790 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:22:57.352246 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:02.351880 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:07.351897 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:17.351861 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:22.351868 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:27.351894 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:37.351968 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:42.351948 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:47.351890 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:23:57.351885 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:02.351779 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:07.351846 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:17.351863 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:22.351802 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:22.360727 output.go:74: ERR Failed to connect: Get http://elasticsearch:9200: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
heartbeat           | 2018/02/13 09:24:27.351916 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:37.351662 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:42.351796 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:47.351794 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:24:57.351919 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
heartbeat           | 2018/02/13 09:25:02.351879 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.

But I do need X-Pack for Machine Learning, right? What is the minimal config I would need to make Machine Learning work?
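(Once authentication is sorted out, a quick smoke test that ML is available on the platinum image is to hit one of the 6.x X-Pack ML endpoints, e.g.:

curl -uelastic:${ELASTIC_PASSWORD} 'localhost:9200/_xpack/ml/anomaly_detectors'

which should return an empty job list rather than an error.)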

BTW, the docker-compose below works for me:

version: '2'

services:

  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:

  elk:
    driver: bridge

Our docker images have special steps in them to set up the passwords.

For a regular install of Elasticsearch and Kibana the steps are described here:

Essentially, there is a file (the keystore) inside the Elasticsearch configuration directory that contains an initial password for the elastic user. By default that is randomly generated, but it's possible to override it.
Then, using that initial password, you run a setup script to configure all the builtin passwords.
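(On a 6.x archive or package install, that setup script is, as far as I recall, invoked as:

bin/x-pack/setup-passwords interactive

which prompts for a new password for each built-in user, including kibana.)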

For the docker images, there's some magic going on to try and make it work in a way that fits into the wider docker ecosystem.
The ELASTIC_PASSWORD environment variable is used to set that initial password in the keystore as part of the entry point script that is configured in the Dockerfile.

For most docker use cases, we recommend that you change this password via the API after you start your container, so the ELASTIC_PASSWORD is no longer a valid authentication credential.
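For example, to change the built-in kibana user's password through the 6.x security API, something along these lines should work (NEW_PASSWORD being whatever you choose):

curl -uelastic:${ELASTIC_PASSWORD} -H 'Content-Type: application/json' \
  -XPUT 'localhost:9200/_xpack/security/user/kibana/_password' \
  -d '{"password":"NEW_PASSWORD"}'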

However, the stack-docker setup isn't really intended as a production use case, so it uses that same ELASTIC_PASSWORD for all users. There are scripts that use the X-Pack API to set those passwords.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.