Cannot connect to Elasticsearch cluster with security enabled

Hello, I have a docker-compose cluster of Elasticsearch nodes that I created with security enabled, following this how-to.

Now I want to connect Kibana to it. Unfortunately, nothing is listening on port 5601 and I don't know why. Here is my setup.

The ES cluster is up and responds over TLS:

root@saigon:~/elk: curl --cacert certs/ca/ca.crt -u elastic:PASSWD https://localhost:9200
{
  "name" : "es01",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "ioYAncc6Tz2DcgSJodiBIA",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

The Kibana configuration is the following:

root@saigon:~/elk: grep -v ^#  /etc/kibana/kibana.yml | grep -v ^$
logging.dest: /home/wilco/logs/kibana.log
xpack.reporting.encryptionKey: "AAAAA"
xpack.reporting.kibanaServer.port: 443
xpack.reporting.kibanaServer.protocol: https
xpack.reporting.kibanaServer.hostname: 163.xxx.xxx.xxx
elasticsearch.url: "https://localhost:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "THE_PASS"
elasticsearch.ssl.certificate: "/root/elk/certs/ca/ca.crt"
elasticsearch.ssl.key: "/root/elk/certs/ca/ca.key"
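
For reference, a hedged sketch (assuming Kibana 6.x setting names; worth verifying against the configuration reference for your exact version): the CA used to verify the Elasticsearch server certificate normally goes under elasticsearch.ssl.certificateAuthorities, while elasticsearch.ssl.certificate and elasticsearch.ssl.key are a client certificate/key pair, only needed when Elasticsearch demands client certs. Pointing them at the CA's own cert and private key is unusual:

```yaml
# Sketch of the TLS part of kibana.yml for Kibana 6.x (setting names
# assumed; check your version's configuration reference):
elasticsearch.url: "https://localhost:9200"
elasticsearch.ssl.certificateAuthorities: [ "/root/elk/certs/ca/ca.crt" ]
# elasticsearch.ssl.certificate / elasticsearch.ssl.key are only for
# client authentication, and then they should be a client pair, not the
# CA's certificate and private key.
```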

The Kibana server is not listening:

root@saigon:~/elk# curl localhost:5601
curl: (7) Failed to connect to localhost port 5601: Connection refused
root@saigon:~/elk# lsof -i :5601
root@saigon:~/elk#

and the log loops on these lines:

{"type":"log","@timestamp":"2018-05-14T13:47:08Z","tags":["status","plugin:kibana@6.2.2","info"],"pid":9485,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-05-14T13:47:13Z","tags":["status","plugin:kibana@6.2.2","info"],"pid":9510,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-05-14T13:47:18Z","tags":["status","plugin:kibana@6.2.2","info"],"pid":9522,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2018-05-14T13:47:22Z","tags":["status","plugin:kibana@6.2.2","info"],"pid":9566,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
....

Hmm, it really looks like your Kibana isn't running, since you did not override the port in your Kibana config. Are you using docker-compose for it? Could you run a docker-compose ps?

I did not run Kibana from Docker, but from the Debian package. Here are the processes that are running: one Kibana node process and two Java processes started by docker-compose.

Could you execute the following command to show the open files and sockets of Kibana and paste its output here:

lsof -Pp 11460

Of course, replace 11460 with whatever process ID Kibana has on your system at the time.

No port is open:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
45451 kibana    20   0  968432 129896  21160 R 106.2  0.2   0:01.64 node
45462 root      20   0   42824   3668   2992 R   6.2  0.0   0:00.01 top
    1 root      20   0  204716   7124   5344 S   0.0  0.0   1:41.77 systemd
root@saigon:~#
root@saigon:~# lsof -Pp 45451
root@saigon:~#

Is this the real output of your lsof? It should also print all open files; if it returned NOTHING at all, that would be very strange, because then not even node could have started.

There should at least be some open libc libraries, the node binary, and so on. If that output is truly empty, something else may be broken on that system.

Ah OK, it seems that Kibana is constantly restarting and the PID changes (I had an old one).

I started Kibana using service kibana start; maybe it is systemd that keeps restarting it when it fails. BTW, it is Debian 9.

root@saigon:~# lsof -Pp $(ps -aef | grep kibana | grep -v grep | cut -d" " -f 4)
COMMAND   PID   USER   FD      TYPE             DEVICE SIZE/OFF     NODE NAME
node    46404 kibana  cwd       DIR                8,2     4096        2 /
node    46404 kibana  rtd       DIR                8,2     4096        2 /
node    46404 kibana  txt       REG                8,2 30559647 26741578 /usr/share/kibana/node/bin/node
node    46404 kibana  mem       REG                8,2  1689360 18087957 /lib/x86_64-linux-gnu/libc-2.24.so
node    46404 kibana  mem       REG                8,2   135440 18087974 /lib/x86_64-linux-gnu/libpthread-2.24.so
node    46404 kibana  mem       REG                8,2    92584 18087950 /lib/x86_64-linux-gnu/libgcc_s.so.1
node    46404 kibana  mem       REG                8,2  1063328 18087963 /lib/x86_64-linux-gnu/libm-2.24.so
node    46404 kibana  mem       REG                8,2  1566168 24384611 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.22
node    46404 kibana  mem       REG                8,2    31744 18087976 /lib/x86_64-linux-gnu/librt-2.24.so
node    46404 kibana  mem       REG                8,2    14640 18087962 /lib/x86_64-linux-gnu/libdl-2.24.so
node    46404 kibana  mem       REG                8,2   153288 18087951 /lib/x86_64-linux-gnu/ld-2.24.so
node    46404 kibana    0r      CHR                1,3      0t0     1028 /dev/null
node    46404 kibana    1u     unix 0xffff978ef3947000      0t0  7566642 type=STREAM
node    46404 kibana    2u     unix 0xffff978ef3947000      0t0  7566642 type=STREAM
node    46404 kibana    3r     FIFO               0,10      0t0  7566226 pipe
node    46404 kibana    4w     FIFO               0,10      0t0  7566226 pipe
node    46404 kibana    5u  a_inode               0,11        0     8919 [eventpoll]
node    46404 kibana    6r     FIFO               0,10      0t0  7566227 pipe
node    46404 kibana    7w     FIFO               0,10      0t0  7566227 pipe
node    46404 kibana    8u  a_inode               0,11        0     8919 [eventfd]
node    46404 kibana    9r      CHR                1,3      0t0     1028 /dev/null
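
As an aside, the cut -d" " -f 4 above only works by coincidence: ps pads its columns with runs of spaces, so the field index depends on the exact width of the preceding columns. awk (or pgrep -u kibana -x node) is more robust. A small deterministic illustration, using a made-up ps line:

```shell
# ps pads columns with spaces; cut treats every single space as a
# delimiter, so the field index shifts with the padding width. awk
# splits on runs of whitespace and always yields the PID as field 2.
line="root     46404     1  0 17:57 ?        00:00:01 node"
cut_pid=$(printf '%s\n' "$line" | cut -d" " -f4)    # empty: field 4 is a padding gap
awk_pid=$(printf '%s\n' "$line" | awk '{print $2}') # 46404
echo "cut: '$cut_pid'  awk: '$awk_pid'"
```

With pgrep the whole pipeline collapses to lsof -Pp "$(pgrep -x node -u kibana)" (assuming a single node process owned by the kibana user).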

Hmm, in that case could you check journalctl for any logs regarding the kibana service unit? Maybe there is something suspicious in there that explains why it keeps restarting.

There is not a lot of information:

root@saigon:~# journalctl -e
May 15 17:57:01 saigon systemd[1]: kibana.service: Unit entered failed state.
May 15 17:57:01 saigon systemd[1]: kibana.service: Failed with result 'exit-code'.
May 15 17:57:01 saigon systemd[1]: kibana.service: Service hold-off time over, schedulin
May 15 17:57:01 saigon systemd[1]: Stopped Kibana.
May 15 17:57:01 saigon systemd[1]: Started Kibana.
May 15 17:57:06 saigon systemd[1]: kibana.service: Main process exited, code=exited, sta
May 15 17:57:06 saigon systemd[1]: kibana.service: Unit entered failed state.
May 15 17:57:06 saigon systemd[1]: kibana.service: Failed with result 'exit-code'.
May 15 17:57:06 saigon systemd[1]: kibana.service: Service hold-off time over, schedulin
May 15 17:57:06 saigon systemd[1]: Stopped Kibana.
May 15 17:57:06 saigon systemd[1]: Started Kibana.
May 15 17:57:06 saigon sshd[48058]: Connection closed by 164.132.56.243 port 59886 [prea
May 15 17:57:11 saigon systemd[1]: kibana.service: Main process exited, code=exited, sta
May 15 17:57:11 saigon systemd[1]: kibana.service: Unit entered failed state.
May 15 17:57:11 saigon systemd[1]: kibana.service: Failed with result 'exit-code'.
May 15 17:57:11 saigon systemd[1]: kibana.service: Service hold-off time over, schedulin
May 15 17:57:11 saigon systemd[1]: Stopped Kibana.
May 15 17:57:11 saigon systemd[1]: Started Kibana.
May 15 17:57:16 saigon systemd[1]: kibana.service: Main process exited, code=exited, sta
May 15 17:57:16 saigon systemd[1]: kibana.service: Unit entered failed state.
May 15 17:57:16 saigon systemd[1]: kibana.service: Failed with result 'exit-code'.
May 15 17:57:16 saigon systemd[1]: kibana.service: Service hold-off time over, schedulin
May 15 17:57:16 saigon systemd[1]: Stopped Kibana.
May 15 17:57:16 saigon systemd[1]: Started Kibana.
May 15 17:57:20 saigon systemd[1]: kibana.service: Main process exited, code=exited, sta
May 15 17:57:20 saigon systemd[1]: kibana.service: Unit entered failed state.
May 15 17:57:20 saigon systemd[1]: kibana.service: Failed with result 'exit-code'.
May 15 17:57:21 saigon systemd[1]: kibana.service: Service hold-off time over, schedulin
May 15 17:57:21 saigon systemd[1]: Stopped Kibana.

It seems your columns are cut off, especially at the interesting part :smiley: Main process exited, code=exited, sta...? Could you also share the rest of those lines?
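
For reference, the truncation comes from journalctl paging its output through less with line-chopping enabled; a sketch of ways to see the full lines for just the Kibana unit (unit name assumed to be kibana.service):

```shell
# Print the last 50 entries for the unit, untruncated, without a pager:
journalctl -u kibana.service -n 50 --no-pager
# Or keep the pager but let long lines wrap instead of being chopped:
SYSTEMD_LESS=FRXMK journalctl -u kibana.service
```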

sorry

May 15 18:04:55 saigon systemd[1]: Started Kibana.
May 15 18:04:57 saigon kernel: [UFW BLOCK] IN=eno1 OUT= MAC=44:a8:42:34:78:4c:18:8b:9d:ac:9d:69:08:00 SRC=185.222.211.150 DST=163.172.195.89 LEN
May 15 18:04:59 saigon systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
May 15 18:04:59 saigon systemd[1]: kibana.service: Unit entered failed state.
May 15 18:04:59 saigon systemd[1]: kibana.service: Failed with result 'exit-code'.
May 15 18:05:00 saigon systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
May 15 18:05:00 saigon systemd[1]: Stopped Kibana.
May 15 18:05:00 saigon systemd[1]: Started Kibana.
May 15 18:05:01 saigon CRON[1204]: pam_unix(cron:session): session opened for user root by (uid=0)
May 15 18:05:01 saigon CRON[1206]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
May 15 18:05:01 saigon CRON[1205]: pam_unix(cron:session): session opened for user wilco by (uid=0)
May 15 18:05:01 saigon CRON[1204]: pam_unix(cron:session): session closed for user root
May 15 18:05:01 saigon CRON[1208]: (wilco) CMD (flock -n /home/wilco/apps/bigdata/.python-lock /home/wilco/apps/bigdata/cron-bigdata.sh)
May 15 18:05:01 saigon CRON[1205]: (CRON) info (No MTA installed, discarding output)
May 15 18:05:01 saigon CRON[1205]: pam_unix(cron:session): session closed for user wilco
May 15 18:05:04 saigon systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
May 15 18:05:04 saigon systemd[1]: kibana.service: Unit entered failed state.
May 15 18:05:04 saigon systemd[1]: kibana.service: Failed with result 'exit-code'.
May 15 18:05:04 saigon systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
May 15 18:05:04 saigon systemd[1]: Stopped Kibana.
May 15 18:05:04 saigon systemd[1]: Started Kibana.
May 15 18:05:09 saigon systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
May 15 18:05:09 saigon systemd[1]: kibana.service: Unit entered failed state.
May 15 18:05:09 saigon systemd[1]: kibana.service: Failed with result 'exit-code'.
May 15 18:05:09 saigon systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
May 15 18:05:09 saigon systemd[1]: Stopped Kibana.
May 15 18:05:09 saigon systemd[1]: Started Kibana.
May 15 18:05:09 saigon kernel: [UFW BLOCK] IN=eno1 OUT= MAC=44:a8:42:34:78:4c:18:8b:9d:ac:9d:69:08:00 SRC=125.212.217.215 DST=163.172.195.89 LEN

Hello, were you able to make anything of the logs?

I found that when I run Kibana from the command line as root, it works:

/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml
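
A hedged guess at why it works as root but crash-loops as a service: the Debian package runs Kibana as the kibana user, which normally cannot traverse /root to read the certificate paths configured in kibana.yml (and may not be able to write /home/wilco/logs/kibana.log either). A quick check, assuming a kibana system user exists:

```shell
# Can the service user actually read the CA cert referenced in kibana.yml?
sudo -u kibana test -r /root/elk/certs/ca/ca.crt \
  && echo "kibana can read the CA cert" \
  || echo "kibana cannot read the CA cert"
# namei shows which path component denies access (e.g. /root being 0700):
namei -l /root/elk/certs/ca/ca.crt
```

If that is the cause, moving the certs somewhere readable by the kibana user (for example under /etc/kibana) and updating the paths in kibana.yml should let the service start.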

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.