Upgrade to kibana 7.5 failed (partially)

I upgraded my kibana instance from 4.4 to 7.5 today on my CentOS 7 server.

I originally installed using the RPM method, so that's what I used for the upgrade.

The upgrade appeared successful, but it was a bit of a mess. I had to manually create a kibana user and group, and I still can't get the service to start using systemctl. To get it to work I have to run: sudo -u kibana /usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml &

The service IS set up to run as kibana in /etc/systemd/system/kibana.service.
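
To double-check that systemd is actually picking up that unit (and the User=kibana setting), I've been running something along these lines (standard systemctl commands, nothing Kibana-specific):

# show the unit file systemd is actually using, in case an older one is overriding it
systemctl cat kibana

# confirm the User/Group the service resolves to
systemctl show -p User -p Group kibana

# reload systemd after any hand edits to the unit file
sudo systemctl daemon-reload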

I believe the problem is perhaps related to the permissions on /usr/share/kibana/optimize/.babel_register_cache.json. I have tried several different permissions on both the file and the optimize folder. Right now the permissions look like this:

ls -ltra optimize/.babel_register_cache.json
-rw-rw-r-- 1 kibana kibana 158M Jan 15 20:53 optimize/.babel_register_cache.json

Earlier, the ownership was a deleted user/group, 966:963 IIRC.
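
For reference, the kind of thing I've been trying is just handing the whole optimize directory back to the kibana user (paths assume the default RPM layout under /usr/share/kibana):

# give the optimize cache (and everything in it) back to the kibana user
sudo chown -R kibana:kibana /usr/share/kibana/optimize
sudo chmod -R u+rwX /usr/share/kibana/optimize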

Here are some of the errors I'm seeing:

[root@ip-10-10-4-80 optimize]# systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2020-01-15 20:04:54 UTC; 1min 0s ago
Process: 5716 ExecStart=/usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml (code=exited, status=1/FAILURE)
Main PID: 5716 (code=exited, status=1/FAILURE)

Jan 15 20:04:51 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: kibana.service: main process exited, code=exited, status=1/FAILURE
Jan 15 20:04:51 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: Unit kibana.service entered failed state.
Jan 15 20:04:51 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: kibana.service failed.
Jan 15 20:04:54 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: kibana.service holdoff time over, scheduling restart.
Jan 15 20:04:54 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: Stopped Kibana.
Jan 15 20:04:54 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: start request repeated too quickly for kibana.service
Jan 15 20:04:54 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: Failed to start Kibana.
Jan 15 20:04:54 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: Unit kibana.service entered failed state.
Jan 15 20:04:54 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: kibana.service failed.
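
Once it hits that start-limit state, I also have to clear it before systemd will even attempt another start:

# clear the start-limit / failed state, then try again
sudo systemctl reset-failed kibana
sudo systemctl start kibana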

From journalctl:

Jan 15 20:48:16 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: Started Kibana.
Jan 15 20:48:17 ip-10-10-4-80.us-west-2.compute.internal kibana[3712]: /usr/share/kibana/node_modules/@babel/register/lib/cache.js:80
Jan 15 20:48:17 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: kibana.service: main process exited, code=exited, status=1/FAILURE
Jan 15 20:48:17 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: Unit kibana.service entered failed state.
Jan 15 20:48:17 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: kibana.service failed.
Jan 15 20:48:20 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: kibana.service holdoff time over, scheduling restart.
Jan 15 20:48:20 ip-10-10-4-80.us-west-2.compute.internal systemd[1]: Stopped Kibana.
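
The actual error after that cache.js:80 line gets cut off in the snippet above; I suspect the rest of the stack trace is in the journal, so I've been pulling a larger chunk with something like:

journalctl -u kibana --no-pager -n 200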

Any help is appreciated!

Interesting. I ran into a permissions problem on Kibana 7.5.1 with that same .babel_register_cache.json file, but in my case it was related to using the keystore. If you're using the keystore, that may be the problem. Have a look at the topic I created and see if you think it's the same issue: Kibana 7.5.1 keystore permissions topic
To get Kibana to run, I removed everything from the keystore and put the values directly in the kibana.yml file. That got everything working for me. Note that I am working from a fresh install, so there are some differences.
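
If it turns out you are using the keystore after all, the steps I followed were roughly this (elasticsearch.password is just an example entry; use whatever kibana-keystore list shows on your system):

# see what entries are in the keystore
sudo -u kibana /usr/share/kibana/bin/kibana-keystore list

# remove an entry (repeat for each one listed)
sudo -u kibana /usr/share/kibana/bin/kibana-keystore remove elasticsearch.password

# ...then set the equivalent value directly in /etc/kibana/kibana.yml, e.g.
# elasticsearch.password: "changeme"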

My permissions on the .babel_register_cache.json file are the same as yours:
sudo ls -l /usr/share/kibana/optimize/.babel_register_cache.json
-rw-rw-r-- 1 kibana kibana 2 Jan 14 15:43 /usr/share/kibana/optimize/.babel_register_cache.json

One more thing: I have logging enabled in kibana.yml. It logs to /var/log/kibana/kibana.log (I had to create the kibana directory and set its ownership to kibana:kibana). That's where I got the detailed log explaining that the fatal error was that the keys in the keystore couldn't be accessed. If you have logs going to /var/log/kibana/kibana.log, you may get more information there.

If you don't and you decide to enable it, be warned that Kibana does no log management of its own, so you'll need something like logrotate to manage the logs or they'll just keep growing. There's also very little documentation on how to configure logrotate for Kibana; this site is helpful: Kibana - setup log rotation
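
For reference, the relevant pieces on my system look roughly like this (the log path is just what I chose, and the logrotate config follows the linked post; it's one of several reasonable setups):

# in /etc/kibana/kibana.yml
logging.dest: /var/log/kibana/kibana.log

# /etc/logrotate.d/kibana
/var/log/kibana/kibana.log {
    daily
    rotate 7
    copytruncate
    compress
    missingok
    notifempty
}

Using copytruncate means logrotate copies and truncates the file in place, so Kibana doesn't need to reopen its log file when rotation happens.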

Thanks for the reply

I wasn't using keystores.

I worked on the permissions some more, with no luck. I probably could have sorted it out if I'd had a working stack to compare against, but I didn't have one available.

I ended up using yum to remove kibana. Then I manually deleted all the kibana folders that were still hanging around (/usr/share/kibana, /opt/kibana, and /etc/kibana, after saving a copy of kibana.yml). I confirmed that the uninstall had cleaned out the kibana user (it had).

Then I re-installed. Once I changed the permissions to 774 on the log file I'd set up at /var/log/kibana/kibana.log, systemctl was able to start the service.
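
In case it's useful to anyone else hitting this, the cleanup/reinstall went roughly like this (double-check the paths before deleting anything; your layout may differ):

# back up the config first
sudo cp /etc/kibana/kibana.yml /root/kibana.yml.bak

# remove the package and any leftover directories
sudo yum remove kibana
sudo rm -rf /usr/share/kibana /opt/kibana /etc/kibana

# reinstall and restore the config
sudo yum install kibana
sudo cp /root/kibana.yml.bak /etc/kibana/kibana.yml

# recreate the log location with permissions the kibana user can use
sudo mkdir -p /var/log/kibana
sudo touch /var/log/kibana/kibana.log
sudo chown -R kibana:kibana /var/log/kibana
sudo chmod 774 /var/log/kibana/kibana.log

sudo systemctl daemon-reload
sudo systemctl start kibana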

Glad to see you got Kibana up and running! I'll go check the permissions on the kibana folders on my system.
