X-Pack installation error - segmentation fault

I installed the X-Pack plugin for Elasticsearch. When I tried installing it as a Kibana plugin, it threw a segmentation fault error. Following this procedure, I ran an offline installation of X-Pack using

bin/kibana-plugin install file:///path/to/file/x-pack-5.0.2.zip

Below is the entire error:

Attempting to transfer from file:///path/to/file/x-pack-5.0.2.zip
Transferring `<some number>` bytes....................
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Segmentation fault

I am running my server behind a proxy, so direct installation is not working with
bin/elasticsearch-plugin install x-pack
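
For reference, my understanding is that the elasticsearch-plugin installer can be pointed at a proxy through ES_JAVA_OPTS; the proxy host and port below are only placeholders (and note this would overwrite the ES_JAVA_OPTS heap settings mentioned below for that one command):

export ES_JAVA_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128"
bin/elasticsearch-plugin install x-pack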

On RHEL 7.2, I am running Oracle Java 8 with _JAVA_OPTIONS="-Xms1024m -Xmx1024m".
I also set ES_JAVA_OPTS="-Xms1024m -Xmx1024m".

What could be the source of this error, and how can I solve it?
I saw this post, but it seems to have been closed because of inactivity.

Hey,

don't worry about any Java settings, as Kibana and its plugin installation are not based on Java.

First, can you set up a temporary internet connection and try to run bin/kibana-plugin install x-pack?

Also, some more info would be useful. Which Linux is this (RH 7.2 - is this CentOS)? Can you run uname -a? Did you download the correct 32-bit/64-bit version of Kibana?

--Alex

uname -a

Linux <FQDN> ... x86_64 GNU/Linux

cat /etc/redhat-release

Red Hat Enterprise Linux server 7.2 (Maipo)

I installed the x64 version of Kibana; I verified this during the installation steps.
I also tried the direct installation instead of the offline one, and it's not working. I even exported the correct http_proxy and https_proxy environment variables. When I ran bin/kibana-plugin install x-pack, I got this error:

Client request error. getaddrinfo ENOTFOUND artifacts.elastic.co:443
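
The ENOTFOUND error makes me think the installer tried to resolve artifacts.elastic.co directly instead of going through the proxy (I am not sure whether bin/kibana-plugin honours these variables at all). This is roughly what I did, with a placeholder proxy URL, plus a check that the proxy itself can reach the download host:

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
# verify that the proxy can resolve and reach the host the installer needs
curl -sI https://artifacts.elastic.co | head -n 1
bin/kibana-plugin install x-pack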

Coming back to the offline install: I installed Kibana and ran it before installing X-Pack, and it was working well. I could see the indices loaded from Elasticsearch in Kibana by logging in as the users I had created on the Elasticsearch side. But I could not see the Monitoring tab or a UI for creating users and roles. That is my primary use case for migrating from Shield (creating users through a UI).

Also, as mentioned in the same post about installing X-Pack, I am running the installation as the root user (superuser permissions).

I checked the log produced during that error in /var/log/messages. It shows segfault error 6, which according to this post means: the cause was a user-mode write resulting in no page being found.
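
For reference, this is roughly how I pulled the relevant lines out of the log:

# show the most recent kernel segfault entries
grep -i segfault /var/log/messages | tail -n 5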

I see that before the X-Pack installation there were some files in /usr/share/kibana/optimize/bundles, namely kibana.bundle.js and timelion.bundle.js. During the installation of X-Pack, I think these files are being rewritten, and that is probably where the error occurs. After the error, I could no longer see these two files in that folder.
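
If those bundles are being left half-written, would it be reasonable to delete the optimize directory and let Kibana rebuild it on the next start? Something like this, assuming the default RPM install path:

# Kibana regenerates the browser bundles on the next start
sudo rm -rf /usr/share/kibana/optimize
sudo systemctl restart kibana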

I have read that segfaults are often caused by ulimit settings. Below is the output of ulimit -a:

core file size          (blocks, -c) 12197
data seg size           (kbytes, -d) 12197
scheduling priority             (-e) 1248988
file size               (blocks, -f) 12197
pending signals                 (-i) 1248988
max locked memory       (kbytes, -l) 12197
max memory size         (kbytes, -m) 12197
open files                      (-n) 1022
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) 0
cpu time               (seconds, -t) 1022
max user processes              (-u) 1022
virtual memory          (kbytes, -v) 0

Could it be a problem with the stack size or file size limits above? I tried changing them in .bashrc by setting these two parameters to unlimited and logged back into the SSH session, but the values do not change, so the same error persists.
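
These are roughly the lines I added to .bashrc (stack size and file size set to unlimited); as far as I understand they only apply to new interactive shells for that user:

# added to ~/.bashrc
ulimit -s unlimited
ulimit -f unlimited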

Hi @krishna_chaitanya,

your suspicion about ulimits being the cause might be correct. To set the limits persistently, take a look at the file /etc/security/limits.conf. See man limits.conf or https://linux.die.net/man/5/limits.conf for details.

I think in some cases the segfault is caused by running out of memory. I'm afraid I don't know the details, but how much RAM does this machine have? If it's not a lot, and Elasticsearch is also running, maybe you could stop both it and Kibana while installing X-Pack on Kibana. You can install/remove Kibana plugins while it's running, but a restart is required to use the new plugins anyway, so you might as well have Kibana stopped while installing plugins.
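
For example, assuming both run as systemd services under the default unit names, something like:

sudo systemctl stop kibana elasticsearch
cd /usr/share/kibana
sudo bin/kibana-plugin install file:///path/to/file/x-pack-5.0.2.zip
sudo systemctl start elasticsearch kibana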

Regards,
Lee

I have around 2 GB of free memory on the machine; total RAM is 4 GB. I just checked with free.

The ulimits never get updated, even though I am the root user and edited the soft and hard limits in /etc/security/limits.conf. I logged out of the SSH session and logged back in, but the changes are not reflected (I checked with ulimit -a). I also tried re-installing Kibana and X-Pack, and it still gives a segfault during the X-Pack installation.

Below is limits.conf:

root soft memlock unlimited
root hard memlock unlimited
root soft fsize unlimited
root hard fsize unlimited
root soft stack 8192
root hard stack 8192
kibana soft memlock unlimited
kibana hard memlock unlimited
kibana soft fsize unlimited
kibana hard fsize unlimited
kibana soft stack 8192
kibana hard stack 8192

Any suggestions on how to make the changes to limits.conf effective?

I tried logging out and logging back in. I also restarted the server. Nothing worked.
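
Is there a way to verify which limits the running Kibana process actually ends up with? I assume something like this would show them:

# find the Kibana (node) process id, then inspect its effective limits
pgrep -f kibana
cat /proc/<pid>/limits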

Ok, when suggesting the limits.conf file I was not paying attention to the fact that you are running RHEL 7. In that case the limits for system services are managed by systemd on a per-service basis. You can find all available limit settings in the systemd.exec man page. To avoid having to modify the upstream .service file, it is recommended to use systemd's drop-in mechanism, as described in the systemd.unit man page. In your case that would mean adding a file at /etc/systemd/system/kibana.service.d/limits.conf containing something like

[Service]
LimitMEMLOCK=infinity
LimitFSIZE=infinity
LimitSTACK=8M

Don't forget to run systemctl daemon-reload after changing the file, before restarting the service with systemctl restart kibana. In general I would advise using sensible numeric values instead of infinity to preserve system stability.
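
In practice that would be something along these lines:

mkdir -p /etc/systemd/system/kibana.service.d
# put the [Service] snippet above into /etc/systemd/system/kibana.service.d/limits.conf
systemctl daemon-reload
systemctl restart kibana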
