Elasticsearch-setup-passwords kills the Elasticsearch process

Hi, I'm setting up a new cluster with security after successfully fixing an earlier one (see my earlier post, "Help a non-profit STIR the data").

The difference is that I've now configured TLS on the HTTP interface, so my usual elasticsearch-setup-passwords auto -b doesn't work any more. To get past the certificate check I added -E xpack.security.http.ssl.verification_mode=certificate.

This works up to about the middle of the process, then the Elasticsearch process is killed, my kubectl session is terminated, and I'm left without a password for the elastic user.

This is the full output with the passwords redacted:

$ kubectl exec -it es-cluster-2 -- bash 
[root@es-cluster-2 elasticsearch]# elasticsearch-setup-passwords auto -b -E xpack.security.http.ssl.verification_mode=certificate
Changed password for user apm_system
PASSWORD apm_system = 
    
Changed password for user kibana_system
PASSWORD kibana_system = 
    
command terminated with exit code 137

I'm running version 7.9 with a basic license.

I made another attempt in interactive mode, with the configuration already set up for the elastic user; the result is the same:

$ kubectl exec -it es-cluster-0 -- elasticsearch-setup-passwords interactive -b
Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana_system]: 
Reenter password for [kibana_system]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
command terminated with exit code 137

It dies just as it's about to set the password for elastic. There's nothing in the logs, by the way.

That looks very much like an out of memory issue - the process is being killed because it is exceeding the memory allocated to the container. Exit code 137 is 128 + 9, i.e. SIGKILL, which is typically what the kernel's OOM killer sends when a container hits its memory limit.
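If that is what's happening, Kubernetes records it in the pod status. A quick way to confirm (using the pod name es-cluster-2 from the transcript above; the grep pattern is just to narrow the output) would be something like:

$ kubectl describe pod es-cluster-2 | grep -A 2 'Last State'
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137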

Hello Tim, thanks for the reply.

I have the following configuration:

          limits:
            cpu: 400m
            memory: 4G
          requests:
            cpu: 50m
            memory: 3G

And

        - name: ES_JAVA_OPTS
          value: -Xms3g -Xmx3g

Is this OK?

The nodes running ES probably also host some other pods, but I would think Kubernetes takes care of that?

The heap size should be set to no more than 50% of available RAM, and you have it set far beyond that (a 3g heap against a 4G container limit is 75%). Either increase the RAM allocated or reduce the heap size.
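With your current 4G limit that would mean a heap of at most 2g, for example:

        - name: ES_JAVA_OPTS
          value: -Xms2g -Xmx2g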


That's interesting, Christian. I don't claim to understand how this works, but what you say sounds like wasting a lot of RAM. Can you point to a justification for this?

As a workaround I was able to create a new superuser using the elasticsearch-users tool; with that user I could log in to Kibana and change the elastic password from there. The command I used is sketched below.
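Roughly like this, run inside one of the pods (rescue_admin is just a placeholder, pick your own username and password):

$ kubectl exec -it es-cluster-0 -- elasticsearch-users useradd rescue_admin -p <password> -r superuser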

Elasticsearch relies on the OS page cache for performance, so that RAM is not wasted. It also stores some data structures off-heap, which is why the 50% recommendation is in place. I would recommend reading this blog post and the docs for more information.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.