Problem running Elasticsearch in a minikube cluster

I have created a Kubernetes cluster using minikube, created a Helm chart for Elasticsearch, and installed it in the cluster.

I have also passed environment variables from the chart's deployment file to configure Elasticsearch:

    env:
      - name: ES_JAVA_OPTS
        value: -Xms32m -Xmx32m
      - name: bootstrap.memory_lock
        value: "true"
      - name: cluster.name
        value: dynizer
      - name: discovery.type
        value: single-node
      - name: network.bind_host
        value: 0.0.0.0
      - name: transport.host
        value: elasticsearch

I have set the container resources as:

    limits:
      cpu: 100m
      memory: 32Mi
    requests:
      cpu: 100m
      memory: 32Mi

The pod is in the Running state, but the pod's log displays:

    [2018-10-04T09:57:24,790][WARN ][o.e.b.JNANatives         ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
    [2018-10-04T09:57:25,091][WARN ][o.e.b.JNANatives         ] This can result in part of the JVM being swapped out.
    [2018-10-04T09:57:25,092][WARN ][o.e.b.JNANatives         ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
    [2018-10-04T09:57:25,092][WARN ][o.e.b.JNANatives         ] These can be adjusted by modifying /etc/security/limits.conf, for example: 
    	# allow user 'elasticsearch' mlockall
    	elasticsearch soft memlock unlimited
    	elasticsearch hard memlock unlimited
    [2018-10-04T09:57:25,093][WARN ][o.e.b.JNANatives         ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
    [2018-10-04T09:58:11,186][INFO ][o.e.n.Node               ] [] initializing ...
    [2018-10-04T09:58:24,183][INFO ][o.e.e.NodeEnvironment    ] [aasoChh] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [58.9gb], net total_space [117.2gb], types [ext4]
    [2018-10-04T09:58:24,220][INFO ][o.e.e.NodeEnvironment    ] [aasoChh] heap size [30.9mb], compressed ordinary object pointers [true]
    [2018-10-04T09:58:24,242][INFO ][o.e.n.Node               ] node name [aasoChh] derived from node ID [aasoChh5T_ealf7pgyKqiQ]; set [node.name] to override
    [2018-10-04T09:58:24,324][INFO ][o.e.n.Node               ] version[6.2.2], pid[1], build[10b1edd/2018-02-16T19:01:30.685723Z], OS[Linux/4.4.0-137-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_161/25.161-b14]
    [2018-10-04T09:58:24,590][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.IFIs4dl0, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Xms32m, -Xmx32m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
    [2018-10-04T10:18:52,761][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [aggs-matrix-stats]
    [2018-10-04T10:18:53,090][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [analysis-common]
    [2018-10-04T10:18:53,148][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [ingest-common]
    [2018-10-04T10:18:53,165][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [lang-expression]
    [2018-10-04T10:18:53,190][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [lang-mustache]
    [2018-10-04T10:18:53,190][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [lang-painless]
    [2018-10-04T10:18:53,190][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [mapper-extras]
    [2018-10-04T10:18:53,190][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [parent-join]
    [2018-10-04T10:18:53,191][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [percolator]
    [2018-10-04T10:18:53,192][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [rank-eval]
    [2018-10-04T10:18:53,208][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [reindex]
    [2018-10-04T10:18:53,208][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [repository-url]
    [2018-10-04T10:18:53,209][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [transport-netty4]
    [2018-10-04T10:18:53,209][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded module [tribe]
    [2018-10-04T10:18:53,425][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded plugin [ingest-geoip]
    [2018-10-04T10:18:53,425][INFO ][o.e.p.PluginsService     ] [aasoChh] loaded plugin [ingest-user-agent]

Also, `curl <ip>:<port>` returns:

    curl: (7) Failed to connect to 10.107.2.180 port 9200: Connection refused

where 10.107.2.180 is the ClusterIP of the service.
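
For reference, the Service and its endpoints can be inspected like this (the Service name `elasticsearch` is taken from the error message below; adjust if yours differs):

    # List the Elasticsearch Service and check whether it has any ready endpoints
    kubectl get svc elasticsearch
    kubectl get endpoints elasticsearch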
When I try to connect another service to this Elasticsearch service, that service's log says:

    panic: PANIC Can't connect to elastic http://elasticsearch:9200 [health check timeout: Head http://elasticsearch:9200: dial tcp: lookup elasticsearch on 10.96.0.10:53: no such host: no Elasticsearch node available.

Why is the service not available on this port?

From your error message it sounds like Elasticsearch couldn't allocate the memory needed to start up. There are two possible causes: either the server is out of memory, or you are trying to assign too large a Java heap size. Looking at your environment settings, the latter is probably the cause:

You are trying to assign a 32 GB Java heap size, and that is too much; you should not go above 31 GB, as pointed out in the official Setting the heap size documentation:

Don’t set Xmx to above the cutoff that the JVM uses for compressed object pointers (compressed oops); the exact cutoff varies but is near 32 GB.

So try reducing your ES_JAVA_OPTS values to 31G or 30G to stay safely under the JVM cutoff.

@Bernt_Rostad

Actually, the ES_JAVA_OPTS value `-Xms32m -Xmx32m` sets the heap size to 32 MB (not 32 GB), right? That means the specified heap size is far below the cutoff.

As pointed out in the official documentation on setting the heap size:

Set the minimum and maximum heap size to 2 GB:

    ES_JAVA_OPTS="-Xms2g -Xmx2g" ./bin/elasticsearch

Set the minimum and maximum heap size to 4000 MB:

    ES_JAVA_OPTS="-Xms4000m -Xmx4000m" ./bin/elasticsearch


Ah, sorry!! Yes, I'm so used to seeing G in ES_JAVA_OPTS that I only looked at the numbers. Sorry about that!

Yes, 32 MB is indeed far below the cutoff value, but it's probably also far too small to load all the Java classes and caches of the Elasticsearch process. I have never used a value below 2 GB, and I notice that the default setting in the jvm.options file that ships with the 6.4.1 distribution is 1 GB, so you should not use a value much below that.
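
In your chart's deployment, raising the heap to the 1 GB default would look something like this (just a sketch; the exact value depends on how much memory you have available):

    env:
      - name: ES_JAVA_OPTS
        value: -Xms1g -Xmx1g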

Good luck!

Since I am using a minikube cluster, I think I cannot set more than 1 GB of memory, because the minikube VM is configured by default to use only 1 GB of memory. Am I right?

I have never heard of minikube, so I don't know.

Do you run multiple Elasticsearch servers on the minikube? Do they have to share that 1 GB of memory, or do they each get 1 GB of Java heap?

If 1 GB is the maximum you can assign, you could try running with that, but if you still get the "Cannot allocate memory" error then you will need to expand the Java heap size somehow, though I'm afraid I can't help you with minikube.

As an illustration: in an 8-node cluster I'm running, I often see memory usage go up to 8-10 GB per node when I run heavy queries, so I would not have been able to run that cluster with just 1 GB of Java heap. But in your case, perhaps it will work with 1 GB?

It seems like 1 GB is the default for minikube, but you should be able to override this. I believe Elasticsearch will start with around a 512 MB heap, but such a small heap will limit what you can do with it.
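
For example, something like this should give the minikube VM more memory (4096 MB here is just an illustrative value; as far as I know the VM has to be recreated, since its memory is fixed when it is created):

    # Recreate the minikube VM with 4 GB of memory
    minikube stop
    minikube delete
    minikube start --memory 4096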

A minikube cluster has only one node. As of now, I am running only one Elasticsearch service in that cluster, but I should also be able to run services other than Elasticsearch there. So setting the heap size to 512 MB may help; let me try that.

I also forgot to mention something regarding the resources requested by the service. The two are related, as pointed out here: Docker and java: why app is OOMkilled. I have updated the details in the question. Could you please check it and tell me whether those settings will cause any issues?
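
Based on that link, I assume the container's memory limit needs to leave headroom above the JVM heap, so something like this, perhaps (the values are just a guess on my part):

    env:
      - name: ES_JAVA_OPTS
        value: -Xms512m -Xmx512m
    resources:
      limits:
        cpu: 100m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 1Gi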

@Christian_Dahlqvist

I tried with a 512 MB heap size, but the container kept restarting after I set it.
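
To check why, I can look at the pod's events and last terminated state (the pod name below is a placeholder):

    # Show events, restart count, and the last state of the container
    kubectl describe pod <elasticsearch-pod-name>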

Then I think you may need to assign more memory to minikube...
