I have the following Elasticsearch deployment (managed by the ECK operator):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  annotations:
    common.k8s.elastic.co/controller-version: 1.0.0
    elasticsearch.k8s.elastic.co/cluster-uuid: 1gZHNvZSSwK_0jP7QrFJpg
  creationTimestamp: "2020-01-30T04:21:35Z"
  generation: 6
  name: logs
  namespace: logs
  resourceVersion: "33407416"
  selfLink: /apis/elasticsearch.k8s.elastic.co/v1/namespaces/logs/elasticsearches/logs
  uid: d346d27a-1717-4627-b34a-2aaf86de020f
spec:
  http:
    service:
      metadata:
        creationTimestamp: null
      spec:
        type: LoadBalancer
    tls:
      certificate:
        secretName: magneto-tls
  nodeSets:
  - config:
      node.data: true
      node.ingest: true
      node.master: true
      node.store.allow_mmap: true
    count: 3
    name: default
    podTemplate:
      metadata:
        creationTimestamp: null
      spec:
        containers:
        - env:
          - name: ES_JAVA_OPTS
            value: -Xms4g -Xmx4g
          name: elasticsearch
          resources:
            requests:
              memory: 8Gi
    volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: elasticsearch-data
        namespace: logs
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: gce-ssd
      status: {}
  updateStrategy:
    changeBudget: {}
  version: 7.5.0
Initially this deployment used the default Java options. However, it began to fall over because the default heap size was simply not enough, so the ES_JAVA_OPTS environment variable was added. Unfortunately this has not changed anything; even forcing deletion and re-creation of the pod does not increase -Xms or -Xmx on the container.
usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF
is still being used for the Java options.
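As a diagnostic sketch (the pod name `logs-es-default-0` below is an assumption based on ECK's usual `<cluster>-es-<nodeSet>-<ordinal>` naming; substitute your actual pod), this is how one could check whether the env var made it through to the pod spec and into the running container:

```shell
# Does ES_JAVA_OPTS appear in the rendered pod spec?
# (pod name is an assumption -- substitute your actual pod)
kubectl -n logs get pod logs-es-default-0 \
  -o jsonpath='{.spec.containers[?(@.name=="elasticsearch")].env}'

# Does the running container actually see it?
kubectl -n logs exec logs-es-default-0 -- env | grep ES_JAVA_OPTS
```

If the variable shows up in both places but the JVM command line still lacks `-Xms4g -Xmx4g`, that would point at how the operator (or `jvm.options`) assembles the final Java flags rather than at the manifest itself.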
I'm not sure if I'm doing something wrong here or if this is a bug?