Picking up a configuration file change

Hi,

It seems that ECK does not pick up configuration changes. How do I make it pick them up? Do I restart the nodes? How can I do that via ECK?

Thanks

A configuration change (as described here) should automatically trigger a rolling restart of the nodes. Could you share more information about your use case?
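
If it helps, you can watch the rolling restart happen as you apply a change, for example with (replace the cluster name placeholder with yours):

    kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=<cluster-name> -w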

For instance, I have deployed nodes that have:

    config:
      node.master: true
      node.data: false
      node.ingest: false
      xpack.monitoring:
        enabled: false
        collection.enabled: false
        elasticsearch.collection.enabled: false
        exporters:
          platform-analytics:
            type: http
            host: 'http://${ES_MONITORING_HOST:localhost}:${ES_MONITORING_PORT:9200}'
            auth.username: '${ES_MONITORING_USERNAME:dummy}'
            auth.password: '${ES_MONITORING_PASSWORD:dummy}'

I change it to enable monitoring and apply the modified manifest to my Kubernetes cluster:

    config:
      node.master: true
      node.data: false
      node.ingest: false
      xpack.monitoring:
        enabled: true
        collection.enabled: true
        elasticsearch.collection.enabled: true
        exporters:
          platform-analytics:
            type: http
            host: 'http://${ES_MONITORING_HOST:localhost}:${ES_MONITORING_PORT:9200}'
            auth.username: '${ES_MONITORING_USERNAME:dummy}'
            auth.password: '${ES_MONITORING_PASSWORD:dummy}'
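
(For completeness: I apply it the usual way, assuming the manifest is saved as my-es.yaml:)

    kubectl apply -f my-es.yaml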

This change does not get picked up, and the Elasticsearch nodes/pods do not restart.

By the way, what is the best way to restart Elasticsearch pods that are deployed with ECK?

  • Do I delete them with kubectl and let the ECK reconciler bring them up again?
  • Do I use kubectl scale?
  • Is there a command I can issue for ECK to handle the restart?

I did a quick test with the two configurations you provided, and it does restart the Pod.

The hash of the config is used in the Pod template of the underlying StatefulSet to trigger a restart:

    kubectl get sts -o yaml | grep "elasticsearch.k8s.elastic.co/config-hash"
      elasticsearch.k8s.elastic.co/config-hash: "736351916"

Once a new configuration is applied:

    kubectl get sts -o yaml | grep "elasticsearch.k8s.elastic.co/config-hash"
      elasticsearch.k8s.elastic.co/config-hash: "1954170677"

  • Could you check the state of your cluster? (green/yellow/red)
  • Could you provide your full manifest?
  • Could you also check that the configuration inside the Pod is the expected one?

To unblock some situations you can delete the Pods (not the StatefulSets), but ECK should handle the restart for you.
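
For example, deleting all the Pods of a cluster by label (substitute your cluster name) lets the StatefulSet controller recreate them with the current spec:

    kubectl delete pod -l elasticsearch.k8s.elastic.co/cluster-name=<cluster-name>

Note that this restarts every node at once; on a production cluster, prefer deleting the Pods one at a time.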

I will try it out again, do some checks, and then get back to you. Thanks

Where can I see the config file? I don't see it in the Kubernetes manifests. Do I have to log onto the Pod and search for the file?

I tested it once more. It does not restart.

Before the config change:

    config:
      node.master: true
      node.data: false
      node.ingest: false
      xpack.monitoring:
        enabled: '${ES_MONITORING_ENABLE:false}'
        collection.enabled: '${ES_MONITORING_ENABLE:false}'
        elasticsearch.collection.enabled: '${ES_MONITORING_ENABLE:false}'
        exporters:
          platform-analytics:
            type: http
            host: 'http://${ES_MONITORING_HOST:localhost}:${ES_MONITORING_PORT:9200}'
            auth.username: '${ES_MONITORING_USERNAME:dummy}'
            auth.password: '${ES_MONITORING_PASSWORD:dummy}'

    kubectl get sts -o yaml | grep "elasticsearch.k8s.elastic.co/config-hash"
      elasticsearch.k8s.elastic.co/config-hash: "4122075556"
      elasticsearch.k8s.elastic.co/config-hash: "2173416708"
      elasticsearch.k8s.elastic.co/config-hash: "2898504457"

and after the exporter name change:

    config:
      node.master: true
      node.data: false
      node.ingest: false
      xpack.monitoring:
        enabled: '${ES_MONITORING_ENABLE:false}'
        collection.enabled: '${ES_MONITORING_ENABLE:false}'
        elasticsearch.collection.enabled: '${ES_MONITORING_ENABLE:false}'
        exporters:
          test:
            type: http
            host: 'http://${ES_MONITORING_HOST:localhost}:${ES_MONITORING_PORT:9200}'
            auth.username: '${ES_MONITORING_USERNAME:dummy}'
            auth.password: '${ES_MONITORING_PASSWORD:dummy}'

    kubectl get sts -o yaml | grep "elasticsearch.k8s.elastic.co/config-hash"
      elasticsearch.k8s.elastic.co/config-hash: "3694068196"
      elasticsearch.k8s.elastic.co/config-hash: "1602358852"
      elasticsearch.k8s.elastic.co/config-hash: "3600748037"

As you can see, all three hashes changed.
I looked at all seven of my pods and all of them have the change in their elasticsearch.yml, but none of them restarted. I know that because, if they had restarted, they would have picked up the changed Secrets that populate the ES_MONITORING_* environment variables. They did not.
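
For reference, this is how I checked; the Pod name follows the <cluster>-es-<nodeSet>-<ordinal> pattern ECK uses, and the config path is the standard one in the official image. A restart would also have reset the Pod start times, which it did not:

    # the rendered config inside one of the pods
    kubectl exec my-es-es-master-node-0 -- \
      cat /usr/share/elasticsearch/config/elasticsearch.yml
    # pod start times, unchanged after applying the manifest
    kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=my-es \
      -o custom-columns=NAME:.metadata.name,STARTED:.status.startTime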

Below is my full manifest for the cluster (I changed the names before posting it here):

apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch

metadata:
  name: my-es

spec:
  version: 7.4.2

  updateStrategy:
    changeBudget:
      maxSurge: 2
      maxUnavailable: 0
  
  podDisruptionBudget:
    spec:
      maxUnavailable: 1
      selector:
        matchLabels:
          elasticsearch.k8s.elastic.co/cluster-name: my-es

  http:
    tls:
      selfSignedCertificate:
        disabled: true

  nodeSets:

  - count: 3
    name: master-node
    config:
      node.master: true
      node.data: false
      node.ingest: false
      xpack.monitoring:
        enabled: '${ES_MONITORING_ENABLE:false}'
        collection.enabled: '${ES_MONITORING_ENABLE:false}'
        elasticsearch.collection.enabled: '${ES_MONITORING_ENABLE:false}'
        exporters:
          my-monitor:
            type: http
            host: 'http://${ES_MONITORING_HOST:localhost}:${ES_MONITORING_PORT:9200}'
            auth.username: '${ES_MONITORING_USERNAME:dummy}'
            auth.password: '${ES_MONITORING_PASSWORD:dummy}'

    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi

    podTemplate:
      metadata:
        labels:
          app: my-es
          nodesGroup: master
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: my-es
                topologyKey: kubernetes.io/hostname

        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']

        containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
          resources:
            limits:
              cpu: 1000m
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 2Gi
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms1512m -Xmx1512m"
          - name: ES_MONITORING_HOST
            valueFrom:
              secretKeyRef:
                key: host
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_PORT
            valueFrom:
              secretKeyRef:
                key: port
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_ENABLE
            valueFrom:
              secretKeyRef:
                key: accept-stack-monitoring-data
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_PASSWORD
            valueFrom:
              secretKeyRef:
                key: elastic
                name: monitoring-es-elastic-user
                optional: true
          - name: ES_MONITORING_USERNAME
            value: elastic

  - count: 2
    name: hot-data-node
    config:
      node.master: false
      node.data: true
      node.ingest: true
      node.attr.data: hot
      xpack.monitoring:
        enabled: '${ES_MONITORING_ENABLE:false}'
        collection.enabled: '${ES_MONITORING_ENABLE:false}'
        elasticsearch.collection.enabled: '${ES_MONITORING_ENABLE:false}'
        exporters:
          my-monitor:
            type: http
            host: 'http://${ES_MONITORING_HOST:localhost}:${ES_MONITORING_PORT:9200}'
            auth.username: '${ES_MONITORING_USERNAME:dummy}'
            auth.password: '${ES_MONITORING_PASSWORD:dummy}'

    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 40Gi

    podTemplate:
      metadata:
        labels:
          app: my-es
          nodesGroup: hot-data
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: my-es
                topologyKey: kubernetes.io/hostname

        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']

        containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
          resources:
            limits:
              cpu: 1000m
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 2Gi
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms1g -Xmx1g"
          - name: ES_MONITORING_HOST
            valueFrom:
              secretKeyRef:
                key: host
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_PORT
            valueFrom:
              secretKeyRef:
                key: port
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_ENABLE
            valueFrom:
              secretKeyRef:
                key: accept-stack-monitoring-data
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_PASSWORD
            valueFrom:
              secretKeyRef:
                key: elastic
                name: monitoring-es-elastic-user
                optional: true
          - name: ES_MONITORING_USERNAME
            value: elastic

  - count: 2
    name: warm-data-node
    config:
      node.master: false
      node.data: true
      node.ingest: false
      node.attr.data: warm
      xpack.monitoring:
        enabled: '${ES_MONITORING_ENABLE:false}'
        collection.enabled: '${ES_MONITORING_ENABLE:false}'
        elasticsearch.collection.enabled: '${ES_MONITORING_ENABLE:false}'
        exporters:
          my-monitor:
            type: http
            host: 'http://${ES_MONITORING_HOST:localhost}:${ES_MONITORING_PORT:9200}'
            auth.username: '${ES_MONITORING_USERNAME:dummy}'
            auth.password: '${ES_MONITORING_PASSWORD:dummy}'

    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 150Gi

    podTemplate:
      metadata:
        labels:
          app: my-es
          nodesGroup: warm-data
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    elasticsearch.k8s.elastic.co/cluster-name: my-es
                topologyKey: kubernetes.io/hostname

        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']

        containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
          resources:
            limits:
              cpu: 1000m
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 2Gi
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms1g -Xmx1g"
          - name: ES_MONITORING_HOST
            valueFrom:
              secretKeyRef:
                key: host
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_PORT
            valueFrom:
              secretKeyRef:
                key: port
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_ENABLE
            valueFrom:
              secretKeyRef:
                key: accept-stack-monitoring-data
                name: monitoring-elasticsearch
                optional: true
          - name: ES_MONITORING_PASSWORD
            valueFrom:
              secretKeyRef:
                key: elastic
                name: monitoring-es-elastic-user
                optional: true
          - name: ES_MONITORING_USERNAME
            value: elastic

Sorry! The first hash above was from a different cluster, the one that is meant to receive the monitoring data, so it is good that it has not changed. But there was still no restart even though the configuration changed.

I will edit the above post to avoid confusion.

I did not pay attention to this at first, as the cluster was showing green, but it gets stuck in the ApplyingChanges phase.

    NAME                                                   HEALTH   NODES   VERSION   PHASE             AGE
    elasticsearch.elasticsearch.k8s.elastic.co/analytics   green    7       7.4.2     ApplyingChanges   28m

Could you enable debug logs and send them to us?
Something seems to be preventing the upgrade process from making progress.
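
If I remember the beta manifests correctly, you can turn on debug logging by editing the operator StatefulSet and setting the flag (the namespace is assumed to be the default elastic-system):

    kubectl edit statefulset elastic-operator -n elastic-system
    # in the operator container args, set:
    #   - --enable-debug-logs=true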

How can I attach a file here?

I enabled debug logging as described, but it does not seem to give a lot of info.
I cannot attach the whole file, so below is the part from around the time I applied the changed manifest (about 14:51):

{"level":"info","@timestamp":"2020-01-09T14:47:52.595Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":44,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:47:53.358Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:47:53.674Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":44,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:47:53.674Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":44,"namespace":"default","name":"user-analytics","took":1.078530834}
{"level":"info","@timestamp":"2020-01-09T14:51:51.106Z","logger":"license-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":9,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:51.107Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":45,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:51.108Z","logger":"license-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":9,"namespace":"default","name":"user-analytics","took":0.001501708}
{"level":"info","@timestamp":"2020-01-09T14:51:51.872Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"Secret","namespace":"default","name":"user-analytics-es-master-node-es-config"}
{"level":"info","@timestamp":"2020-01-09T14:51:51.906Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"StatefulSet","namespace":"default","name":"user-analytics-es-master-node"}
{"level":"info","@timestamp":"2020-01-09T14:51:51.927Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"Secret","namespace":"default","name":"user-analytics-es-hot-data-node-es-config"}
{"level":"info","@timestamp":"2020-01-09T14:51:51.949Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"StatefulSet","namespace":"default","name":"user-analytics-es-hot-data-node"}
{"level":"info","@timestamp":"2020-01-09T14:51:51.981Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"Secret","namespace":"default","name":"user-analytics-es-warm-data-node-es-config"}
{"level":"info","@timestamp":"2020-01-09T14:51:51.998Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"StatefulSet","namespace":"default","name":"user-analytics-es-warm-data-node"}
{"level":"info","@timestamp":"2020-01-09T14:51:52.024Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:52.185Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":45,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:52.185Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":45,"namespace":"default","name":"user-analytics","took":1.078323031}
{"level":"info","@timestamp":"2020-01-09T14:51:52.185Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":46,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:52.957Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:53.092Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":46,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:53.114Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":46,"namespace":"default","name":"user-analytics","took":0.928669022}
{"level":"info","@timestamp":"2020-01-09T14:51:53.116Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":47,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:53.116Z","logger":"license-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":10,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:53.121Z","logger":"license-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":10,"namespace":"default","name":"user-analytics","took":0.004786726}
{"level":"info","@timestamp":"2020-01-09T14:51:53.894Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:54.037Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":47,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:51:54.037Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":47,"namespace":"default","name":"user-analytics","took":0.921244181}
{"level":"info","@timestamp":"2020-01-09T14:52:03.116Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":48,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:03.892Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:04.038Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":48,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:04.038Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":48,"namespace":"default","name":"user-analytics","took":0.921323182}
{"level":"info","@timestamp":"2020-01-09T14:52:14.038Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":49,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:14.785Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:14.912Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":49,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:14.912Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":49,"namespace":"default","name":"user-analytics","took":0.874325127}
{"level":"info","@timestamp":"2020-01-09T14:52:24.912Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":50,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:25.649Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:25.775Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":50,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-09T14:52:25.775Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":50,"namespace":"default","name":"user-analytics","took":0.862650764}

Are you sure that debug logs are enabled? It is a little bit surprising that there are none here.

I'm looking for this message.

Yep, I am sure. I double-checked and even tried again after restarting the operator pod. There is no Predicate... message.

I just changed the manifest back to the original and applied it again. Now it shows Health: green and Phase: Ready. It is no longer stuck.

@michael.morello Did you manage to reproduce the issue with the manifest file I sent? Do you need me to provide any more information?

No, sorry, I haven't had the chance to do it so far.
Regarding the logs you have provided, I'm pretty convinced that debug logs are not enabled: the operator is far more verbose when running in debug mode, so it is impossible not to have at least one message at the debug level.
That said, if the operator has been able to make progress, the logs will not be useful anymore.

I will try to deploy the operator from scratch with debug enabled and see whether there is more log output. I am not sure why it is not printing more messages. The ECK operator pod has restarted, so I would expect it to have picked up the log verbosity change.
I will post again with the results of this experiment.

As mentioned earlier, I removed the ECK operator completely and redeployed it with debug enabled from the start. I am pretty sure the debug option does not work in the version I have: 1.0.0-beta1-84792e30. There is no extra output. See the logs below.

I have done a couple of tests, and changing node counts works: nodes are added or removed appropriately. However, changes in the config section do not cause the nodes to restart, even though the new configuration is saved in config/elasticsearch.yml on all nodes and the config hashes change. This was checked with:

    kubectl get sts -o yaml | grep "elasticsearch.k8s.elastic.co/config-hash"

After applying a configuration change, Elasticsearch moves into the ApplyingChanges phase and gets stuck there.

    NAME                                                        HEALTH   NODES   VERSION   PHASE             AGE
    elasticsearch.elasticsearch.k8s.elastic.co/user-analytics   green    7       7.4.2     ApplyingChanges   61m

It is possible to get it back into the Ready phase by applying the previous manifest, i.e. with the changes reverted.

Logs: an excerpt from around the time the modified config was applied:

{"level":"info","@timestamp":"2020-01-13T14:57:57.754Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":156,"namespace":"default","name":"user-analytics","took":0.884567519}
{"level":"info","@timestamp":"2020-01-13T15:13:40.362Z","logger":"license-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":18,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:40.362Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":157,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:40.363Z","logger":"license-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":18,"namespace":"default","name":"user-analytics","took":0.000984806}
{"level":"info","@timestamp":"2020-01-13T15:13:41.110Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"Secret","namespace":"default","name":"user-analytics-es-master-node-es-config"}
{"level":"info","@timestamp":"2020-01-13T15:13:41.124Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"StatefulSet","namespace":"default","name":"user-analytics-es-master-node"}
{"level":"info","@timestamp":"2020-01-13T15:13:41.140Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"Secret","namespace":"default","name":"user-analytics-es-hot-data-node-es-config"}
{"level":"info","@timestamp":"2020-01-13T15:13:41.159Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"StatefulSet","namespace":"default","name":"user-analytics-es-hot-data-node"}
{"level":"info","@timestamp":"2020-01-13T15:13:41.179Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"Secret","namespace":"default","name":"user-analytics-es-warm-data-node-es-config"}
{"level":"info","@timestamp":"2020-01-13T15:13:41.205Z","logger":"generic-reconciler","message":"Updating resource","ver":"1.0.0-beta1-84792e30","kind":"StatefulSet","namespace":"default","name":"user-analytics-es-warm-data-node"}
{"level":"info","@timestamp":"2020-01-13T15:13:41.233Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:41.353Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":157,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:41.353Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":157,"namespace":"default","name":"user-analytics","took":0.990662841}
{"level":"info","@timestamp":"2020-01-13T15:13:41.353Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":158,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:42.127Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:42.231Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":158,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:42.257Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":158,"namespace":"default","name":"user-analytics","took":0.903315188}
{"level":"info","@timestamp":"2020-01-13T15:13:42.258Z","logger":"license-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":19,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:42.260Z","logger":"license-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":19,"namespace":"default","name":"user-analytics","took":0.00186421}
{"level":"info","@timestamp":"2020-01-13T15:13:42.261Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":159,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:42.984Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:43.099Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":159,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:43.099Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":159,"namespace":"default","name":"user-analytics","took":0.83816625}
{"level":"info","@timestamp":"2020-01-13T15:13:52.257Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":160,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:53.009Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:53.119Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":160,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:13:53.119Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":160,"namespace":"default","name":"user-analytics","took":0.861776076}
{"level":"info","@timestamp":"2020-01-13T15:14:03.119Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":161,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:03.857Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:03.976Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":161,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:03.976Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":161,"namespace":"default","name":"user-analytics","took":0.857081955}
{"level":"info","@timestamp":"2020-01-13T15:14:13.976Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":162,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:14.700Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:14.811Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":162,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:14.811Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":162,"namespace":"default","name":"user-analytics","took":0.835009944}
{"level":"info","@timestamp":"2020-01-13T15:14:24.811Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":163,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:25.526Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:25.639Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":163,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:25.640Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":163,"namespace":"default","name":"user-analytics","took":0.828207711}
{"level":"info","@timestamp":"2020-01-13T15:14:35.640Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":164,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:36.361Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:36.462Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":164,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:36.462Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":164,"namespace":"default","name":"user-analytics","took":0.822257384}
{"level":"info","@timestamp":"2020-01-13T15:14:46.462Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":165,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:47.169Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:47.276Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":165,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:47.276Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":165,"namespace":"default","name":"user-analytics","took":0.814236045}
{"level":"info","@timestamp":"2020-01-13T15:14:57.277Z","logger":"elasticsearch-controller","message":"Starting reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":166,"namespace":"default","name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:58.014Z","logger":"zen2","message":"Ensuring no voting exclusions are set","ver":"1.0.0-beta1-84792e30","namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:58.123Z","logger":"elasticsearch-controller","message":"Updating status","ver":"1.0.0-beta1-84792e30","iteration":166,"namespace":"default","es_name":"user-analytics"}
{"level":"info","@timestamp":"2020-01-13T15:14:58.123Z","logger":"elasticsearch-controller","message":"Ending reconciliation run","ver":"1.0.0-beta1-84792e30","iteration":166,"namespace":"default","name":"user-analytics","took":0.846484516}

@michael.morello Where can I raise a bug?