Elasticsearch persistent settings

On occasion, it's necessary for us to redeploy our ECK stack. Kubernetes makes this convenient, and we don't mind having to clear our persistent storage if it's necessary at the time, but it would be really helpful if other settings, such as users and custom roles, could persist. And now that 7.7.0 has alerting (very excited about that, by the way), we really don't want to have to recreate those alerts often. Could someone point me in the right direction on the best way to do this with ECK?

Apologies for the brevity, I wasn't sure how to explain or ask.

FWIW, we don't actually use PVCs in our environment for persistent storage. I was told we can't use ECK with persistent storage the way we do for our other apps. Because of this, ECK is deployed on a dedicated node using local storage:

        volumes:
        - name: elasticsearch-data
          hostPath: 
            path: /data/eck-elasticsearch

See this documentation page about declaratively creating users and roles.
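For reference, here is a rough sketch of what the declarative approach looks like on recent ECK versions (the exact fields have evolved between releases, so check the docs for yours). Every name below — the secret, the user, the role, the cluster — is a placeholder:

```yaml
# Sketch only: a file-realm user declared as a Secret, then referenced
# from the Elasticsearch spec. All names are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: my-filerealm-users
stringData:
  username: jdoe
  password: changeme        # or password_hash with a pre-computed bcrypt hash
  roles: superuser          # comma-separated list of role names
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.0
  auth:
    fileRealm:
    - secretName: my-filerealm-users
  nodeSets:
  - name: default
    count: 1
```

Because the Secret lives in the Kubernetes API rather than inside Elasticsearch's own storage, it survives a redeploy even if the data volumes are wiped.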

Unfortunately, there's currently no way to declaratively create your Kibana alerts in 7.7.0; it must be done through the API and/or the UI.
What you could do, I think, is set up a Kubernetes Job that performs those API calls for you (it could be a simple bash script that uses the Elasticsearch credentials stored in the Kubernetes Secrets).

I've already been down that road trying to create users and roles, and I wasn't able to get it working. Based on the discussion here, I think it's well known that it's far from intuitive. While the process is (sort of) documented, there are still enough gaps (e.g., how to generate the password hash) that I wasn't able to accomplish the goal.

Scripting could work. I was actually considering trying something similar with the API calls for user creation, since file-based user creation was a bust. I think I'll try my luck at that.

Here is an example of a CronJob that curls the snapshot API. You could maybe achieve something similar?

I should have tried this months ago. I have struggled with the file-based user/role crap for months. Not to mention every day I have to explain to my manager why we can't just use AD. "What do you mean it costs money? It's open source..."

But on your scripting suggestion: today I scripted the API calls to create the users and roles, and it was pretty easy. Fingers crossed the alerts are just as easy.
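For anyone following along, the API calls amount to a couple of PUTs against the `_security` endpoints. A minimal sketch (the service URL, role name, and user are all assumptions for my cluster; ECK names the HTTP service `<cluster>-es-http`). The script prints the curl commands rather than executing them, so it's safe to dry-run — drop the leading `echo` inside the function to run them for real:

```shell
#!/usr/bin/env bash
set -euo pipefail

# All names below (service URL, role, user) are placeholders -- adjust for your cluster.
ES_URL="${ES_URL:-https://quickstart-es-http:9200}"
ES_USER="${ES_USER:-elastic}"
ES_PASS="${ES_PASS:-changeme}"   # in-cluster this should come from a Secret, not a literal

# Print the curl invocation for one PUT against the _security API.
# Remove the leading 'echo' to execute for real (-k skips TLS verification
# for the self-signed ECK CA; mount the CA cert instead in production).
es_put() {
  local path="$1" body="$2"
  echo curl -sk -u "$ES_USER:$ES_PASS" -X PUT "$ES_URL/$path" \
    -H "Content-Type: application/json" -d "$body"
}

# Create a role, then a user that holds it.
es_put "_security/role/log_reader" \
  '{"indices":[{"names":["logs-*"],"privileges":["read"]}]}'
es_put "_security/user/jdoe" \
  '{"password":"s3cr3tpw","roles":["log_reader"],"full_name":"Jane Doe"}'
```

Because everything is just HTTP, the same pattern extends to any other setup calls you need to replay after a redeploy.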

I'm an operations guy, not a coder. I don't speak a lot of the language used in this ephemeral world very fluently. I was avoiding the API like the plague because it screams programming to me. Turns out I was just making it harder on myself. I might not be a coder, but I can write a script.

Thanks for the tip and for pointing me in the right direction. I like the CronJob example too, so I'm pretty sure I'll work with it to incorporate creating the users automatically as part of the deploy.

In the event that it helps someone else, and as a pay-it-forward, here is the Kubernetes Job I created that worked like a champ, running the curl commands for me in batch. I could have just as easily put them all in a bash script. I've learned a lot today. Thank you @sebgl for pointing me in the right direction.

---
apiVersion: batch/v1
kind: Job
metadata:
  name: descriptive-name-here
  namespace: descriptive-name-here
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: descriptive-name-here
        image: centos:7
        command:
        - "/bin/bash"
        - "-c"
        - |
          <curl commands here>

Also worth noting: I was lazy and just put the username/password directly in the curl commands. I think I'm going to go back now and use the "secret" method from the CronJob example.
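For anyone copying the Job later, wiring in the Secret looks roughly like this. It relies on the Secret ECK generates for the `elastic` user, named `<cluster-name>-es-elastic-user` with a key matching the username; the cluster name `quickstart` here is an assumption:

```yaml
# Sketch: same Job shape as above, but the password comes from the
# Secret ECK generates instead of being hard-coded in the curl command.
# Assumes an Elasticsearch cluster named "quickstart" in the same namespace.
apiVersion: batch/v1
kind: Job
metadata:
  name: es-api-setup
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: es-api-setup
        image: centos:7
        env:
        - name: ELASTIC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: quickstart-es-elastic-user
              key: elastic
        command:
        - "/bin/bash"
        - "-c"
        - |
          curl -sk -u "elastic:$ELASTIC_PASSWORD" \
            "https://quickstart-es-http:9200/_cluster/health"
```

That keeps the credential out of the manifest and out of your shell history.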