Transferring logs from one namespace to another

Hello,

After a lot of testing, I have a complete stack that works with everything in the same namespace: nginx, Filebeat, Logstash, Elasticsearch (ES), and Kibana.

Now I am trying to run the same components, but spread across two different namespaces:

  • namespace nsa: nginx, Filebeat, Logstash
  • namespace esk: ES, Kibana

In the esk namespace everything is OK (manual data injection works, and so on).
In the nsa namespace, the Filebeat and Logstash pods remain stuck in "ContainerCreating".
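
For anyone debugging the same state: the event that blocks a pod in ContainerCreating usually shows up in the pod description, e.g.:

# the pod name below is a placeholder; list the real ones with: kubectl get pods -n nsa
kubectl describe pod filebeat-xxxxx -n nsa
# the "Events" section at the bottom typically names the failing volume or mount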

I use:

  • Windows 10 PRO
  • WSL1 with Ubuntu 18.04
  • Docker Desktop

Here are the main files I use:

filebeat.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: nsa
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-

    tags: ["nsa"]

    filebeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false

    filebeat.autodiscover:

      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          hints.enabled: true

          templates:
            - condition:
                and:
                  - contains:
                      kubernetes.container.image: nginx
                  - equals:
                      kubernetes.namespace: nsa
              config:
                - module: nginx
                  access:
                    enabled: true
                    var.paths: ["/usr/share/filebeat/nginxlogs/access.log"]
                  error:
                    enabled: true
                    var.paths: ["/usr/share/filebeat/nginxlogs/error.log"]

    processors:
      - add_cloud_metadata:

      - add_host_metadata:
      - add_docker_metadata:

    output.logstash:
      hosts: ["logstash-nsa:5044"]

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: nsa
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.8.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: elasticsearch-es-http
            - name: ELASTICSEARCH_PORT
              value: "9200"
            #- name: ELASTICSEARCH_USERNAME
              #value: elastic
            #- name: ELASTICSEARCH_PASSWORD
              #valueFrom:
                #secretKeyRef:
                  #key: elastic
                  #name: elasticsearch-es-elastic-user
            - name: NODE_NAME
              # value: elasticsearch-es-elasticsearch-0
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
              readOnly: true
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            #- name: es-certs
              #mountPath: /mnt/elastic/tls.crt
              #readOnly: true
              #subPath: tls.crt
            - name: nginxlogs
              mountPath: /usr/share/filebeat/nginxlogs

      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
        #- name: es-certs
          #secret:
            #secretName: elasticsearch-es-http-certs-public
        - name: nginxlogs
          hostPath:
            path: /c/PATH/TO/PERSISTENT/VOLUME/nginx-data

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: nsa
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: nsa
  labels:
    k8s-app: filebeat

---

logstash.yaml

---
apiVersion: v1
kind: Service
metadata:
  namespace: nsa
  labels:
    app: logstash-nsa
  name: logstash-nsa
spec:
  ports:
    - name: "25826"
      port: 25826
      targetPort: 25826
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash-nsa
status:
  loadBalancer: {}

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: nsa
  name: logstash-configmap-nsa
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }

    filter {
      mutate { add_field => { "show" => "This data will be in the output" } }
      mutate { add_field => { "[@metadata][test1]" => "foo" } }
      mutate { add_field => { "[@metadata][test2]" => "bar" } }

      if [event][module] == "nginx-a" {
        if [fileset][name] == "access" {
          grok {
            match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
            remove_field => "message"
          }

          mutate {
            add_field => { "read_timestamp" => "%{@timestamp}" }
          }

          useragent {
            source => "[nginx][access][agent]"
            target => "[nginx][access][user_agent]"
            remove_field => "[nginx][access][agent]"
          }

          geoip {
            source => "[nginx][access][remote_ip]"
            target => "[nginx][access][geoip]"
          }
        }

        else if [fileset][name] == "error" {
          grok {
            match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
            remove_field => "message"
          }

          mutate {
            rename => { "@timestamp" => "read_timestamp" }
          }

          date {
            match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
            remove_field => "[nginx][error][time]"
          }
        }
      }
    }

    output {
      if "access" in [fileset][name] {
        elasticsearch {
          index => "access-%{[@metadata][beat]}-%{[@metadata][test1]}-%{+YYYY.MM.dd-H.m}"
          namespace => "namespace_esk"
          hosts => [ "${ES_HOSTS}" ]
          #user => "${ES_USER}"
          #password => "${ES_PASSWORD}"
          #cacert => '/etc/logstash/certificates/ca.crt'
        }
      }
      if "error" in [fileset][name] {
        elasticsearch {
          index => "error-%{[@metadata][beat]}-%{[@metadata][test2]}-%{+YYYY.MM.dd-H.m}"
          namespace => "namespace_esk"
          hosts => [ "${ES_HOSTS}" ]
          #user => "${ES_USER}"
          #password => "${ES_PASSWORD}"
          #cacert => '/etc/logstash/certificates/ca.crt'
        }
      }
    }

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: logstash-nsa
  name: logstash-nsa
  namespace: nsa
spec:
  containers:
    - image: docker.elastic.co/logstash/logstash:7.8.0
      name: logstash
      ports:
        - containerPort: 25826
        - containerPort: 5044
      env:
        - name: ES_HOSTS
          value: "https://elasticsearch-es-http:9200"
        #- name: ES_USER
          #value: "elastic"
        #- name: ES_PASSWORD
          #valueFrom:
            #secretKeyRef:
              #name: elasticsearch-es-elastic-user
              #key: elastic
      resources: {}
      volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        #- name: cert-ca
          #mountPath: "/etc/logstash/certificates"
          #readOnly: true
  restartPolicy: OnFailure
  volumes:
    - name: config-volume
      configMap:
        name: logstash-configmap-nsa
        items:
          - key: logstash.yml
            path: logstash.yml
    - name: logstash-pipeline-volume
      configMap:
        name: logstash-configmap-nsa
        items:
          - key: logstash.conf
            path: logstash.conf
    #- name: cert-ca
      #secret:
        #secretName: elasticsearch-es-http-certs-public

status: {}

elasticsearch.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: esk
spec:
  version: 7.8.0

  nodeSets:
    - name: elasticsearch
      count: 1
      config:
        node.store.allow_mmap: false
        node.master: true
        node.data: true
        node.ingest: true
        xpack.security.authc:
          anonymous:
            username: anonymous
            roles: superuser
            authz_exception: false
      podTemplate:
        metadata:
          labels:
            app: elasticsearch
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
              command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          containers:
            - name: elasticsearch
              resources:
                requests:
                  memory: 4Gi
                  cpu: 0.5
                limits:
                  memory: 4Gi
                  cpu: 1
              env:
                - name: ES_JAVA_OPTS
                  value: "-Xms2g -Xmx2g"
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            storageClassName: es-data
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi

...

With "kubectl logs" for Logstash, I get something like this:

[WARN ] 2020-08-02 19:39:20.360 [Ruby-0-Thread-5: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elasticsearch-es-http:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elasticsearch-es-http:9200/][Manticore::ResolutionFailure] elasticsearch-es-http: Name or service not known"}

And I get no error with "kubectl logs" for Filebeat.

I don't know how to manage communication between two components located in different namespaces (Logstash and ES in my case).
I commented out a few lines related to secrets in both the Filebeat and Logstash files (some of these lines are actually deleted; they are shown here for reference). I don't know whether I really need them for this training exercise (I want to keep things as simple as possible).

If anyone has an idea of how to pass the logs processed by Logstash from namespace nsa to ES in the second namespace, thanks in advance.

Guillaume.

Hi,

If Elasticsearch is running in a different namespace, I would suggest adding the namespace to the hostname: elasticsearch-es-http.esk
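
Kubernetes resolves a Service from any namespace under <service>.<namespace>.svc.cluster.local, so the short form elasticsearch-es-http.esk is reachable from nsa. Concretely, the Logstash environment entry would become:

env:
  - name: ES_HOSTS
    value: "https://elasticsearch-es-http.esk:9200"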

Beats are supported starting with ECK 1.2.0, which will manage the association for you. If you want to give it a try, the quickstart is available in the ECK documentation.
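
For illustration, a minimal sketch of such a Beat resource, adapted from the quickstart (the input config and pod template are simplified placeholders, not your exact nginx module setup):

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: nsa
spec:
  type: filebeat
  version: 7.8.0
  elasticsearchRef:
    name: elasticsearch
    namespace: esk    # ECK sets up the cross-namespace association for you
  config:
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
  daemonSet:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
        containers:
          - name: filebeat
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers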

I don't know at all how to manage the communication between two components located in different namespaces (Logstash and ES in my case).

This section of the documentation should help you set up an association manually.

Hi Michael,

Thank you very much for your message and your help.
Thanks to the first tip, I now have Filebeat and Logstash running 1/1!

There is one last problem (I think): in the Logstash logs, I now have this message:

[WARN ] 2020-08-03 11:27:11.142 [Ruby-0-Thread-5: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elasticsearch-es-http.esk:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elasticsearch-es-http.esk:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}

In my configuration file, I simply deleted the lines that refer to the certificate, because Logstash could not find it at the indicated path (the namespace is no longer the same).

Is there a way to:

  • disable the requirement for this certificate on the ES side
  • or tell Logstash where the certificate is in the other namespace? I wanted to add the field "namespace: esk" to the configuration of the cert-ca volume, but the pod description tells me that this field does not exist for a volume:

    - name: cert-ca
      secret:
        namespace: esk
        secretName: elasticsearch-es-http-certs-public

Guillaume.

A Secret can only be referenced within the same namespace. You have to copy the Secret (only the Data field, not the owner references) from the esk namespace to the desired one.
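
One way to do the copy, assuming kubectl and jq are available (a sketch; adjust the names and target namespace to yours):

kubectl get secret elasticsearch-es-http-certs-public -n esk -o json \
  | jq 'del(.metadata.ownerReferences, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp) | .metadata.namespace = "nsa"' \
  | kubectl apply -f -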

You can also disable TLS, as described in the ECK documentation. But your data will then flow over the network unencrypted, in clear text.

I tried the method without TLS: I added the http.tls block shown below under spec (I hope it is correct).

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: esk
spec:
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  version: 7.8.0
  nodeSets: [...]

Now I'm getting this error from Logstash:

[ERROR] 2020-08-03 14:11:25.662 [[main]-pipeline-manager] javapipeline - Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Manticore::UnknownException: Unsupported or unrecognized SSL message>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:74:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:332:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:261:in `health_check_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:270:in `block in healthcheck!'", "org/jruby/RubyHash.java:1415:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:266:in `healthcheck!'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:382:in `update_urls'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:82:in `update_initial_urls'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:76:in `start'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:302:in `build_pool'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:64:in `initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:105:in `create_http_client'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:101:in `build'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch.rb:274:in `build_client'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.5.1-java/lib/logstash/outputs/elasticsearch/common.rb:23:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:126:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:68:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:216:in `block in register_plugins'", "org/jruby/RubyArray.java:1809:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:215:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:519:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in `start_workers'", 
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:170:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125:in `block in start'"], "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x7bb83c0e@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:121 run>"}
[ERROR] 2020-08-03 14:11:25.674 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[INFO ] 2020-08-03 14:11:25.705 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2020-08-03 14:11:30.718 [LogStash::Runner] runner - Logstash shut down.

Here is my logstash.yaml file:

---
apiVersion: v1
kind: Service
metadata:
  namespace: nsa
  labels:
    app: logstash-nsa
  name: logstash-nsa
spec:
  ports:
    - name: "25826"
      port: 25826
      targetPort: 25826
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash-nsa
status:
  loadBalancer: {}

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: nsa
  name: logstash-configmap-nsa
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }

    filter {
      mutate { add_field => { "show" => "This data will be in the output" } }
      mutate { add_field => { "[@metadata][test1]" => "foo" } }
      mutate { add_field => { "[@metadata][test2]" => "bar" } }

      if [event][module] == "nginx-a" {
        if [fileset][name] == "access" {
          grok {
            match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
            remove_field => "message"
          }

          mutate {
            add_field => { "read_timestamp" => "%{@timestamp}" }
          }

          useragent {
            source => "[nginx][access][agent]"
            target => "[nginx][access][user_agent]"
            remove_field => "[nginx][access][agent]"
          }

          geoip {
            source => "[nginx][access][remote_ip]"
            target => "[nginx][access][geoip]"
          }
        }

        else if [fileset][name] == "error" {
          grok {
            match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
            remove_field => "message"
          }

          mutate {
            rename => { "@timestamp" => "read_timestamp" }
          }

          date {
            match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
            remove_field => "[nginx][error][time]"
          }
        }
      }
    }

    output {
      if "access" in [fileset][name] {
        elasticsearch {
          index => "access-%{[@metadata][beat]}-%{[@metadata][test1]}-%{+YYYY.MM.dd-H.m}"
          hosts => [ "${ES_HOSTS}" ]
        }
      }
      if "error" in [fileset][name] {
        elasticsearch {
          index => "error-%{[@metadata][beat]}-%{[@metadata][test2]}-%{+YYYY.MM.dd-H.m}"
          hosts => [ "${ES_HOSTS}" ]
        }
      }
    }

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: logstash-nsa
  name: logstash-nsa
  namespace: nsa
spec:
  containers:
    - image: docker.elastic.co/logstash/logstash:7.8.0
      name: logstash
      ports:
        - containerPort: 25826
        - containerPort: 5044
      env:
        - name: ES_HOSTS
          value: "https://elasticsearch-es-http.esk:9200"

      resources: {}
      volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline

  restartPolicy: OnFailure
  volumes:
    - name: config-volume
      configMap:
        name: logstash-configmap-nsa
        items:
          - key: logstash.yml
            path: logstash.yml
    - name: logstash-pipeline-volume
      configMap:
        name: logstash-configmap-nsa
        items:
          - key: logstash.conf
            path: logstash.conf
status: {}

I guess you should replace https with http if you are disabling TLS.
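
That is, the env entry in the Logstash pod would become:

env:
  - name: ES_HOSTS
    value: "http://elasticsearch-es-http.esk:9200"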

Thank you very much! It's perfect!

Everything is working as it should now. With or without TLS, I managed to retrieve the nginx logs from namespace nsa in Kibana (in namespace esk). By the way, I've created a new secret.yaml file here, in case anyone is interested:

apiVersion: v1
data:
  elastic: small_BLABLA
kind: Secret
metadata:
  labels:
    common.k8s.elastic.co/type: elasticsearch
    eck.k8s.elastic.co/credentials: "true"
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch
  name: elasticsearch-es-elastic-user
  namespace: nsa
type: Opaque
---

apiVersion: v1
data:
  ca.crt: big_ca_BLABLA
  tls.crt: big_tls_BLABLA
kind: Secret
metadata:
  labels:
    common.k8s.elastic.co/type: elasticsearch
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch
  name: elasticsearch-es-http-certs-public
  namespace: nsa
type: Opaque

I might have one last question:

  • Is there a way to automate the retrieval and transformation of these secrets (deleting some fields, modifying the namespace field, and then pushing them to the other namespace)? For now I have done it by hand, but in real life that is of course not a viable solution.

Thanks again for all this help!

I agree that maintaining this Secret manually is not ideal. Maybe this can be achieved with a CronJob. Another solution might be to rely on https://github.com/IBM-Cloud/kube-samples/tree/master/secret-sync-operator, but I didn't test it and can't guarantee that it meets your needs.
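
For the CronJob route, a rough sketch (everything named here is illustrative: the secret-sync ServiceAccount and its RBAC to read Secrets in esk and create them in nsa are assumed to exist, and the image is a placeholder for anything that ships kubectl and jq):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sync-es-certs
  namespace: nsa
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: secret-sync     # assumed to exist with suitable RBAC
          restartPolicy: OnFailure
          containers:
            - name: sync
              image: kubectl-and-jq:placeholder   # any image providing kubectl + jq
              command:
                - /bin/sh
                - -c
                - |
                  kubectl get secret elasticsearch-es-http-certs-public -n esk -o json \
                    | jq 'del(.metadata.ownerReferences, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp) | .metadata.namespace = "nsa"' \
                    | kubectl apply -f -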

Hello Michael,

I'll take a look at all this. Thank you!