Monitoring cluster can't find production cluster after being recreated

Hi, this is my first time posting a question here, so please let me know if any additional info is needed.

Our company is planning to migrate our ELK cluster to ECK on GKE, so I'm building two ECK clusters: one for production and one for monitoring.

I followed this guide and it worked great: Stack Monitoring | Elastic Cloud on Kubernetes [master] | Elastic. I could see both clusters' status on the monitoring cluster (I set the monitoring cluster to monitor itself).

However, after I deleted the monitoring cluster for an unrelated reason and created it again, the production cluster's status is gone. The monitoring cluster itself works fine and still shows its own status.
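
For reference, the delete and recreate were just plain kubectl operations against the manifest below; es-monitoring.yaml is simply what I happened to name the file locally:

kubectl delete -f es-monitoring.yaml
# ...some time later...
kubectl apply -f es-monitoring.yaml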

My question is: why can't my monitoring cluster find the production cluster after being recreated, and how can I fix that?
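
In case it helps with diagnosing, I can share the output of these (resource names and namespaces match the manifests below):

kubectl get elasticsearch -A
kubectl describe elasticsearch elasticsearch -n elastic-stack
kubectl describe elasticsearch es-monitoring -n elastic-monitoring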

Below is my monitoring YAML file:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-monitoring
  namespace: elastic-monitoring
  labels:
    app: elasticsearch
spec:
  version: 8.11.0
  monitoring:
    # the monitoring cluster references itself, so it also self-monitors
    metrics:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring
    logs:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring
  image: gcr.io/feedtree-cc/elasticsearch:latest
  nodeSets:
  - name: master-nodes
    count: 1
    podTemplate:
      spec:
        initContainers:
          - name: sysctl
            securityContext:
              privileged: true
              runAsUser: 0
            command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        nodeSelector:
          cloud.google.com/gke-nodepool: es-test-pool
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 4Gi
            limits:
              memory: 4Gi
    config:
      node.roles: ["master", "data", "ingest", "remote_cluster_client"]
  http:
    service:
      spec:
        type: LoadBalancer
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-monitoring
  namespace: elastic-monitoring
  labels:
    app: kibana
spec:
  version: 8.11.0
  monitoring:
    metrics:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring 
    logs:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring
  count: 1
  elasticsearchRef:
    name: es-monitoring
  http:
    service:
      spec:
        type: LoadBalancer
  config:
    monitoring.ui.ccs.enabled: false
---
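
To double-check whether any production documents reach the monitoring cluster at all, I can also query the monitoring indices directly; the secret and service names below are the ones ECK generates for a cluster named es-monitoring:

PASSWORD=$(kubectl get secret es-monitoring-es-elastic-user -n elastic-monitoring \
  -o go-template='{{.data.elastic | base64decode}}')
kubectl port-forward service/es-monitoring-es-http 9200 -n elastic-monitoring &
curl -u "elastic:$PASSWORD" -k 'https://localhost:9200/_cat/indices/.monitoring-*?v'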

Here is my production YAML file:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elastic-stack
  labels:
    app: elasticsearch
spec:
  version: 8.11.0
  monitoring:
    # ship this cluster's metrics and logs to the central monitoring cluster
    metrics:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring
    logs:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring
  image: gcr.io/feedtree-cc/elasticsearch:latest
  nodeSets:
  - name: master-nodes
    count: 1
    podTemplate:
      spec:
        initContainers:
          - name: sysctl
            securityContext:
              privileged: true
              runAsUser: 0
            command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        nodeSelector:
          cloud.google.com/gke-nodepool: es-test-pool
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 4Gi
            limits:
              memory: 4Gi
          volumeMounts:
          - name: analysis-ik
            mountPath: /usr/share/elasticsearch/config/analysis-ik/IKAnalyzer.cfg.xml
            subPath: IKAnalyzer.cfg.xml
        volumes:
        - name: analysis-ik
          configMap:
            name: analysis-ik
            items:
              - key: IKAnalyzer.cfg.xml
                path: IKAnalyzer.cfg.xml
    config:
      node.roles: ["master"]
  - name: data-nodes
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 300Gi
        storageClassName: standard
    podTemplate:
      spec:
        initContainers:
          - name: sysctl
            securityContext:
              privileged: true
              runAsUser: 0
            command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        nodeSelector:
          cloud.google.com/gke-nodepool: es-data-nodes
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 4Gi
            limits:
              memory: 4Gi
          volumeMounts:
          - name: analysis-ik
            mountPath: /usr/share/elasticsearch/config/analysis-ik/IKAnalyzer.cfg.xml
            subPath: IKAnalyzer.cfg.xml
        volumes:
        - name: analysis-ik
          configMap:
            name: analysis-ik
            items:
              - key: IKAnalyzer.cfg.xml
                path: IKAnalyzer.cfg.xml
    config:
      node.roles: ["data"]
  - name: ingest-nodes
    count: 1
    podTemplate:
      spec:
        initContainers:
          - name: sysctl
            securityContext:
              privileged: true
              runAsUser: 0
            command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        nodeSelector:
          cloud.google.com/gke-nodepool: es-test-pool
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 4Gi
            limits:
              memory: 4Gi
          volumeMounts:
          - name: analysis-ik
            mountPath: /usr/share/elasticsearch/config/analysis-ik/IKAnalyzer.cfg.xml
            subPath: IKAnalyzer.cfg.xml
        volumes:
        - name: analysis-ik
          configMap:
            name: analysis-ik
            items:
              - key: IKAnalyzer.cfg.xml
                path: IKAnalyzer.cfg.xml
    config:
      node.roles: ["ingest"]
  http:
    service:
      spec:
        type: LoadBalancer
  secureSettings:
  - secretName: gcp-key
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: elastic-stack
  labels:
    app: kibana
spec:
  version: 8.11.0
  monitoring:
    metrics:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring 
    logs:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      spec:
        type: LoadBalancer
---
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
  name: logstash
  namespace: elastic-stack
  labels:
    app: logstash
spec:
  version: 8.11.0
  monitoring:
    metrics:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring 
    logs:
      elasticsearchRefs:
      - name: es-monitoring
        namespace: elastic-monitoring
  image: gcr.io/feedtree-cc/logstash
  count: 1
  podTemplate:
    spec:
      containers:
      - name: logstash
        env:
        - name: ES_HOSTS
          value: "https://elasticsearch-es-http:9200"
        - name: ES_USER
          value: "elastic"
        - name: ES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-es-elastic-user
              key: elastic
        volumeMounts:
        - name: logstash-pipelines
          mountPath: /usr/share/logstash/config/pipelines.yml
          subPath: pipelines.yml
        - name: pipeline-test
          mountPath: /usr/share/logstash/pipeline/test.cfg
          subPath: test.cfg
        - name: gcp-key
          mountPath: /etc/logstash/gcs.client.default.credentials_file
          subPath: gcs.client.default.credentials_file
        - name: es-ca-cert
          mountPath: /etc/logstash/certificates
          readOnly: true
      volumes:
      - name: logstash-pipelines
        configMap:
          name: logstash-pipelines
          items:
            - key: pipelines.yml
              path: pipelines.yml
      - name: pipeline-test
        configMap:
          name: pipeline-test
          items:
            - key: test.cfg
              path: test.cfg
      - name: gcp-key
        secret:
          secretName: gcp-key
          items:
            - key: gcs.client.default.credentials_file
              path: gcs.client.default.credentials_file
      - name: es-ca-cert
        secret:
          secretName: elasticsearch-es-http-certs-public
---
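
Lastly, my guess is that the association between the production cluster and the recreated monitoring cluster never re-established; if the operator logs would show that, I can grab them like this (assuming the default operator install in the elastic-system namespace):

kubectl logs statefulset/elastic-operator -n elastic-system | grep -i monitoring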