Cannot modify my Fleet Server and Agent configuration on ECK

I deployed eck-operator on my microk8s system.

Then I used eck-operator to deploy Elasticsearch and Kibana.

Next, I wanted to deploy Fleet and its agent. I added the configuration to Kibana (as shown in Figure 1).

After adding the configuration, I deployed Fleet and its agent. When I saw an error on the Kibana Fleet page, I realized I had misspelled the Fleet host in the configuration shown in Figure 1: it should be fleet-server-agent-http, but I wrote fleet-server instead.

So I deleted the Fleet and agent deployments, modified the Kibana Fleet configuration (see the Kibana manifest below), and applied it to Kibana.

However, when I re-deployed Fleet and its agent, I found that they were still using the wrong Fleet address.

Here are my deployment files:

Kibana:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-cluster
  namespace: elasticsearch
spec:
  version: 8.19.9
  count: 1
  elasticsearchRef:
    name: elasticsearch-cluster
  config:
    xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch-cluster-es-http.elasticsearch.svc:9200"]
    xpack.fleet.agents.fleet_server.hosts: ["https://fleet-server-agent-http.elasticsearch.svc:8220"]
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        namespace: elasticsearch
        is_managed: true
        monitoring_enabled:
          - logs
        unenroll_timeout: 900
        package_policies:
        - name: fleet_server-1
          id: fleet_server-1
          package:
            name: fleet_server
      - name: Elastic Agent on ECK policy
        id: eck-agent
        namespace: elasticsearch
        is_managed: true
        monitoring_enabled:
          - logs
        unenroll_timeout: 900
        package_policies:
          - name: system-1
            id: system-1
            package:
              name: system
  podTemplate:
    spec:
      imagePullSecrets:
      - name: harbor
      containers:
      - name: kibana
        resources:
          limits:
            memory: 4Gi
            cpu: 2
          requests:
            memory: 2Gi
            cpu: 1

Fleet:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
  namespace: elasticsearch
spec:
  version: 8.19.9
  elasticsearchRefs:
  - name: elasticsearch-cluster
  kibanaRef:
    name: kibana-cluster
  mode: fleet
  fleetServerEnabled: true
  policyID: eck-fleet-server
  deployment:
    replicas: 1
    podTemplate:
      spec:
        serviceAccountName: elastic-agent
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
        imagePullSecrets:
          - name: harbor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - pods
  - nodes
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups: ["coordination.k8s.io"]
  resources:
  - leases
  verbs:
  - get
  - create
  - update
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs:
  - list
  - watch
- apiGroups: ["batch"]
  resources:
  - jobs
  verbs:
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: elasticsearch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: elasticsearch
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io

Agent:

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent
  namespace: elasticsearch
spec:
  version: 8.19.9
  kibanaRef:
    name: kibana-cluster
  fleetServerRef:
    name: fleet-server
  mode: fleet
  policyID: eck-agent
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: elastic-agent
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
        imagePullSecrets:
          - name: harbor
        volumes:
        - name: agent-data
          emptyDir: {}

Can anyone help me?

Hi @NiceAgent

It is a holiday week / weekend so it may take a little bit longer for people to reply

But you also haven't provided any details on what commands you've run or how you validated that the configuration did not get applied, so it's hard for us to help.

How have you inspected the configuration?

Did you actually delete the configuration and then reapply the new configuration? Or did you just try to apply the new configuration on top?

Did you describe the pod?

What steps did you follow?
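For example, a rough sketch of what I would run to see the configuration that actually landed (the secret name, config key, and pod label here are assumed from the ECK naming conventions and your manifests, so adjust if yours differ):

# see what Kibana actually rendered into its config secret
kubectl get secret kibana-cluster-kb-config -n elasticsearch \
  -o go-template='{{index .data "kibana.yml" | base64decode}}' | grep -A2 fleet_server

# see what the operator injected into the Fleet Server pod
kubectl describe pod -l agent.k8s.elastic.co/name=fleet-server -n elasticsearch | grep FLEET_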

It is a holiday week / weekend so it may take a little bit longer for people to reply

Oh! I'm really very sorry! I forgot that it's a holiday week over there. :joy: I was getting a little anxious because I had been trying for a long time without success. I will wait patiently for replies.

But you also haven't provided any details on what commands you've run or how you validated that the configuration did not get applied, so it's hard for us to help.

I will do my best to provide all the information I can obtain. Please let me know if anything is missing, and I will add it as soon as possible.

This is my workspace:

app@dev01:~/sr590/k8s$ tree eck elasticsearch/
eck
└── eck-operator.yaml
elasticsearch/
├── apm-server.yaml
├── cluster.yaml
├── elastic-agent.yaml
├── filebeat.yaml.disabled
├── fleet-server.yaml
├── kibana-nodeport.yaml
├── kibana.yaml
└── namespace.yaml

0 directories, 8 files

When I want to clear a previous deployment, I execute the following commands:

kubectl delete -f elasticsearch/kibana.yaml
kubectl delete -f elasticsearch/fleet-server.yaml
kubectl delete -f elasticsearch/elastic-agent.yaml
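To double-check that the old resources are really gone before re-applying, something like this should work (a quick sketch; the singular kubectl resource names for the ECK CRDs are my assumption):

# verify the ECK custom resources and their generated secrets are gone
kubectl get kibana,agent -n elasticsearch
kubectl get secrets -n elasticsearch | grep -E 'kibana-cluster|fleet-server|elastic-agent'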

Below is the log of my re-execution.

app@dev01:~/sr590/k8s$ kubectl delete -f elasticsearch/kibana.yaml
kibana.kibana.k8s.elastic.co "kibana-cluster" deleted
app@dev01:~/sr590/k8s$ kubectl delete -f elasticsearch/fleet-server.yaml
agent.agent.k8s.elastic.co "fleet-server" deleted
clusterrole.rbac.authorization.k8s.io "elastic-agent" deleted
serviceaccount "elastic-agent" deleted
clusterrolebinding.rbac.authorization.k8s.io "elastic-agent" deleted
app@dev01:~/sr590/k8s$ kubectl delete -f elasticsearch/elastic-agent.yaml
agent.agent.k8s.elastic.co "elastic-agent" deleted

Let's check the pods:

app@dev01:~/sr590/k8s$ kubectl get pods -n elasticsearch
NAME                                    READY   STATUS    RESTARTS   AGE
apm-server-apm-server-9f487d649-p6s5j   1/1     Running   0          71s
elasticsearch-cluster-es-default-0      1/1     Running   0          3d

First I apply Kibana:

app@dev01:~/sr590/k8s$ kubectl apply -f elasticsearch/kibana.yaml
kibana.kibana.k8s.elastic.co/kibana-cluster created

A few seconds later, describe the Kibana pod:

app@dev01:~/sr590/k8s$ kubectl describe pod kibana-cluster-kb-575dbdb548-rc6kd -n elasticsearch
Name:             kibana-cluster-kb-575dbdb548-rc6kd
Namespace:        elasticsearch
Priority:         0
Service Account:  default
Node:             server/43.xxx.xxx.xxx
Start Time:       Sun, 28 Dec 2025 06:21:10 +0000
Labels:           common.k8s.elastic.co/type=kibana
                  kibana.k8s.elastic.co/name=kibana-cluster
                  kibana.k8s.elastic.co/version=8.19.9
                  pod-template-hash=575dbdb548
Annotations:      cni.projectcalico.org/containerID: e56bf3b7137e666be8e7abb9101726511cb01d8abdcfe22f284b23995cc277b3
                  cni.projectcalico.org/podIP: 10.1.206.222/32
                  cni.projectcalico.org/podIPs: 10.1.206.222/32
                  co.elastic.logs/module: kibana
                  kibana.k8s.elastic.co/config-hash: 360072754
Status:           Running
IP:               10.1.206.222
IPs:
  IP:           10.1.206.222
Controlled By:  ReplicaSet/kibana-cluster-kb-575dbdb548
Init Containers:
  elastic-internal-init-config:
    Container ID:  containerd://d5f6a84a63118fe654af5ccd759df42d5019104d721fc5866833057cf132683a
    Image:         harbor.xxx.com/elastic/kibana/kibana:8.19.9
    Image ID:      harbor.xxx.com/elastic/kibana/kibana@sha256:9778bb69b2c90da5cf2c4284c066ba030e7d419a6d6e3450b7b4c93a56c93a94
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/env
      bash
      -c
      #!/usr/bin/env bash
      set -eux

      init_config_initialized_flag=/mnt/elastic-internal/kibana-config-local/elastic-internal-init-config.ok

      if [[ -f "${init_config_initialized_flag}" ]]; then
          echo "Kibana configuration already initialized."
        exit 0
      fi

      echo "Setup Kibana configuration"

      ln -sf /mnt/elastic-internal/kibana-config/* /mnt/elastic-internal/kibana-config-local/

      touch "${init_config_initialized_flag}"
      echo "Kibana configuration successfully prepared."

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 28 Dec 2025 06:21:10 +0000
      Finished:     Sun, 28 Dec 2025 06:21:10 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_IP:      (v1:status.podIP)
      POD_NAME:   kibana-cluster-kb-575dbdb548-rc6kd (v1:metadata.name)
      NODE_NAME:   (v1:spec.nodeName)
      NAMESPACE:  elasticsearch (v1:metadata.namespace)
    Mounts:
      /mnt/elastic-internal/http-certs from elastic-internal-http-certificates (ro)
      /mnt/elastic-internal/kibana-config from elastic-internal-kibana-config (ro)
      /mnt/elastic-internal/kibana-config-local from elastic-internal-kibana-config-local (rw)
      /usr/share/kibana/config/elasticsearch-certs from elasticsearch-certs (ro)
      /usr/share/kibana/data from kibana-data (rw)
Containers:
  kibana:
    Container ID:   containerd://712da1c9c9cb65d99974203bf4ecefa4e7e1312122e3ac3a6046978608d0f987
    Image:          harbor.xxx.com/elastic/kibana/kibana:8.19.9
    Image ID:       harbor.xxx.com/elastic/kibana/kibana@sha256:9778bb69b2c90da5cf2c4284c066ba030e7d419a6d6e3450b7b4c93a56c93a94
    Port:           5601/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 28 Dec 2025 06:21:10 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:        1
      memory:     2Gi
    Readiness:    http-get https://:5601/login delay=10s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /mnt/elastic-internal/http-certs from elastic-internal-http-certificates (ro)
      /mnt/elastic-internal/kibana-config from elastic-internal-kibana-config (ro)
      /usr/share/kibana/config from elastic-internal-kibana-config-local (rw)
      /usr/share/kibana/config/elasticsearch-certs from elasticsearch-certs (ro)
      /usr/share/kibana/data from kibana-data (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  elastic-internal-http-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kibana-cluster-kb-http-certs-internal
    Optional:    false
  elastic-internal-kibana-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kibana-cluster-kb-config
    Optional:    false
  elastic-internal-kibana-config-local:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  elasticsearch-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kibana-cluster-kb-es-ca
    Optional:    false
  kibana-data:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   <unset>
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

Then apply the fleet-server:

app@dev01:~/sr590/k8s$ kubectl apply -f elasticsearch/fleet-server.yaml
agent.agent.k8s.elastic.co/fleet-server created
clusterrole.rbac.authorization.k8s.io/elastic-agent created
serviceaccount/elastic-agent created
clusterrolebinding.rbac.authorization.k8s.io/elastic-agent created

A few more seconds later, check it:

app@dev01:~/sr590/k8s$ kubectl describe pods fleet-server-agent-7cdfb65dc4-45bx2 -n elasticsearch
Name:             fleet-server-agent-7cdfb65dc4-45bx2
Namespace:        elasticsearch
Priority:         0
Service Account:  elastic-agent
Node:             server/43.xxx.xxx.xxx
Start Time:       Sun, 28 Dec 2025 06:37:35 +0000
Labels:           agent.k8s.elastic.co/name=fleet-server
                  agent.k8s.elastic.co/version=8.19.9
                  common.k8s.elastic.co/type=agent
                  pod-template-hash=7cdfb65dc4
Annotations:      agent.k8s.elastic.co/config-hash: 3625667547
                  cni.projectcalico.org/containerID: 369d84500e0e3e80c70a06c56a53745e20d3c74914d2f40496df3e8f33a69549
                  cni.projectcalico.org/podIP: 10.1.206.220/32
                  cni.projectcalico.org/podIPs: 10.1.206.220/32
Status:           Running
IP:               10.1.206.220
IPs:
  IP:           10.1.206.220
Controlled By:  ReplicaSet/fleet-server-agent-7cdfb65dc4
Containers:
  agent:
    Container ID:  containerd://3ab7988cc30faf4b97e31c2d37e63cc45e9e0923aca3fed65062c204f8123b59
    Image:         harbor.xxx.com/elastic/beats/elastic-agent:8.19.9
    Image ID:      harbor.xxx.com/elastic/beats/elastic-agent@sha256:587dda60190cbad6602ff32462126521912879f5a2ef7c5ec09fb525ab000c98
    Port:          8220/TCP
    Host Port:     0/TCP
    Command:
      /usr/bin/env
      bash
      -c
      #!/usr/bin/env bash
      set -e
      if [[ -f /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs/ca.crt ]]; then
        if [[ -f /usr/bin/update-ca-trust ]]; then
          cp /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs/ca.crt /etc/pki/ca-trust/source/anchors/
          /usr/bin/update-ca-trust
        elif [[ -f /usr/sbin/update-ca-certificates ]]; then
          cp /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs/ca.crt /usr/local/share/ca-certificates/
          /usr/sbin/update-ca-certificates
        fi
      fi
      /usr/bin/tini -- /usr/local/bin/docker-entrypoint -e

    State:          Running
      Started:      Sun, 28 Dec 2025 06:37:36 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  1Gi
    Requests:
      cpu:     200m
      memory:  1Gi
    Environment:
      FLEET_CA:                         /usr/share/fleet-server/config/http-certs/ca.crt
      FLEET_ENROLL:                     true
      FLEET_ENROLLMENT_TOKEN:           <set to the key 'FLEET_ENROLLMENT_TOKEN' in secret 'fleet-server-agent-envvars'>  Optional: false
      FLEET_SERVER_CERT:                /usr/share/fleet-server/config/http-certs/tls.crt
      FLEET_SERVER_CERT_KEY:            /usr/share/fleet-server/config/http-certs/tls.key
      FLEET_SERVER_ELASTICSEARCH_CA:    /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs/ca.crt
      FLEET_SERVER_ELASTICSEARCH_HOST:  https://elasticsearch-cluster-es-http.elasticsearch.svc:9200
      FLEET_SERVER_ENABLE:              true
      FLEET_SERVER_POLICY_ID:           eck-fleet-server
      FLEET_SERVER_SERVICE_TOKEN:       AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL2VsYXN0aWNzZWFyY2hfZmxlZXQtc2VydmVyX2Y1NzVmZGJiLTk2MDUtNDY0Yi1iNTM0LWJlMTY5YzQyYjg2NjptOUZvUUVPZEJlYjE0MkVhRHlMdFNnR2lvM0tZQndyUjAwVnphWlR2MDhuZWZvYnB5blp3QVpTb3hFOXpwV2JM
      FLEET_URL:                        https://fleet-server-agent-http.elasticsearch.svc:8220
      CONFIG_PATH:                      /usr/share/elastic-agent
      NODE_NAME:                         (v1:spec.nodeName)
    Mounts:
      /etc/agent.yml from config (ro,path="agent.yml")
      /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs from elasticsearch-certs (ro)
      /usr/share/elastic-agent/state from agent-data (rw)
      /usr/share/fleet-server/config/http-certs from fleet-certs (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n28qx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  agent-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/elastic-agent/elasticsearch/fleet-server/state
    HostPathType:  DirectoryOrCreate
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  fleet-server-agent-config
    Optional:    false
  elasticsearch-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  fleet-server-agent-es-elasticsearch-elasticsearch-cluster-ca
    Optional:    false
  elasticsearch-certs-0:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  fleet-server-agent-es-elasticsearch-elasticsearch-cluster-ca
    Optional:    false
  fleet-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  fleet-server-agent-http-certs-internal
    Optional:    false
  kube-api-access-n28qx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  17s   default-scheduler  Successfully assigned elasticsearch/fleet-server-agent-7cdfb65dc4-45bx2 to server
  Normal  Pulled     17s   kubelet            Container image "harbor.xxx.com/elastic/beats/elastic-agent:8.19.9" already present on machine
  Normal  Created    17s   kubelet            Created container: agent
  Normal  Started    17s   kubelet            Started container agent

There's too much content, I'll continue to add more in the next post.

Apply agent:

app@dev01:~/sr590/k8s$ kubectl apply -f elasticsearch/elastic-agent.yaml
agent.agent.k8s.elastic.co/elastic-agent created

Describe the agent:

app@dev01:~/sr590/k8s$ kubectl describe pod elastic-agent-agent-z94s2 -n elasticsearch
Name:             elastic-agent-agent-z94s2
Namespace:        elasticsearch
Priority:         0
Service Account:  elastic-agent
Node:             server/43.xxx.xxx.xxx
Start Time:       Sun, 28 Dec 2025 06:44:00 +0000
Labels:           agent.k8s.elastic.co/name=elastic-agent
                  agent.k8s.elastic.co/version=8.19.9
                  common.k8s.elastic.co/type=agent
                  controller-revision-hash=7bc5bd77df
                  pod-template-generation=1
Annotations:      agent.k8s.elastic.co/config-hash: 1135966440
                  cni.projectcalico.org/containerID: ad9b8c6792719e9c4ea66cb3eb2e097c8876e97714b24752d1f5838d0c091207
                  cni.projectcalico.org/podIP: 10.1.206.224/32
                  cni.projectcalico.org/podIPs: 10.1.206.224/32
Status:           Running
IP:               10.1.206.224
IPs:
  IP:           10.1.206.224
Controlled By:  DaemonSet/elastic-agent-agent
Containers:
  agent:
    Container ID:  containerd://8bdca147a68e2f2a65d781759e7bc6a5980ea3b7c72aeb0c99d59da9218024c2
    Image:         harbor.xxx.com/elastic/beats/elastic-agent:8.19.9
    Image ID:      harbor.xxx.com/elastic/beats/elastic-agent@sha256:587dda60190cbad6602ff32462126521912879f5a2ef7c5ec09fb525ab000c98
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/env
      bash
      -c
      #!/usr/bin/env bash
      set -e
      if [[ -f /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs/ca.crt ]]; then
        if [[ -f /usr/bin/update-ca-trust ]]; then
          cp /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs/ca.crt /etc/pki/ca-trust/source/anchors/
          /usr/bin/update-ca-trust
        elif [[ -f /usr/sbin/update-ca-certificates ]]; then
          cp /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs/ca.crt /usr/local/share/ca-certificates/
          /usr/sbin/update-ca-certificates
        fi
      fi
      /usr/bin/tini -- /usr/local/bin/docker-entrypoint -e

    State:          Running
      Started:      Sun, 28 Dec 2025 06:44:00 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  1Gi
    Requests:
      cpu:     200m
      memory:  1Gi
    Environment:
      FLEET_CA:                /mnt/elastic-internal/fleetserver-association/elasticsearch/fleet-server/certs/ca.crt
      FLEET_ENROLL:            true
      FLEET_ENROLLMENT_TOKEN:  <set to the key 'FLEET_ENROLLMENT_TOKEN' in secret 'elastic-agent-agent-envvars'>  Optional: false
      FLEET_URL:               https://fleet-server-agent-http.elasticsearch.svc:8220
      CONFIG_PATH:             /usr/share/elastic-agent
      NODE_NAME:                (v1:spec.nodeName)
    Mounts:
      /etc/agent.yml from config (ro,path="agent.yml")
      /mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs from elasticsearch-certs (ro)
      /mnt/elastic-internal/fleetserver-association/elasticsearch/fleet-server/certs from fleetserver-certs-1 (ro)
      /usr/share/elastic-agent/state from agent-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mp9w8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  agent-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elastic-agent-agent-config
    Optional:    false
  elasticsearch-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  fleet-server-agent-es-elasticsearch-elasticsearch-cluster-ca
    Optional:    false
  fleetserver-certs-1:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elastic-agent-agent-fleetserver-ca
    Optional:    false
  kube-api-access-mp9w8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  31s   default-scheduler  Successfully assigned elasticsearch/elastic-agent-agent-z94s2 to server
  Normal  Pulled     31s   kubelet            Container image "harbor.xxx.com/elastic/beats/elastic-agent:8.19.9" already present on machine
  Normal  Created    31s   kubelet            Created container: agent
  Normal  Started    31s   kubelet            Started container agent

Fleet Server and Agent logs: (my account level does not allow me to add links, but the logs are too large to paste, so I uploaded them to Dropbox; the link is below)

<the domain of dropbox.com>/scl/fo/donwkkvwupncg2e7zipic/ABQvBPn6LYxCflx-3j7bRQ8?rlkey=yd3vhfy4ejxqc6c1n5xit7pa8&st=3sn6l6ir&dl=0

And the agent log says:

Failed to dispatch action id "policy:eck-agent:1" of type "POLICY_CHANGE", error: validating Fleet client config: validating fleet client config: fail to communicate with Fleet Server API client hosts: all hosts failed: requester 0/1 to host https://fleet-server.elasticsearch.svc:9200/ errored: Get "https://fleet-server.elasticsearch.svc:9200/api/status?": lookup fleet-server.elasticsearch.svc on 10.152.183.10:53: no such host"

But as you can see from my kibana.yaml in the root post, fleet_server.hosts has already been corrected.

It should be "fleet-server-agent-http.elasticsearch.svc:8220" … I think …

After you have the Fleet Server loaded, go to Kibana - Fleet - Settings and check the Fleet Server and Elasticsearch hosts and ports. Make sure they are correct; they get pulled down with the policies.
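If you prefer checking from the command line instead of the UI, something roughly like this should show what Fleet will hand to the agents (service and secret names assumed from the ECK conventions and your manifests; the API paths are as I recall them for 8.x):

# fetch the elastic superuser password generated by ECK
PW=$(kubectl get secret elasticsearch-cluster-es-elastic-user -n elasticsearch \
  -o go-template='{{.data.elastic | base64decode}}')

# port-forward Kibana and ask the Fleet API for the stored hosts and outputs
kubectl port-forward -n elasticsearch svc/kibana-cluster-kb-http 5601:5601 &
curl -sk -u "elastic:$PW" https://localhost:5601/api/fleet/fleet_server_hosts
curl -sk -u "elastic:$PW" https://localhost:5601/api/fleet/outputs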

Thank you for your reply. Here are my settings:

Another setting:

Oh, I got it! The agents pulled down policies with the same IDs but the legacy versions. I will try to set up the agents to pull the latest policies tomorrow.

:smiling_face_with_tear: I tried to use the latest policies by changing the policy IDs, but it still does not work, and the fleet-server even failed to start.

First

kubectl delete -f elasticsearch/elastic-agent.yaml -f elasticsearch/kibana.yaml -f elasticsearch/fleet-server.yaml

Second

Change the policy IDs. Now they are suffixed with "-2".

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-cluster
  namespace: elasticsearch
spec:
  version: 8.19.9
  count: 1
  elasticsearchRef:
    name: elasticsearch-cluster
  config:
    xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch-cluster-es-http.elasticsearch.svc:9200"]
    xpack.fleet.agents.fleet_server.hosts: ["https://fleet-server-agent-http.elasticsearch.svc:8220"]
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy 2 # <---- Change is here
        id: eck-fleet-server-2             # <---- Change is here
        namespace: elasticsearch
        is_managed: true
        monitoring_enabled:
          - logs
        unenroll_timeout: 900
        package_policies:
        - name: fleet_server-2 # <---- Change is here
          id: fleet_server-2   # <---- Change is here
          package:
            name: fleet_server
      - name: Elastic Agent on ECK policy 2 # <---- Change is here
        id: eck-agent-2                     # <---- Change is here
        namespace: elasticsearch
        is_managed: true
        monitoring_enabled:
          - logs
        unenroll_timeout: 900
        package_policies:
          - name: system-2 # <---- Change is here
            id: system-2   # <---- Change is here
            package:
              name: system
  podTemplate:
    spec:
      imagePullSecrets:
      - name: harbor
      containers:
      - name: kibana
        resources:
          limits:
            memory: 4Gi
            cpu: 2
          requests:
            memory: 2Gi
            cpu: 1

apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
  namespace: elasticsearch
spec:
  version: 8.19.9
  elasticsearchRefs:
  - name: elasticsearch-cluster
  kibanaRef:
    name: kibana-cluster
  mode: fleet
  fleetServerEnabled: true
  policyID: eck-fleet-server-2 # <---- Change is here
  deployment:
    replicas: 1
    podTemplate:
      spec:
        serviceAccountName: elastic-agent
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
        imagePullSecrets:
          - name: harbor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-agent
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - pods
  - nodes
  - namespaces
  verbs:
  - get
  - watch
  - list
- apiGroups: ["coordination.k8s.io"]
  resources:
  - leases
  verbs:
  - get
  - create
  - update
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs:
  - list
  - watch
- apiGroups: ["batch"]
  resources:
  - jobs
  verbs:
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-agent
  namespace: elasticsearch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-agent
subjects:
- kind: ServiceAccount
  name: elastic-agent
  namespace: elasticsearch
roleRef:
  kind: ClusterRole
  name: elastic-agent
  apiGroup: rbac.authorization.k8s.io

Then

# Exec first
kubectl apply -f elasticsearch/kibana.yaml
# Exec second
kubectl apply -f elasticsearch/fleet-server.yaml

Fleet server log ends with:

Error: fleet-server failed: timed out waiting for Fleet Server to start after 2m0s
For help, please see our troubleshooting guide at https://www.elastic.co/guide/en/fleet/8.19/fleet-troubleshooting.html
Error: enrollment failed: exit status 1
For help, please see our troubleshooting guide at https://www.elastic.co/guide/en/fleet/8.19/fleet-troubleshooting.html

Full log is here:

# Please replace "_dropdown_" with the Dropbox domain
_dropdown_/scl/fi/l90t6efu0mgq1u7g2ydt9/fleet-2.log?rlkey=149vh2he7g4d6s3q6hevu8on4&st=v63gnlyk&dl=0

If you look at the logs and search for error or warn, it looks like your Elasticsearch output is not set correctly:

error.message":"dial tcp [::1]:9200: connect: connection refused"...

It looks like it is trying to reach tcp [::1]:9200, i.e. "localhost:9200".

Same question as for the Fleet host: is the Elasticsearch host set correctly in Kibana - Fleet - Settings?

{"log.level":"error","@timestamp":"2025-12-29T05:08:50.372Z","message":"failed to fetch elasticsearch version","component":{"binary":"fleet-server","dataset":"elastic_agent.fleet_server","id":"fleet-server-default","type":"fleet-server"},"log":{"source":"fleet-server-default"},"ecs.version":"1.6.0","service.name":"fleet-server","service.type":"fleet-server","error.message":"dial tcp [::1]:9200: connect: connection refused","ecs.version":"1.6.0"}

{"log.level":"warn","@timestamp":"2025-12-29T05:08:50.372Z","message":"Failed Elasticsearch output configuration test, using bootstrap values.","component":{"binary":"fleet-server","dataset":"elastic_agent.fleet_server","id":"fleet-server-default","type":"fleet-server"},"log":{"source":"fleet-server-default"},"ecs.version":"1.6.0","service.name":"fleet-server","service.type":"fleet-server","error.message":"dial tcp [::1]:9200: connect: connection refused","output":{"Elasticsearch":{"Headers":null,"Hosts":["localhost:9200"],"MaxConnPerHost":128,"MaxContentLength":104857600,"MaxRetries":3,"Path":"","Protocol":"https","ProxyDisable":false,"ProxyHeaders":{},"ProxyURL":"","ServiceToken":"[redacted]","ServiceTokenPath":"","TLS":{"CASha256":null,"CATrustedFingerprint":"","CAs":["/mnt/elastic-internal/elasticsearch-association/elasticsearch/elasticsearch-cluster/certs/ca.crt"],"Certificate":{"Certificate":"","Key":"","Passphrase":"","PassphrasePath":""},"CipherSuites":null,"CurveTypes":null,"Enabled":null,"Renegotiation":"never","VerificationMode":"full","Versions":null},"Timeout":90000000000},"Extra":null},"ecs.version":"1.6.0"}

This warning log is explicit: "Hosts":["localhost:9200"]

{"log.level":"warn","@timestamp":"2025-12-29T05:08:50.372Z",
"message":"Failed Elasticsearch output configuration test, using bootstrap values.",
"component":{"binary":"fleet-server","dataset":"elastic_agent.fleet_server","id":"fleet-server-default","type":"fleet-server"},
"log":{"source":"fleet-server-default"},"ecs.version":"1.6.0","service.name":"fleet-server","service.type":"fleet-server",
"error.message":"dial tcp [::1]:9200: connect: connection refused", <<<< HERE
"output":{"Elasticsearch":{"Headers":null,"Hosts":["localhost:9200"], <<<< HERE


....

Curious if you used our documentation or something else? ...

Pretty sure that if you just follow the quickstart carefully, this should all work.

If I get a chance I will try...

Yes, I used the quickstart you mention, but I didn't just copy and paste it; I modified the fleet-server host to a wrong address. :smiling_face_with_tear: That's how this post began.

And now I have given up on fixing it in place. I deleted all the resources in my elasticsearch namespace, including the Elasticsearch instance, checked that my manifests were all correct, and then re-applied them. It works now.

I think the configuration is stored somewhere, started out wrong because of my mistake, and could not be fixed through the subsequent YAML modifications and the delete ~ apply ~ delete ~ apply cycles.

I tried to find the original configuration data; I checked the Elasticsearch indices and data streams, but in the end I couldn't find it.

So, this is the final question of this post :joy:

Where is the fleet-server configuration stored? You can see that my Kibana YAML was already changed and applied, and the Fleet Server hosts shown in the UI are correct too. So where does the fleet-server agent read its configuration from?

They are stored the same as any other Kubernetes manifests.

I am not familiar with microk8s.

The manifests are stored in Kubernetes, not Elasticsearch.

Eventually the configurations are loaded into Elastic (such as those Fleet endpoints), but you cannot access them directly; you would need to use the REST API.
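Roughly like this, for example (as far as I know, Fleet keeps its state in Kibana saved objects and hidden .fleet-* system indices, which is why it survives deleting and re-creating the Agent resources; the names below are assumed from your manifests):

PW=$(kubectl get secret elasticsearch-cluster-es-elastic-user -n elasticsearch \
  -o go-template='{{.data.elastic | base64decode}}')

# the hidden indices where Fleet keeps its state
kubectl port-forward -n elasticsearch svc/elasticsearch-cluster-es-http 9200:9200 &
curl -sk -u "elastic:$PW" "https://localhost:9200/_cat/indices/.fleet*,.kibana*?v&expand_wildcards=all"

# the same data through the supported Fleet API
kubectl port-forward -n elasticsearch svc/kibana-cluster-kb-http 5601:5601 &
curl -sk -u "elastic:$PW" https://localhost:5601/api/fleet/agent_policies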

I don't know if it's the order of operations or whether that Kubernetes environment is different, but I always delete and reapply rather than just overwrite when I'm debugging.

Not sure what to tell you