Elastic Agents on Openshift cluster agent name set incorrectly to pod name

Hello all

In our environment we have multiple OpenShift/Kubernetes clusters.
We use Observability and Security and have configured multiple Fleet policies to separate the different operating systems.
In addition, we also monitor other OpenShift clusters and have Elastic Agents running on every node of those clusters.

In Observability > Infrastructure, all the OpenShift nodes and other systems are shown with the correct name EXCEPT for the "local cluster" where Elasticsearch, Kibana etc. are installed.
In the latter case the pod name is displayed instead of the node name.

When I connect to the pod, the contents of /etc/hostname are wrong, but the environment variable NODE_NAME does have the correct value:

```
PS C:\Users\SomeUser> oc rsh elastic-agent-agent-7vpbm
sh-5.1# cat /etc/hostname
elastic-agent-agent-7vpbm
sh-5.1# env | sort | grep NODE_NAME
NODE_NAME=cn2-prod4-kchwg-master-0
```

Does anyone have a clue how to change the installation manifests so that the name is set correctly?

Regards Hans-Peter

Hi,

You can modify the Elastic Agent's configuration to use the NODE_NAME environment variable instead of the system's hostname. This can be done by setting the host value in the agent's configuration to `${NODE_NAME}`.
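For context, NODE_NAME is normally injected through the Kubernetes downward API in the DaemonSet's pod template, which is why it already carries the correct node name even though /etc/hostname does not. A minimal sketch of that pattern (the container name `agent` matches the spec shown later in this thread):

```yaml
# Sketch: how NODE_NAME is typically populated via the downward API.
spec:
  containers:
  - name: agent
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName   # resolves to the node the pod runs on
```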

Regards

Hello Yago
Well, the Agent pods are part of a DaemonSet, which is owned by an Agent resource.
I have not been able to find out how to change the daemonSet specification to achieve that.

If you can help that would be very welcome.

Below is the daemonSet spec:

Regards Hans

```yaml
daemonSet:
    podTemplate:
      metadata:
        creationTimestamp: null
      spec:
        automountServiceAccountToken: true
        containers:
        - name: agent
          resources:
................
          securityContext:
            privileged: true
            runAsUser: 0
          volumeMounts:
          - mountPath: /hostfs/proc
..................
        securityContext:
......
        serviceAccountName: elastic-agent
        tolerations:
        - operator: Exists
        volumes:
        - hostPath:
..............
    updateStrategy: {}
  fleetServerRef:
    name: fleet-server
```

This does not work:

```yaml
  daemonSet:
    podTemplate:
      spec:
        tolerations:
        - operator: "Exists"
        serviceAccountName: elastic-agent
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
        hostname: ${NODE_NAME}
```
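One likely reason this fails: Kubernetes does not expand `${NODE_NAME}` in the pod spec's `hostname` field (variable expansion only applies to `command`, `args`, and `env` values, and uses the `$(VAR)` syntax), and `hostname` must be a literal DNS-1123 label. A workaround worth trying, and the pattern used in Elastic's reference Elastic Agent DaemonSet manifests, is to run the pod in the host's network namespace so it inherits the node's hostname. A sketch, assuming the rest of the podTemplate stays as shown above:

```yaml
  daemonSet:
    podTemplate:
      spec:
        # With hostNetwork enabled, the kubelet gives the pod the node's
        # hostname, so /etc/hostname inside the container matches the node.
        hostNetwork: true
        # Recommended alongside hostNetwork so in-cluster DNS still resolves.
        dnsPolicy: ClusterFirstWithHostNet
        serviceAccountName: elastic-agent
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
        tolerations:
        - operator: "Exists"
```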
