In our environment we have multiple OpenShift/Kubernetes clusters.
We use Observability and Security, and we have configured multiple Fleet policies to separate the different operating systems.
In addition, we also monitor other OpenShift clusters and have Elastic Agents running on every node of each cluster.
In Observability > Infrastructure, all the OpenShift nodes and other systems are shown with the correct name EXCEPT for the "local" cluster where Elasticsearch, Kibana, etc. are installed.
In that case, the name of the pod is displayed instead of the node name.
When I connect to the pod, the contents of /etc/hostname are wrong.
The environment variable NODE_NAME does have the correct value!
You can modify the Elastic Agent's configuration to use the NODE_NAME environment variable instead of the system's hostname. This can be done by setting the host value in the agent's configuration to ${NODE_NAME}.
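As a rough sketch of that idea, something like the following processor fragment could overwrite the reported host name with the value of NODE_NAME. This is an assumption, not a verified fix: the `add_fields` processor and `${NODE_NAME}` variable substitution exist in the Beats/Agent configuration syntax, but whether this overrides the name shown in Infrastructure depends on your version, so verify against the documentation.

```yaml
# Hypothetical fragment for an Elastic Agent / Beats-style configuration.
# Assumes NODE_NAME is set in the container environment (downward API)
# and that ${...} env substitution is honored at this point in the config.
processors:
  - add_fields:
      target: host
      fields:
        name: ${NODE_NAME}   # report the node name instead of the pod hostname
```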
Hello Yago,
the Agent pods are part of a DaemonSet which is owned by an Agent custom resource.
I have not been able to find out how to change the DaemonSet specification to achieve that.
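When the DaemonSet is owned by an ECK Agent resource, edits made directly to the DaemonSet get reconciled away; customizations normally go through the Agent resource's pod template instead. A minimal sketch, assuming the ECK operator and its `spec.daemonSet.podTemplate` field (names and version below are illustrative, check them against your ECK release):

```yaml
# Hypothetical sketch: customizing the agent container through the
# ECK Agent resource rather than the generated DaemonSet.
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: elastic-agent          # assumed resource name
spec:
  version: 8.13.0              # illustrative version
  daemonSet:
    podTemplate:
      spec:
        containers:
          - name: agent
            env:
              # Expose the node name via the Kubernetes downward API
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
```

The operator merges this pod template into the DaemonSet it manages, so the change survives reconciliation.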