I have been struggling with Metricbeat's awkward event management. I find it a poor design that the main configuration is logged, yet for the modules you have to do a special debug run just to see publish events.
I have been trying to connect to Kubernetes on 10250, but the Elastic documentation focuses on 10255, and when it refers to a bearer token, the example given is fictional: there is no /usr/share/secret in Kubernetes. If this is meant as an example, could you give additional commands to rule out an RBAC permissions issue? Either way, the logs never tell me anything. How can a system application produce no logs when a warning or a fatal connection failure happens? That sounds like poor exception handling.
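For what it's worth, the kind of RBAC sanity check I am asking for could look something like this. This is only a sketch: it assumes a service account named "metricbeat" in the "kube-system" namespace, and the pre-1.24 behavior where service accounts have an auto-created token secret; adjust the names to your deployment.

```shell
# 1. Ask the API server whether the account is allowed to read kubelet stats
kubectl auth can-i get nodes/stats --as=system:serviceaccount:kube-system:metricbeat

# 2. Extract that account's bearer token from its secret (names are assumptions)
TOKEN=$(kubectl -n kube-system get secret \
  "$(kubectl -n kube-system get sa metricbeat -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 --decode)

# 3. Probe the secure kubelet port directly; -k skips TLS verification for the test
curl -sk -H "Authorization: Bearer $TOKEN" https://localhost:10250/stats/summary | head
```

If step 1 says "no" or step 3 returns 401/403, the problem is permissions, not Metricbeat's config.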
This lack of information is crippling development, and I dread the day it comes to upgrading... self-inflicted design complexity?
I STRONGLY encourage Elastic to rethink how modules connect, and if they fail to connect, to say so in the Metricbeat log. It seems obvious that if there is a configuration error in one of the modules, it should tell me, regardless of whether I have debug on.
My problems are:
I am using the kube.config file from the example for "outside cluster config", and I find that I am unable to tell whether it is connecting or not, whether it lacks permissions or not, or really anything at all.
How do I get my Metricbeat to collect data from the kubelet?
Here is my config:
- module: kubernetes
  metricsets:
    - container
    - node
    - pod
    - system
    - volume
  period: 10s
  hosts: ["localhost:10250"]
  enabled: true
  in_cluster: false
  kube_config: /root/.kube/config
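For comparison, here is a sketch of how I understand the secure port is usually configured. This is an assumption on my part: 10250 is the kubelet's TLS port, so the scheme presumably needs to be explicit, a bearer token supplied, and the TLS verification settings relaxed or pointed at the cluster CA.

```yaml
- module: kubernetes
  metricsets: [container, node, pod, system, volume]
  period: 10s
  # 10250 is the kubelet's TLS port, so https must be explicit (assumption)
  hosts: ["https://localhost:10250"]
  # token path is the in-cluster convention; out of cluster, point at a token file you extracted
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  ssl.verification_mode: none   # or set ssl.certificate_authorities to the cluster CA
  in_cluster: false
  kube_config: /root/.kube/config
```

With plain "localhost:10250" and no token, I would expect a TLS or 401 failure, which is exactly what Metricbeat should be logging.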
All components are 6.8.0, and this setup is Metricbeat shipping directly to Elasticsearch, with Metricbeat installed on the host system rather than in a container.
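For reference, the "special event run" I complained about above looks something like this, assuming a package install with the config in the default location:

```shell
# Run Metricbeat in the foreground: -e logs to stderr,
# -d "*" enables every debug selector so module errors have a chance of showing up
metricbeat -e -d "*" -c /etc/metricbeat/metricbeat.yml
```

Even with that, module connection failures should not require a debug run to surface.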