Our applications are deployed in an AWS EKS cluster, and for certain reasons we need to write our app logs to a separate file, say ${POD_NAME}.applog, instead of stdout. We mounted the node's /var/log/containers/ directory into the pod at /log, and the app writes to /log/${POD_NAME}.applog (a sketch of this mount is shown after the config below). We use Filebeat to ship the logs to Elasticsearch, and Kibana for visualization. Our Filebeat config looks like this:
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/containers/*.applog
      json.keys_under_root: true
      json.message_key: log
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
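For reference, the log-file mount described above looks roughly like this in the application pod spec (a minimal sketch; the pod, container, image, and volume names are made up, not our real manifest):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              # the app uses this to name its log file
              fieldPath: metadata.name
      volumeMounts:
        - name: applog
          # the app writes /log/${POD_NAME}.applog here
          mountPath: /log
  volumes:
    - name: applog
      hostPath:
        # node directory that Filebeat reads from
        path: /var/log/containers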
This works fine, but we realised we are missing the Kubernetes metadata in Elasticsearch and Kibana. However, we do get the Kubernetes metadata when we include - type: container:
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/containers/*.applog
      json.keys_under_root: true
      json.message_key: log
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
So we tried adding the add_kubernetes_metadata processor to our config like this:
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/containers/*.applog
      json.keys_under_root: true
      json.message_key: log
    processors:
      - add_kubernetes_metadata:
          in_cluster: true
          host: ${NODE_NAME}
      - add_cloud_metadata:
      - add_host_metadata:
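For context, ${NODE_NAME} is assumed to be injected into the Filebeat DaemonSet via the Downward API, roughly like this (illustrative excerpt from the Filebeat container spec):

env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        # expose the node's name to the Filebeat pod
        fieldPath: spec.nodeName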
We are still not getting the Kubernetes metadata in Kibana. I have tried several variations by trial and error, but nothing works.
Can someone please help me understand how to get Kubernetes metadata added to a custom log file input in Filebeat?