I am running Filebeat in a k8s cluster with one etcd/control-plane node and one worker node. My Filebeat configuration looks as follows:
setup.ilm.enabled: false

filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/mongodb-*.log
  processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
          - logs_path:
              logs_path: "/var/log/containers/"
  json:
    keys_under_root: true
    overwrite_keys: true
    add_error_key: true
    ignore_decoding_error: true

processors:
  - add_cloud_metadata:
  - add_host_metadata:

output.elasticsearch:
  hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  indices:
    - index: filebeat-%{[agent.version]}-%{[kubernetes.container.name]}-%{+yyyy.MM.dd}
Basically, I configured it to send the logs of each container in each pod to a dedicated index derived from the kubernetes.container.name property. Everything works fine while the cluster is up and running, but when I restart both VMs, Filebeat initially does not get any Kubernetes metadata and sends the logs to the default filebeat-%{[agent.version]}-%{+yyyy.MM.dd} index, which is not what I would expect from this configuration. Is this some sort of bug that I should report to https://github.com/elastic/beats, or have I misconfigured something? Is sending logs to dedicated indices like this a valid, supported Filebeat configuration?
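In case it matters, my understanding is that each entry under output.elasticsearch.indices can carry a condition, so the per-container rule could in principle be guarded so that events arriving without Kubernetes metadata are not routed by it. A rough sketch of what I mean (assuming when.has_fields is accepted on an indices entry, which I have not verified for my version) would be:

output.elasticsearch:
  indices:
    # Use the per-container index only when the metadata field actually exists;
    # events without it would then fall back to the default filebeat-* index.
    - index: filebeat-%{[agent.version]}-%{[kubernetes.container.name]}-%{+yyyy.MM.dd}
      when.has_fields: ['kubernetes.container.name']

Even if that works, it would only change where the unenriched events land; it would not explain why the metadata is missing right after the restart, which is the part I would like to understand.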