Hello all, I have recently been facing a problem of inconsistent Filebeat state since I started running two Filebeat DaemonSets on my Kubernetes cluster. For reference, I am attaching my two Filebeat configurations below.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      include_annotations: ["artifact.spinnaker.io/name", "ad.datadoghq.com/tags"]
      include_labels: ["app"]
      labels.dedot: true
      annotations.dedot: true
      templates:
        - condition:
            and:
              - equals:
                  kubernetes.namespace: default
              - or:
                  - equals:
                      kubernetes.container.name: "cnc-consumer"
                  - equals:
                      kubernetes.container.name: "mint-queue"
                  - equals:
                      kubernetes.container.name: "sol-claim-settlement-queue"
                  - equals:
                      kubernetes.container.name: "user-initiate-consumer"
                  - equals:
                      kubernetes.container.name: "refund-consumer"
                  - equals:
                      kubernetes.container.name: "cnc-consumer-pact"
                  - equals:
                      kubernetes.container.name: "mint-queue-pact"
                  - equals:
                      kubernetes.container.name: "order-initiate-consumer-pact"
                  - equals:
                      kubernetes.container.name: "refund-consumer-pact"
                  - equals:
                      kubernetes.container.name: "user-initiate-consumer-pact"
          config:
            - type: container
              json.overwrite_keys: true
              json.add_error_key: true
              json.keys_under_root: true
              json.message_key: target
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              processors:
                - decode_json_fields:
                    fields: ["span", "fields"]
                    target: ""
                    overwrite_keys: true
filebeat.inputs:
  - type: log
    paths:
      - /usr/share/filebeat/logs/filebeat
This Filebeat is newly created and ships my logs to Logstash; it is the one where I am seeing this error:

Error creating runner from config: Can only start an input when all related states are finished
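From what I have read, this error is raised when Filebeat tries to create an input for files whose states are still held by another input in the same instance, for example when two inputs have overlapping path globs. A minimal illustration of the kind of overlap I understand can trigger it (the globs below are made up for illustration, they are not from my manifests):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*.log        # broad glob
  - type: log
    paths:
      - /var/log/containers/app-*.log    # overlaps with the glob above, so both inputs claim the same file states

I cannot see an obvious overlap like that between my two configurations, which is part of my confusion.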
My second Filebeat DaemonSet, which was deployed earlier, works fine and shows no dropped logs. Here is its configuration:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      include_annotations: ["artifact.spinnaker.io/name", "ad.datadoghq.com/tags"]
      include_labels: ["app"]
      labels.dedot: true
      annotations.dedot: true
      templates:
        - condition:
            and:
              - equals:
                  kubernetes.namespace: default
              - not:
                  or:
                    - equals:
                        kubernetes.container.name: "cnc-consumer"
                    - equals:
                        kubernetes.container.name: "mint-queue"
                    - equals:
                        kubernetes.container.name: "sol-claim-settlement-queue"
                    - equals:
                        kubernetes.container.name: "user-initiate-consumer"
                    - equals:
                        kubernetes.container.name: "refund-consumer"
                    - equals:
                        kubernetes.container.name: "cnc-consumer-pact"
                    - equals:
                        kubernetes.container.name: "mint-queue-pact"
                    - equals:
                        kubernetes.container.name: "order-initiate-consumer-pact"
                    - equals:
                        kubernetes.container.name: "refund-consumer-pact"
                    - equals:
                        kubernetes.container.name: "user-initiate-consumer-pact"
          config:
            - type: container
              json.overwrite_keys: true
              json.add_error_key: true
              json.keys_under_root: true
              json.message_key: message
              paths:
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              exclude_lines: ["^\\s+[\\-`('.|_]"]
              processors:
                - decode_json_fields:
                    fields: ["logger_payload"]
                    target: ""
                    overwrite_keys: true
filebeat.inputs:
  - type: log
    paths:
      - /usr/share/filebeat/logs/filebeat
I am not able to understand what is going wrong here. Since I have separated the containers that the two DaemonSets collect from, why does the newly created Filebeat DaemonSet run into this error, even though logs from the same containers are shipped fine when I handle them with my second Filebeat DaemonSet?
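One thing I am not sure about is whether the two DaemonSets might be sharing the same registry directory on the host. In case it is relevant, this is a sketch of how I would give each instance its own data path; the hostPath below is an assumption for illustration, not what my manifests currently use:

# filebeat.yml for the new daemonset: keep its registry separate from the older instance
path.data: /usr/share/filebeat/data

# DaemonSet pod spec fragment: mount a per-daemonset hostPath at that location
volumes:
  - name: data
    hostPath:
      path: /var/lib/filebeat-logstash    # directory unique to this daemonset (assumed name)
      type: DirectoryOrCreate
containers:
  - name: filebeat
    # ...image and other fields omitted...
    volumeMounts:
      - name: data
        mountPath: /usr/share/filebeat/data

Would a shared or stale registry explain the behaviour I am seeing?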