Hello,
We have Filebeat installed as a DaemonSet in our Kubernetes cluster to collect logs from different pods using hints-based autodiscover. The Filebeat logs show plenty of errors of this type:
ERROR [autodiscover] cfgfile/list.go:99 Error creating runner from config: failed to initialize condition: missing or invalid condition
Sometimes we find that we are not getting any logs from certain pods, for no apparent reason. We wonder if this error could be the cause. Could you explain what this error means?
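For context, the pods we expect to collect from are annotated along these lines (pod name and annotation values are illustrative; the elasticservice/* annotations are the ones our config forwards):

apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    # Required for collection, since we disable the hints default config
    co.elastic.logs/enabled: "true"
    # Our own routing annotations, picked up via include_annotations
    elasticservice/cluster: "example-cluster"
    elasticservice/index: "example-index"
    elasticservice/pipeline: "example-pipeline"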
Our Filebeat config looks like this:
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
      - /var/log/kube-apiserver-audit.log
      fields:
        audit: true
      json:
        add_error_key: true
    - type: log
      paths:
      - /var/log/kube-apiserver.log
      - /var/log/kube-controller-manager.log
      - /var/log/kube-proxy.log
      - /var/log/kube-scheduler.log
      fields:
        system: true
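    # Hints-based autodiscover: because default_config.enabled is false, only
    # pods explicitly annotated with co.elastic.logs/enabled: "true" get an input.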
    filebeat.autodiscover:
      providers:
      - type: kubernetes
        node: ${NODE_NAME}
        hints:
          enabled: true
          default_config.enabled: false
          default_config:
            type: container
            paths:
            - /var/log/containers/*-${data.kubernetes.container.id}.log
            close_inactive: 5m
        include_annotations:
        - elasticservice/cluster
        - elasticservice/index
        - elasticservice/pipeline
        add_resource_metadata:
          namespace:
            enabled: true
    logging:
      level: info
      metrics.enabled: false
    http:
      enabled: true
      host: localhost
      port: 5066
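    # Global processors, applied to every event in the order defined: drop events
    # from the excluded namespace, strip managedFields from audit events, then
    # tag all events with the hosting metadata.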
    processors:
    - drop_event:
        when:
          contains:
            kubernetes.namespace: "xxx"
    - drop_fields:
        when:
          has_fields: ['fields.audit']
        fields:
        - "json.requestObject.metadata.managedFields"
        - "json.responseObject.metadata.managedFields"
        ignore_missing: true
    - add_fields:
        target: ''
        fields:
          hosting:
            type: k8s
            name: elasticservice
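    # Note: ttl only takes effect with the synchronous client (pipelining: 0),
    # which is why pipelining is disabled here.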
    output.logstash:
      hosts:
      - logstash.elastic-prod.svc:5044
      pipelining: 0
      ttl: 2m
      worker: 4
      bulk_max_size: 2000
      slow_start: false
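For reference, we do not define any autodiscover templates with explicit conditions ourselves, so as far as we can tell the only conditions in play are the ones Filebeat builds internally from the hints. For comparison, a templates-based provider with an explicit condition would look roughly like this (a sketch based on the autodiscover docs; the namespace name is illustrative):

filebeat.autodiscover:
  providers:
  - type: kubernetes
    node: ${NODE_NAME}
    templates:
    - condition:
        equals:
          kubernetes.namespace: example-ns
      config:
      - type: container
        paths:
        - /var/log/containers/*-${data.kubernetes.container.id}.log

Thanks in advance.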