Filebeat is partially collecting logs

Hi

I'm having an odd issue with Filebeat in a Kubernetes cluster.
Filebeat collects logs for some pods, but not for others.
I don't see any noticeable error messages in the Filebeat log. Could it be that I misconfigured something?

I recently updated from 6.6.2 to 6.7.1 to see if that would fix the issue, but it didn't help.

These are my ConfigMaps. First, filebeat-config:

apiVersion: v1
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    #cloud.id: ${ELASTIC_CLOUD_ID}
    #cloud.auth: ${ELASTIC_CLOUD_AUTH}

    setup.template.name: "npr-01-%{[kubernetes.namespace]:filebeat}"
    setup.template.pattern: "npr-01-%{[kubernetes.namespace]:filebeat}-*"

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: '${ELASTICSEARCH_USERNAME:filebeat_internal}'
      password: '${ELASTICSEARCH_PASSWORD:password}'
      index: "npr-01-%{[kubernetes.namespace]:filebeat}-%{+yyyy.MM.dd}"

And here is filebeat-inputs:

apiVersion: v1
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            labels.dedot: true
            annotations.dedot: true
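
Since the docker input reads the JSON log files straight from the host, a missing /var/lib/docker/containers mount on a node would make that node's container logs silently absent, which is one possible cause of "some pods but not others". To rule that out, the directory can be checked from inside one of the Filebeat pods (a sketch; it assumes the DaemonSet runs in kube-system as in the stock manifest, and borrows a pod name from the log output below):

# Confirm the host's container log directory is mounted and readable
# inside a Filebeat pod:
kubectl -n kube-system exec filebeat-tqdcm -- ls /var/lib/docker/containers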

And here is some of the log output from Filebeat:

[filebeat-tqdcm] 2019-04-10T14:31:09.386Z WARN elasticsearch/client.go:539 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x7f8fa71, ext:63690503468, loc:(*time.Location)(nil)}, Meta:common.MapStr(nil), Fields:common.MapStr{"offset":9060301, "stream":"stderr", "input":common.MapStr{"type":"docker"}, "prospector":common.MapStr{"type":"docker"}, "beat":common.MapStr{"hostname":"filebeat-tqdcm", "version":"6.7.1", "name":"filebeat-tqdcm"}, "host":common.MapStr{"name":"filebeat-tqdcm"}, "log":common.MapStr{"file":common.MapStr{"path":"/var/lib/docker/containers/a327ebe79a081b72d8f58e1678bb887ab7a29308e6339e2d6677a18e5a90371b/a327ebe79a081b72d8f58e1678bb887ab7a29308e6339e2d6677a18e5a90371b-json.log"}}, "message":"   Active: active (running) since Thu 2019-04-04 20:56:06 UTC; 5 days ago", "source":"/var/lib/docker/containers/a327ebe79a081b72d8f58e1678bb887ab7a29308e6339e2d6677a18e5a90371b/a327ebe79a081b72d8f58e1678bb887ab7a29308e6339e2d6677a18e5a90371b-json.log", "meta":common.MapStr{"cloud":common.MapStr{"region":"ca-central-1", "availability_zone":"ca-central-1a", "provider":"ec2", "instance_id":"i-0b3fe3cfe2e2ca1b4", "machine_type":"t2.small"}}}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc4202248f0), Source:"/var/lib/docker/containers/a327ebe79a081b72d8f58e1678bb887ab7a29308e6339e2d6677a18e5a90371b/a327ebe79a081b72d8f58e1678bb887ab7a29308e6339e2d6677a18e5a90371b-json.log", Offset:9060445, Timestamp:time.Time{wall:0xbf2398aee4e7e416, ext:1220801720, loc:(*time.Location)(0x21eb640)}, TTL:-1, Type:"docker", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x180062, Device:0xca02}}}, Flags:0x1} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_argument_exception","reason":"Cannot write to a field alias [beat.name]."}}

[filebeat-9zxpp] 2019-04-10T14:47:19.571Z WARN elasticsearch/client.go:539 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x2b44d5af, ext:63690504436, loc:(*time.Location)(nil)}, Meta:common.MapStr(nil), Fields:common.MapStr{"host":common.MapStr{"name":"filebeat-9zxpp"}, "meta":common.MapStr{"cloud":common.MapStr{"instance_id":"i-0895726462054147b", "provider":"ec2", "machine_type":"t2.small", "region":"ca-central-1", "availability_zone":"ca-central-1b"}}, "stream":"stderr", "message":"   Loaded: loaded (/lib/systemd/system/kubelet.service; static; vendor preset: enabled)", "prospector":common.MapStr{"type":"docker"}, "beat":common.MapStr{"version":"6.7.1", "name":"filebeat-9zxpp", "hostname":"filebeat-9zxpp"}, "source":"/var/lib/docker/containers/4825d6abe4b21a47b5073aa6353efb2712222e77a0039507fd49109f3c62de63/4825d6abe4b21a47b5073aa6353efb2712222e77a0039507fd49109f3c62de63-json.log", "offset":3579435, "log":common.MapStr{"file":common.MapStr{"path":"/var/lib/docker/containers/4825d6abe4b21a47b5073aa6353efb2712222e77a0039507fd49109f3c62de63/4825d6abe4b21a47b5073aa6353efb2712222e77a0039507fd49109f3c62de63-json.log"}}, "input":common.MapStr{"type":"docker"}}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc4203252b0), Source:"/var/lib/docker/containers/4825d6abe4b21a47b5073aa6353efb2712222e77a0039507fd49109f3c62de63/4825d6abe4b21a47b5073aa6353efb2712222e77a0039507fd49109f3c62de63-json.log", Offset:3579593, Timestamp:time.Time{wall:0xbf2398ae1a2ab368, ext:1189849358, loc:(*time.Location)(0x21eb640)}, TTL:-1, Type:"docker", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x1801a8, Device:0xca02}}}, Flags:0x1} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_argument_exception","reason":"Cannot write to a field alias [beat.name]."}}

Hello, looking at the alias error, are you sure you are not using a 7.0 mapping with a 6.7 Beat?
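
One way to check: ask Elasticsearch how beat.name is mapped on the indices the events are going to. If it comes back with "type": "alias", the indices were created from a 7.x template (a sketch, using the default host, credentials, and index pattern from your config):

# Show how beat.name is mapped on all npr-01-* indices; an "alias"
# type would confirm a 7.x template created them:
curl -s -u 'filebeat_internal:password' 'http://elasticsearch:9200/npr-01-*/_mapping/field/beat.name?pretty'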

Could you elaborate on your question?
There is a Kubernetes cluster that used to run Filebeat 7.0.0-alpha1 but now runs v6.7.1. However, this k8s cluster never ran any 7.x Filebeat.

Also, as far as I recall, there were no breaking changes that would affect my ES cluster when upgrading from 6.6.x to 6.7.x.

Thank you
