New Filebeat instance does not add Docker metadata to logs in Elastic

Hi there elastic community!

This is my first post here, I hope it is according to standards.

Recently I've stumbled upon a mysterious issue whose solution I cannot find: Docker metadata is not being added to logs sent by Filebeat to Elasticsearch. I will do my best to provide a clear explanation, but please ask for further details where needed.

In short:

My issue is that Filebeat's Docker metadata generation fails for one deployment that sends logs to ECK, while another one running in parallel (sending logs to Elastic Cloud) does successfully add the metadata.

In long:

I've recently set up ECK on an Azure Kubernetes cluster (v1.17.7) and have configured Filebeat to ship specific logs from my cluster to Elasticsearch (on Kubernetes). I have several inputs, some of which use Docker metadata to decide whether a log should be collected or not. That is where things go wrong.

The relevant part of my filebeat.yml is this:

- type: container
  paths:
    - /var/lib/docker/containers/*/*.log
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/lib/docker/containers/*/"
    - add_docker_metadata: ~
    - dissect:
        tokenizer: "dev-%{Peer}.%{Org}.pivt.com-%{Chaincode}-%{Version}"
        field: "container.name"
        target_prefix: ""
    - drop_event:
        when:
          not:
            regexp:
              container.name: "^dev-.*"
  ignore_older: ${IGNORE_OLDER}
  tags: ["chaincode"]

So I'm after the subset of logs in /var/lib/docker/containers/*/*.log whose container name starts with 'dev-', but Kibana is not showing any Docker metadata for those containers' logs. When I remove the drop_event part, the logs do come through (along with many I don't want), but the ones I do want are still missing the container metadata, as can be seen here:


The weird thing is that all other container logs (of containers whose names do not start with 'dev-') ARE showing Docker metadata in Kibana.

Alongside this setup, on the same cluster, I'm running another Filebeat instance which uses the same input definitions but sends its events to Elastic Cloud (so not ECK for this one), and those DO show the container metadata correctly for the logs I'm after:

Attempts at finding the issue

On another cluster, I deleted my Filebeat instances that ship to Elastic Cloud, so that the ECK version would be the only one collecting logs from /var/lib/docker/containers/*/, but unfortunately this did not make any difference.

System details

platform: Kubernetes version 1.17.7 on Azure (AKS)
ECK version: helm apply-all of ECK v1.3.0 (I also tried this with v1.2.1)
Elasticsearch/Kibana/Filebeat versions: all v7.9.3

I hope the above is clear, and I also hope one of you has a suggestion to sort out this weird issue.

Kind regards and thanks up front,

Ludo

Hi Ludo, for the problematic Filebeat instance (the one that's not including the docker metadata), could you temporarily disable output.elasticsearch and enable output.console or output.file and then look at the events written to this output? This would tell us whether Filebeat is including the Docker metadata in the events it's generating or not, and help us narrow down the source of the problem.
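
For reference, a minimal sketch of what that could look like in filebeat.yml (the path and filename here are just placeholders, not required values):

# comment out or remove output.elasticsearch, then enable a local output:
output.file:
  path: "/tmp/filebeat"         # directory to write the raw events to (placeholder)
  filename: "filebeat-debug"    # base name of the output file (placeholder)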

Thanks,

Shaunak

Hi there Shaunak,

Thanks for your response. I changed the output and now have lines that should contain the Docker metadata, though I had to dig a bit to find the relevant ones:

I switched the Filebeat output to file and inspected that, with the results below (I made some text replacements, which might have changed the JSON structure a bit):

{"@timestamp":"2020-11-26T07:35:45.647Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.3"},"input":{"type":"container"},"agent":{"type":"filebeat","version":"7.9.3","hostname":"project-eck-filebeat-filebeat-hbjwz","ephemeral_id":"d955ea7d-e1d4-43d1-be8d-bd4f1917da59","id":"95808bf2-4248-4197-9bd7-9e3748a9e1ac","name":"project-eck-filebeat-filebeat-hbjwz"},"ecs":{"version":"1.5.0"},"host":{"name":"project-eck-filebeat-filebeat-hbjwz"},"log":{"file":{"path":"/var/lib/docker/containers/3523704f1862a3835621b3c0dabd33bee8d3160592982ae5bee7f7a78195fe87/3523704f1862a3835621b3c0dabd33bee8d3160592982ae5bee7f7a78195fe87-json.log"},"offset":90443},"stream":"stdout","message":"2020-11-26T07:35:45.644Z \u001b[32minfo\u001b[39m [shim:chaincode]                                    smallValueTransferCheck outcome: undefined  ","tags":["chaincode"]}
{"@timestamp":"2020-11-26T07:35:51.929Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.3"},"log":{"file":{"path":"/var/lib/docker/containers/fd7c7a10b13a6c9c3f597e9317788e45a86586dfcb92acac135bddd4f0f1c1db/fd7c7a10b13a6c9c3f597e9317788e45a86586dfcb92acac135bddd4f0f1c1db-json.log"},"offset":8548},"stream":"stdout","message":"2020-11-26T07:35:51.929Z \u001b[32minfo\u001b[39m [shim:chaincode]                                    Invoke function [object Object]  ","tags":["chaincode"],"input":{"type":"container"},"host":{"name":"project-eck-filebeat-filebeat-hbjwz"},"agent":{"name":"project-eck-filebeat-filebeat-hbjwz","type":"filebeat","version":"7.9.3","hostname":"project-eck-filebeat-filebeat-hbjwz","ephemeral_id":"d955ea7d-e1d4-43d1-be8d-bd4f1917da59","id":"95808bf2-4248-4197-9bd7-9e3748a9e1ac"},"ecs":{"version":"1.5.0"}}
{"@timestamp":"2020-11-26T07:35:43.477Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.3"},"agent":{"name":"project-eck-filebeat-filebeat-hbjwz","type":"filebeat","version":"7.9.3","hostname":"project-eck-filebeat-filebeat-hbjwz","ephemeral_id":"d955ea7d-e1d4-43d1-be8d-bd4f1917da59","id":"95808bf2-4248-4197-9bd7-9e3748a9e1ac"},"ecs":{"version":"1.5.0"},"log":{"offset":88129,"file":{"path":"/var/lib/docker/containers/3523704f1862a3835621b3c0dabd33bee8d3160592982ae5bee7f7a78195fe87/3523704f1862a3835621b3c0dabd33bee8d3160592982ae5bee7f7a78195fe87-json.log"}},"stream":"stdout","message":"2020-11-26T07:35:43.477Z \u001b[32minfo\u001b[39m [shim:lib/handler.js]                               [p0001-032b02a4] Calling chaincode Invoke() succeeded. Sending COMPLETED message back to peer  ","tags":["chaincode"],"input":{"type":"container"},"host":{"name":"project-eck-filebeat-filebeat-hbjwz"}}
{"@timestamp":"2020-11-26T07:35:45.600Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.3"},"agent":{"version":"7.9.3","hostname":"project-eck-filebeat-filebeat-hbjwz","ephemeral_id":"d955ea7d-e1d4-43d1-be8d-bd4f1917da59","id":"95808bf2-4248-4197-9bd7-9e3748a9e1ac","name":"project-eck-filebeat-filebeat-hbjwz","type":"filebeat"},"ecs":{"version":"1.5.0"},"host":{"name":"project-eck-filebeat-filebeat-hbjwz"},"stream":"stdout","message":"2020-11-26T07:35:45.591Z \u001b[32minfo\u001b[39m [shim:chaincode]                                    Retrieving valid <...> table with type parameters  ","log":{"file":{"path":"/var/lib/docker/containers/5452dd3f6b32fbaea547452d689deae82ab1a1d6d7d7f06bc47049a8c574909b/5452dd3f6b32fbaea547452d689deae82ab1a1d6d7d7f06bc47049a8c574909b-json.log"},"offset":19131},"tags":["chaincode"],"input":{"type":"container"}}
{"@timestamp":"2020-11-26T07:35:39.288Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.3"},"stream":"stdout","message":"2020-11-26T07:35:39.286Z \u001b[32minfo\u001b[39m [shim:chaincode]                                    invoking/querying function {\"fcn\":\"updateSharedData\",\"params\":["chaincode"],"input":{"type":"container"},"ecs":{"version":"1.5.0"},"host":{"name":"project-eck-filebeat-filebeat-hbjwz"},"agent":{"type":"filebeat","version":"7.9.3","hostname":"project-eck-filebeat-filebeat-hbjwz","ephemeral_id":"d955ea7d-e1d4-43d1-be8d-bd4f1917da59","id":"95808bf2-4248-4197-9bd7-9e3748a9e1ac","name":"project-eck-filebeat-filebeat-hbjwz"},"log":{"offset":83528,"file":{"path":"/var/lib/docker/containers/3523704f1862a3835621b3c0dabd33bee8d3160592982ae5bee7f7a78195fe87/3523704f1862a3835621b3c0dabd33bee8d3160592982ae5bee7f7a78195fe87-json.log"}}}
{"@timestamp":"2020-11-26T07:35:38.190Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.3"},"kubernetes":{"statefulset":{"name":"hlf-<...>-peer0"},"labels":{"app":"hlf-<...>-peer0","controller-revision-hash":"hlf-<...>-peer0-686597b4fb","name":"hlf-peer","org":"hlf-peer","statefulset_kubernetes_io/pod-name":"hlf-peer--<...>peer0-0"},"container":{"name":"peer","image":"hyperledger/fabric-peer:1.4.7"},"node":{"name":"aks-agentpool-19422743-0"},"pod":{"name":"<...>--peer0-0","uid":"c7e29fe9-936c-49af-b14a-59d38b0a7afb"},"namespace":"hlf"},"ecs":{"version":"1.5.0"},"tags":["chaincode"],"container":{"image":{"name":"hyperledger/fabric-peer:1.4.7"},"id":"f5ebad3aa6f4e159637dd2ae04fcf5e490fd3d70cbeb4d58f87107d96dcddd31","runtime":"docker"},"log":{"file":{"path":"/var/lib/docker/containers/f5ebad3aa6f4e159637dd2ae04fcf5e490fd3d70cbeb4d58f87107d96dcddd31/f5ebad3aa6f4e159637dd2ae04fcf5e490fd3d70cbeb4d58f87107d96dcddd31-json.log"},"offset":723565},"input":{"type":"container"},"host":{"name":"project-eck-filebeat-filebeat-hbjwz"},"agent":{"version":"7.9.3","hostname":"project-eck-filebeat-filebeat-hbjwz","ephemeral_id":"d955ea7d-e1d4-43d1-be8d-bd4f1917da59","id":"95808bf2-4248-4197-9bd7-9e3748a9e1ac","name":"project-eck-filebeat-filebeat-hbjwz","type":"filebeat"},"stream":"stderr","message":"\u001b[34m2020-11-26 07:35:38.187 UTC [comm.grpc.server] 1 -> INFO 7b0\u001b[0m unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.244.3.49:53770 grpc.code=OK grpc.call_duration=85.107901ms"}
{"@timestamp":"2020-11-26T07:34:31.559Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.3"},"input":{"type":"container"},"ecs":{"version":"1.5.0"},"host":{"name":"project-eck-filebeat-filebeat-hbjwz"},"stream":"stderr","message":"2020-11-26T07:34:31.556Z\tERROR\t[kubernetes]\tadd_kubernetes_metadata/matchers.go:91\tError extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/*/'.","tags":["chaincode"],"container":{"image":{"name":"docker.elastic.co/beats/filebeat:7.9.3"},"id":"7f13761cb517676f1fa910bf99f5c7d7a12e8bc77b140f9b914a297ff4c3d0bb","runtime":"docker"},"kubernetes":{"namespace":"elastic-system","labels":{"controller-revision-hash":"56b8766d7c","heritage":"Tiller","pod-template-generation":"14","release":"project-eck-filebeat","app":"project-eck-filebeat-filebeat","chart":"filebeat-7.9.3"},"container":{"name":"filebeat","image":"docker.elastic.co/beats/filebeat:7.9.3"},"node":{"name":"aks-agentpool-19422743-0"},"pod":{"name":"project-eck-filebeat-filebeat-hbjwz","uid":"a6f5b82d-dea4-4ef9-a0da-c0c4d5027cea"}},"agent":{"version":"7.9.3","hostname":"project-eck-filebeat-filebeat-hbjwz","ephemeral_id":"d955ea7d-e1d4-43d1-be8d-bd4f1917da59","id":"95808bf2-4248-4197-9bd7-9e3748a9e1ac","name":"project-eck-filebeat-filebeat-hbjwz","type":"filebeat"},"log":{"offset":31086509,"file":{"path":"/var/lib/docker/containers/7f13761cb517676f1fa910bf99f5c7d7a12e8bc77b140f9b914a297ff4c3d0bb/7f13761cb517676f1fa910bf99f5c7d7a12e8bc77b140f9b914a297ff4c3d0bb-json.log"}}}
{"@timestamp":"2020-11-26T07:34:31.559Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.9.3"},"kubernetes":{"pod":{"name":"project-eck-filebeat-filebeat-hbjwz","uid":"a6f5b82d-dea4-4ef9-a0da-c0c4d5027cea"},"namespace":"elastic-system","labels":{"chart":"filebeat-7.9.3","controller-revision-hash":"56b8766d7c","heritage":"Tiller","pod-template-generation":"14","release":"project-eck-filebeat","app":"project-eck-filebeat-filebeat"},"container":{"name":"filebeat","image":"docker.elastic.co/beats/filebeat:7.9.3"},"node":{"name":"aks-agentpool-19422743-0"}},"agent":{"type":"filebeat","version":"7.9.3","hostname":"project-eck-filebeat-filebeat-hbjwz","ephemeral_id":"d955ea7d-e1d4-43d1-be8d-bd4f1917da59","id":"95808bf2-4248-4197-9bd7-9e3748a9e1ac","name":"project-eck-filebeat-filebeat-hbjwz"},"host":{"name":"project-eck-filebeat-filebeat-hbjwz"},"ecs":{"version":"1.5.0"},"log":{"offset":31086797,"file":{"path":"/var/lib/docker/containers/7f13761cb517676f1fa910bf99f5c7d7a12e8bc77b140f9b914a297ff4c3d0bb/7f13761cb517676f1fa910bf99f5c7d7a12e8bc77b140f9b914a297ff4c3d0bb-json.log"}},"stream":"stderr","message":"2020-11-26T07:34:31.556Z\tERROR\t[kubernetes]\tadd_kubernetes_metadata/matchers.go:91\tError extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/*/'.","tags":["chaincode"],"input":{"type":"container"},"container":{"runtime":"docker","image":{"name":"docker.elastic.co/beats/filebeat:7.9.3"},"id":"7f13761cb517676f1fa910bf99f5c7d7a12e8bc77b140f9b914a297ff4c3d0bb"}}

The last three lines show logs that I want to filter out; it looks like they have more data fields than the ones in the earlier lines.
Am I correct in identifying the missing container metadata here?

Hi @LudoNew and welcome!

I'm seeing the above error, which is definitely related to the add_kubernetes_metadata processor. However, it is for another container and not for the one you are mentioning. Do you see other similar errors in the output?

Also, you might want to try autodiscover instead, which adds Kubernetes and Docker metadata by default.
An example config can be found at https://github.com/elastic/beats/blob/master/deploy/kubernetes/filebeat-kubernetes.yaml#L22-L31
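
For reference, the relevant part of that example looks roughly like this (a sketch; the log path may need adjusting for your nodes):

filebeat.autodiscover:
  providers:
    - type: kubernetes          # discover pods via the Kubernetes API
      node: ${NODE_NAME}
      hints.enabled: true       # honour co.elastic.logs/* pod annotations
      hints.default_config:     # fallback input for pods without hints
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log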

Let me know about any updates!

Hi there, thanks for your response!

Those error logs only come from the Filebeat containers, which I do not care much about; unfortunately there are not many other error logs there.
It has been quite a challenge to find the logs I am interested in from the raw output, as A LOT is logged to /var/lib/docker/containers/*/ :slight_smile:

I haven't tried autodiscover yet; I will check that out (it will be after Christmas, lots to finish before then).

I do have a question: do you know if the presence of Kubernetes metadata has any effect on the functioning of add_docker_metadata?

No, add_kubernetes_metadata is a different thing. It retrieves metadata from the Kubernetes API, while add_docker_metadata retrieves metadata from the Docker API. The only implication could be that both write to the same container.* fields, which should not be an issue since the two resources should be identical; for instance, container.name should be unique no matter whether it was retrieved from the Kubernetes API or the Docker API.
