Filebeat not reading all Docker container logs

Filebeat is reading some Docker container logs, but not all of them. I checked one of the log files that was being excluded: it has been written to recently, but I can't see any information for that container in Kibana. There are also far fewer container logs available in Kibana than there are container log files on disk.

 filebeat.inputs:
- type: docker
  combine_partial: true
  processors:
- add_docker_metadata: ~
  containers:
path: "/var/lib/docker/containers"
stream: "stdout"
ids:
  - "*"
  # Change to true to enable this input configuration.
  enabled: true

Is there any additional information you need in order to help with this?

Hi @elizajanus,

There seems to be a formatting issue with the configuration you pasted in your comment; the indentation doesn't look correct. Just to confirm, a basic configuration to collect all logs and add the metadata would look like this:

filebeat.inputs:
- type: docker
  containers:
    ids:
      - "*"
  processors:
    - add_docker_metadata: ~

Apart from this, could you also check whether there are any errors in the Filebeat logs?

Hi @jsoriano, I apologize, I must have messed up the formatting when I pasted. The indentation is correct in the file, though.

filebeat.inputs:
- type: docker
  combine_partial: true
  processors:
    - add_docker_metadata: ~
  containers:
    path: "/var/lib/docker/containers"
    stream: "stdout"
    ids:
      - "*"

Since posting this, I went through our VMs and updated the Filebeat configs to list all the individual container logs, but I still can't find any logs from our Kafka and Zookeeper containers, even though these log files exist and are being written to. In the Filebeat logs there is an error:
ERROR log/harvester.go:281 Read line error: invalid CRI log format; File: /var/lib/docker/containers/[container id]-json.log

The error is listed for various container logs. I'm confused as to why this error is happening for some container logs and not others.

With your configuration, only logs written to stdout will be collected; maybe the services that are not being collected are logging to stderr. Try removing the stream: "stdout" line, or changing it to stream: "all".
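For example, keeping the rest of your input as it is, the change would look like this (stream: "all" collects both stdout and stderr):

filebeat.inputs:
- type: docker
  combine_partial: true
  processors:
    - add_docker_metadata: ~
  containers:
    path: "/var/lib/docker/containers"
    stream: "all"
    ids:
      - "*"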

In the logs there is an error:
ERROR log/harvester.go:281 Read line error: invalid CRI log format; File: /var/lib/docker/containers/[container id]-json.log

If you keep seeing this error, take a look at this comment.


Hi @jsoriano,

Thanks! I've updated the config to stream: "all" and I'll let you know if I start seeing the missing container logs. If not, I'll try reinstalling with a clean registry.
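For reference, my understanding is that clearing the registry comes down to something like this (assuming a default deb/rpm install; the registry path may differ on other setups, and Filebeat will re-send files from the beginning afterwards):

sudo systemctl stop filebeat
sudo rm -rf /var/lib/filebeat/registry   # default registry location for deb/rpm installs
sudo systemctl start filebeat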


@jsoriano we now have an entirely different problem where Elasticsearch has stopped responding altogether. The Elasticsearch Docker container's CPU usage is fluctuating between 100% and 1000%, and its I/O is 315GB/56.5GB. We are going to try to deploy Elasticsearch with more than one node. Do you have any other suggestions for fixing this problem so we can continue to work on the logging problem?

Did this problem start when you changed to stream: "all"? Is it possible that the volume of logs collected after this change is simply that large?

If this new problem is not related to this configuration change, I'd recommend opening a new topic in the Elasticsearch category.

Hi @jsoriano, I managed to get Elasticsearch back up and running with AWS Elasticsearch. I am now seeing a problem where Filebeat is not reading logs from all of the machines, despite having confirmed that Filebeat is running and set up the same on all of them. I am also not getting ssh or sudo logs from the system module. Filebeat seems to be reading all of the container logs on only one of the machines (it is set up to read from all containers on 9 machines). I'm confused as to why this would work on only one machine when they are all set up the same and the logs don't indicate any differences.

Hi @elizajanus,

There seem to be different problems here.

If all machines have Filebeat configured the same, could some of them have connectivity problems with ES? This can happen if they are on different networks, but connectivity problems should show up in the logs. It'd be useful if you could share the config and the start-up logs of the Filebeat that is working, plus the logs from one that is not working.
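You can also verify the configuration and the connection to the output directly on each machine with Filebeat's built-in test commands (run them wherever Filebeat runs; if it runs in a container, exec into it first):

filebeat test config
filebeat test output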

Regarding ssh and sudo logs: if you are running Filebeat as a container, you need to mount the host's log directory into the container so the system module can read the files.
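As a rough sketch, if you start Filebeat with docker run, the mounts would look something like this (the image tag and exact mounts depend on your setup; /var/log is where the system module expects auth and syslog files on most Linux distributions, and you may prefer mounting it at a different path and pointing the module's var.paths there):

docker run -d \
  -v /var/log:/var/log:ro \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  docker.elastic.co/beats/filebeat:<version>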

I discovered that the issue was that the Filebeat system module does not work with AWS Elasticsearch, due to its lack of support for the geoip plugin. I am still confused as to why it worked on one machine for a while, but after removing the system module I'm back to square one. Although I am seeing more logs now that stderr is included, and I can see that events for our Kafka and Zookeeper containers are being published to Elasticsearch just like the events for the other containers, I do not see any of the Kafka and Zookeeper events in Kibana.
