Filebeat misses docker container metadata when the container stops

I am trying the filebeat 6.0 beta container to collect containers' stdout/stderr logs. Everything works as expected except for one small problem.
First, let me give a brief description of my deployment.
I deploy my Spring Boot app using the swarm mode built into docker-ce 17.06.

docker service create --name smallredspot \
  --replicas 5 \
  --network dev \
  --detach=false \
  myregistry.sample.com:5000/smallredspot:latest

smallredspot is a simple Java app with 5 replicas running on a cluster of 3 swarm nodes. Then I run the filebeat service to collect the multiline Java logs printed to stdout.

docker service create --name filebeat \
  --mode global \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  --mount type=bind,src=/var/lib/docker/containers/,dst=/var/lib/docker/containers/ \
  --user root \
  --config src=filebeat,target=/usr/share/filebeat/filebeat.yml \
  docker.elastic.co/beats/filebeat:6.0.0-beta1

Here is filebeat.yml

filebeat.prospectors:
- type: log
  paths:
    - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  multiline.pattern: '^[[:space:]]+|^Caused by:'
  multiline.negate: false
  multiline.match: after
  tail_files: true
  processors:
  - add_docker_metadata: ~
output.logstash:
  hosts: ["192.168.1.120:5044"]
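
With these multiline settings, every line that starts with whitespace or with "Caused by:" is appended to the previous line, which covers the indented "at ..." frames and "Caused by:" lines of a Java stack trace. As an illustration (the class and file names are made up), the following lines end up in a single event; note that an exception header line printed at column 0, e.g. "java.lang.NullPointerException: ...", would start a new event with this pattern:

2017-08-23 15:03:49.054 ERROR 1 --- [main] c.e.demo.OrderService : order lookup failed
    at com.example.demo.OrderService.find(OrderService.java:42)
    at com.example.demo.OrderController.get(OrderController.java:17)
Caused by: java.lang.NullPointerException: id must not be null
    at com.example.demo.OrderRepository.load(OrderRepository.java:23)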

And logstash.conf

input {
  beats {
    port => 5044
    type => "docker_logs"
  }
}
output {
  if [type] == "docker_logs" {
    elasticsearch {
      hosts => "http://192.168.1.120:9200"
      index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+YYYY.MM}"
    }
  }
}

In order to merge a service's logs into a single view, I use the metadata that swarm adds to these containers automatically (e.g. "com.docker.swarm.service.name"). It has worked perfectly so far. But after updating my app service with docker service update --force smallredspot, I find a strange index created in elasticsearch:
[screenshot of the unexpected index in Elasticsearch]
And the message is:

{
    "_index": "%{[docker][container][labels][com][docker][swarm][service][name]}-2017.08",
    "_type": "docker_logs",
    "_id": "AV4N5o1hw7w152Pm1x8z",
    "_version": 1,
    "_score": 1,
    "_source": {
        "@timestamp": "2017-08-23T07:03:56.459Z",
        "docker": {
            "container": {
                "id": "43c3759ce800b61f5e1b1b7ff3b4431e7dc1d2cd3c70ff62a5b1e94c07420629"
            }
        },
        "source": "/var/lib/docker/containers/43c3759ce800b61f5e1b1b7ff3b4431e7dc1d2cd3c70ff62a5b1e94c07420629/43c3759ce800b61f5e1b1b7ff3b4431e7dc1d2cd3c70ff62a5b1e94c07420629-json.log",
        "offset": 8570,
        "stream": "stdout",
        "time": "2017-08-23T07:03:49.054752452Z",
        "log": "2017-08-23 15:03:49.054 INFO 1 --- [ Thread-2] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown",
        "beat": {
            "name": "60e1969b6f44",
            "hostname": "60e1969b6f44",
            "version": "6.0.0-beta1"
        },
        "prospector": {
            "type": "log"
        },
        "@version": "1",
        "host": "60e1969b6f44",
        "type": "docker_logs",
        "tags": [
            "beats_input_raw_event"
        ]
    }
}

It seems to be the last log line written while the container was shutting down, and the container metadata is missing. Because of that, the log goes to the wrong place and never shows up in the smallredspot-* index. It's not too serious for me, but it could be a real problem in stricter setups.
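
For now I can work around it on the Logstash side by routing events that arrive without the swarm label to a fallback index, so at least they don't create the literal %{...} index. A rough sketch (untested; the orphaned-container-logs index name is just an example):

output {
  if [type] == "docker_logs" {
    if [docker][container][labels][com][docker][swarm][service][name] {
      elasticsearch {
        hosts => "http://192.168.1.120:9200"
        index => "%{[docker][container][labels][com][docker][swarm][service][name]}-%{+YYYY.MM}"
      }
    } else {
      # events missing the swarm metadata land here instead
      elasticsearch {
        hosts => "http://192.168.1.120:9200"
        index => "orphaned-container-logs-%{+YYYY.MM}"
      }
    }
  }
}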

Uhm,

This may be happening when we read the log line after the container has stopped. Am I right that messages from the new container get the correct metadata while it's running?

If that's the case, could you open a bug report for it? I have some ideas on how to fix it; we just need to keep the metadata for that container available for a while after it dies.
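
Roughly, the processor could keep the metadata cached with a grace period, something like this (just a sketch of the idea; the option name and value here are illustrative):

processors:
- add_docker_metadata:
    # illustrative: keep container metadata cached for a while after the
    # container stops, so trailing log lines can still be enriched
    cleanup_timeout: 60s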

@exekias
I have opened an issue: https://github.com/elastic/beats/issues/4986
Yes, the metadata is added to the containers automatically when the swarm service is created. The automatically added container labels include:

"Labels": {
                "com.docker.swarm.node.id": "3y3osp51qibkdu6gs27ululwt",
                "com.docker.swarm.service.id": "sumapazulprf5mmutxnunt06j",
                "com.docker.swarm.service.name": "smallredspot",
                "com.docker.swarm.task": "",
                "com.docker.swarm.task.id": "1ezph3u77g94lpge9kbl9xgh9",
                "com.docker.swarm.task.name": "smallredspot.1.1ezph3u77g94lpge9kbl9xgh9"
            }
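
These labels can be checked on any running task container with docker inspect, for example (using the task name from the labels above; any container id works too):

docker inspect --format '{{ json .Config.Labels }}' smallredspot.1.1ezph3u77g94lpge9kbl9xgh9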

A suggestion for the new filebeat.

I can't find any way to filter which containers' logs should be collected. The exclude_files setting is not usable for this, because it works on the container id, which changes several times per day. So could we add new settings such as include_container: [regex] and exclude_container: [regex]? Without something like this, filebeat ships the logs of every container, including unplanned ones, which has some impact on the host's network performance.
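
In the meantime, one possible workaround is to drop unwanted events with a processor condition on the swarm service label, extending the processors list in filebeat.yml above. A sketch (untested; the service name pattern is just an example), keeping in mind that filebeat still reads every container's log file and this only reduces what gets shipped:

processors:
- add_docker_metadata: ~
- drop_event:
    when:
      not:
        regexp:
          # keep only events from the services we care about (example pattern)
          docker.container.labels.com.docker.swarm.service.name: '^smallredspot$'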

By the way, many thanks for your great work! I have been looking for a way to collect containers' multiline logs (usually Java stack traces) for quite a long time, and this is the only workaround I have found so far. Since the Docker team refuses to add multiline support to the gelf log driver, filebeat seems to be the most direct and simplest solution for this.

