Filebeat doesn't capture Docker's logs but others do, why? [solved]

Hello all,

I am following this tutorial to set up an ELK stack (VPS B) that will receive logs from some Docker/docker-compose containers (VPS A), using Filebeat as the forwarder; my diagram is shown below.

So far, I have managed to get all the interfaces with green ticks working. However, there are still some remaining issues that I am not able to understand, so I would appreciate it if someone could help me out a bit.

My main issue is that I don't get any Docker/docker-compose logs from VPSA into the Filebeat server of VPSB; nevertheless, I do get other logs from VPSA, such as rsyslog and authentication logs, on the Filebeat server of VPSB. I have configured my docker-compose file to forward the logs using the syslog logging driver, and Filebeat then forwards that syslog to VPSB. At this point I do see logs from the Docker daemon itself, such as virtual interfaces going up/down and processes starting, but not the "debug" logs of the containers themselves.

EDIT: this part was missing from the original post

The configuration of the Filebeat client on VPSA looks like this:

root@VPSA:/etc/filebeat# cat filebeat.yml 
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
#        - /var/log/*.log
      input_type: log
      document_type: syslog

  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["ipVPSB:5044"]
    bulk_max_size: 2048
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
    level: debug

The logging driver configuration for one of the docker-compose services looks like this:

redis:
    logging: 
       driver: syslog
       options:
           syslog-facility: user 
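In case the full section helps, it could look roughly like this with a tag added so the container's lines are easier to spot in /var/log/syslog (I haven't confirmed that tag is the right option name for the syslog driver):

redis:
    logging:
       driver: syslog
       options:
           syslog-facility: user
           tag: "redis"    # assumed option name; labels this container's lines in syslog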

Finally, I would like to ask whether it is possible to forward the logs natively from docker-compose to the Filebeat client on VPSA (the red arrow in the diagram), so that Filebeat can forward them to VPSB.
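What I have in mind is something like this (just a rough sketch on my side, not something I have tested: a prospector reading the per-container JSON log files that Docker's default json-file driver writes; each line would arrive as raw JSON that Logstash on VPSB would still have to parse):

filebeat:
  prospectors:
    -
      paths:
        # the json-file driver writes one *-json.log per container under this path
        - /var/lib/docker/containers/*/*-json.log
      input_type: log
      document_type: docker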

Thank you very much,

Regards!

  • Have you confirmed that the syslog daemon writes the logs to the file as expected?
  • Is that the same file that you've configured Filebeat to read?
  • Are there any signs that Filebeat tries to read the file? What's in the Filebeat logs?

@magnusbaeck

Thanks for replying,

Yes I have; in fact, if I look at /var/log/syslog on VPSA, I can see the expected logs from the Docker containers.

Yes it is; there is no other syslog in that folder, except the rotated ones (syslog.1 and so on).

I think there is, because the auth.log entries are reaching VPSB. How can I be sure of that? I have set Filebeat's logging level to debug; should I look at /var/lib/filebeat/registry?

Thanks again!

I think there is, because the auth.log entries are reaching VPSB. How can I be sure of that? I have set Filebeat's logging level to debug; should I look at /var/lib/filebeat/registry?

Yes, the registry could include interesting information and with debug-level logs you should definitely be able to find something about /var/log/syslog in the Filebeat logs.
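For example, something like this makes the registry easier to read (assuming Python is available on VPSA):

sudo cat /var/lib/filebeat/registry | python -m json.tool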

Thanks @magnusbaeck,

Checking the registry didn't tell me much, or at least not much that I can understand:

"/var/lib/docker/containers/8fd7dc3825d62c534fec67a3218d681dc53f46437447f01119afd828adaa9052/8fd7dc3825d62c534fec67a3218d681dc53f46437447f01119afd828adaa9052-json.log":{"source":"/var/lib/docker/containers/8fd7dc3825d62c534fec67a3218d681dc53f46437447f01119afd828adaa9052/8fd7dc3825d62c534fec67a3218d681dc53f46437447f01119afd828adaa9052-json.log","offset":3868,"FileStateOS":{"inode":393474,"device":64769}},"/var/lib/docker/containers/9219c552f9b6d576b68ce1c629c18c02df43dac568788a162cafdb0ad2124543/9219c552f9b6d576b68ce1c629c18c02df43dac568788a162cafdb0ad2124543-json.log":{"source":"/var/lib/docker/containers/9219c552f9b6d576b68ce1c629c18c02df43dac568788a162cafdb0ad2124543/9219c552f9b6d576b68ce1c629c18c02df43dac568788a162cafdb0ad2124543-json.log","offset":3883,"FileStateOS":{"inode":393346,"device":64769}},"/var/lib/docker/containers/927f708c48166621e77f1ec8956b0278fb084201b67f46427382181aad286424/927f708c48166621e77f1ec8956b0278fb084201b67f46427382181aad286424-json.log":{"source":"/var/lib/docker/containers/927f708c48166621e77f1ec8956b0278fb084201b67f46427382181aad286424/927f708c48166621e77f1ec8956b0278fb084201b67f46427382181aad286424-json.log","offset":798,"FileStateOS":{"inode":393606,"device":64769}},"/var/lib/docker/containers/a4cfb24accadf7485015aeed432fe66fb7c853d8a81fb44325dda9ea6bde556e/a4cfb24accadf7485015aeed432fe66fb7c853d8a81fb44325dda9ea6bde556e-json.log":{"source":"/var/lib/docker/containers/a4cfb24accadf7485015aeed432fe66fb7c853d8a81fb44325dda9ea6bde556e/a4cfb24accadf7485015aeed432fe66fb7c853d8a81fb44325dda9ea6bde556e-json.log","offset":19503,"FileStateOS":{"inode":393576,"device":64769}},"/var/lib/docker/containers/d68b250754413418d2498463333bb61575e12dd368338a26297f0e4ecc54474b/d68b250754413418d2498463333bb61575e12dd368338a26297f0e4ecc54474b-json.log":{"source":"/var/lib/docker/containers/d68b250754413418d2498463333bb61575e12dd368338a26297f0e4ecc54474b/d68b250754413418d2498463333bb61575e12dd368338a26297f0e4ecc54474b-json.log","offset":46916064,"FileStateOS":{"inode":393351,"device":64769}},"/var/lib/docker/containers/dd999ed938e59515bdd8aed06fd8f22453e8f56aa7e5f2229a26dc10a451280a/dd999ed938e59515bdd8aed06fd8f22453e8f56aa7e5f2229a26dc10a451280a-json.log":{"source":"/var/lib/docker/containers/dd999ed938e59515bdd8aed06fd8f22453e8f56aa7e5f2229a26dc10a451280a/dd999ed938e59515bdd8aed06fd8f22453e8f56aa7e5f2229a26dc10a451280a-json.log","offset":36961735,"FileStateOS":{"inode":393579,"device":64769}},"/var/lib/docker/containers/e50ad5ba3efdee285aa448e921afa30364900877cca50e04462183518dae7173/e50ad5ba3efdee285aa448e921afa30364900877cca50e04462183518dae7173-json.log":{"source":"/var/lib/docker/containers/e50ad5ba3efdee285aa448e921afa30364900877cca50e04462183518dae7173/e50ad5ba3efdee285aa448e921afa30364900877cca50e04462183518dae7173-json.log","offset":2603,"FileStateOS":{"inode":393349,"device":64769}},"/var/lib/docker/containers/e6a4134424b01f084780a40869655b4590292b89c4fe8776350a3afae9d15a4b/e6a4134424b01f084780a40869655b4590292b89c4fe8776350a3afae9d15a4b-json.log":{"source":"/var/lib/docker/containers/e6a4134424b01f084780a40869655b4590292b89c4fe8776350a3afae9d15a4b/e6a4134424b01f084780a40869655b4590292b89c4fe8776350a3afae9d15a4b-json.log","offset":7775,"FileStateOS":{"inode":393586,"device":64769}},"/var/lib/docker/containers/e87bcd40f03e945d0ddbfe1409a1ae1d8d48032871709d58fb06071d8cd9e963/e87bcd40f03e945d0ddbfe1409a1ae1d8d48032871709d58fb06071d8cd9e963-json.log":{"source":"/var/lib/docker/containers/e87bcd40f03e945d0ddbfe1409a1ae1d8d48032871709d58fb06071d8cd9e963/e87bcd40f03e945d0ddb
fe1409a1ae1d8d48032871709d58fb06071d8cd9e963-json.log","offset":2048,"FileStateOS":{"inode":393509,"device":64769}},"/var/log/alternatives.log":{"source":"/var/log/alternatives.log","offset":0,"FileStateOS":{"inode":1181124,"device":64769}},"/var/log/apport.log":{"source":"/var/log/apport.log","offset":0,"FileStateOS":{"inode":1181113,"device":64769}},"/var/log/auth.log":{"source":"/var/log/auth.log","offset":11509224,"FileStateOS":{"inode":1184634,"device":64769}},"/var/log/boot.log":{"source":"/var/log/boot.log","offset":5670,"FileStateOS":{"inode":1179687,"device":64769}},"/var/log/bootstrap.log":{"source":"/var/log/bootstrap.log","offset":0,"FileStateOS":{"inode":1180653,"device":64769}},"/var/log/cloud-init-output.log":{"source":"/var/log/cloud-init-output.log","offset":34850,"FileStateOS":{"inode":1179720,"device":64769}},"/var/log/cloud-init.log":{"source":"/var/log/cloud-init.log","offset":1364638,"FileStateOS":{"inode":1179713,"device":64769}},"/var/log/dpkg.log":{"source":"/var/log/dpkg.log","offset":1859,"FileStateOS":{"inode":1211553,"device":64769}},"/var/log/kern.log":{"source":"/var/log/kern.log","offset":159356,"FileStateOS":{"inode":1179812,"device":64769}},"/var/log/syslog":{"source":"/var/log/syslog","offset":477515,"FileStateOS":{"inode":1187525,"device":64769}}}

Here's the interesting entry:

	"/var/log/syslog": {
		"source": "/var/log/syslog",
		"offset": 477515,
		"FileStateOS": {
			"inode": 1187525,
			"device": 64769
		}
	}

So, Filebeat thinks it's at offset 477515 in /var/log/syslog, or at least in the /var/log/syslog that has/had inode 1187525. Is that the current file? And again, check the Filebeat logs.
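To see whether that is still the current file, you can compare the inode on VPSA, e.g.:

# show the inode of the file Filebeat is supposed to read
ls -i /var/log/syslog
# or, for more detail (inode, device, timestamps)
stat /var/log/syslog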

@magnusbaeck thanks again,

What exactly does that mean? That there is a large time gap between the syslog timestamps and Filebeat?

I stopped Filebeat, removed the registry, and restarted it again; I got this:

{"/var/log/syslog":{"source":"/var/log/syslog","offset":7264,"FileStateOS":{"inode":1180214,"device":64769}}}

EDIT
After starting my docker-compose containers and checking Filebeat's registry, I see this:

{"/var/log/syslog":{"source":"/var/log/syslog","offset":26645,"FileStateOS":{"inode":1180214,"device":64769}}}

However, I do not understand what that means.

I got these logs from Filebeat on VPSA:

2016-10-20T10:58:00+02:00 DBG Same file as before found. Fetch the state.
2016-10-20T10:58:00+02:00 DBG Update existing file for harvesting: /var/log/syslog
2016-10-20T10:58:00+02:00 DBG Not harvesting, file didn't change: /var/log/syslog
2016-10-20T10:58:05+02:00 DBG Flushing spooler because of timeout. Events flushed: 0
2016-10-20T10:58:05+02:00 DBG Start next scan
2016-10-20T10:58:05+02:00 DBG scan path /var/log/syslog
2016-10-20T10:58:05+02:00 DBG Check file for harvesting: /var/log/syslog
2016-10-20T10:58:05+02:00 DBG Same file as before found. Fetch the state.
2016-10-20T10:58:05+02:00 DBG Update existing file for harvesting: /var/log/syslog
2016-10-20T10:58:05+02:00 DBG Not harvesting, file didn't change: /var/log/syslog
2016-10-20T10:58:08+02:00 DBG End of file reached: /var/log/syslog; Backoff now.
2016-10-20T10:58:10+02:00 DBG Start next scan
2016-10-20T10:58:10+02:00 DBG scan path /var/log/syslog
2016-10-20T10:58:10+02:00 DBG Check file for harvesting: /var/log/syslog
2016-10-20T10:58:10+02:00 DBG Same file as before found. Fetch the state.
2016-10-20T10:58:10+02:00 DBG Update existing file for harvesting: /var/log/syslog
2016-10-20T10:58:10+02:00 DBG Not harvesting, file didn't change: /var/log/syslog
2016-10-20T10:58:13+02:00 DBG Flushing spooler because of timeout. Events flushed: 0
2016-10-20T10:58:15+02:00 DBG Start next scan
2016-10-20T10:58:15+02:00 DBG scan path /var/log/syslog
2016-10-20T10:58:15+02:00 DBG Check file for harvesting: /var/log/syslog
2016-10-20T10:58:15+02:00 DBG Same file as before found. Fetch the state.
2016-10-20T10:58:15+02:00 DBG Update existing file for harvesting: /var/log/syslog
2016-10-20T10:58:15+02:00 DBG Not harvesting, file didn't change: /var/log/syslog
2016-10-20T10:58:18+02:00 DBG full line read
2016-10-20T10:58:18+02:00 DBG full line read
2016-10-20T10:58:18+02:00 DBG full line read
2016-10-20T10:58:18+02:00 DBG full line read
2016-10-20T10:58:18+02:00 DBG End of file reached: /var/log/syslog; Backoff now.
2016-10-20T10:58:19+02:00 DBG End of file reached: /var/log/syslog; Backoff now.
2016-10-20T10:58:20+02:00 DBG Flushing spooler because of timeout. Events flushed: 4
2016-10-20T10:58:20+02:00 DBG Publish: {
"@timestamp": "2016-10-20T08:58:18.905Z",
"beat": {
"hostname": "VPSA",
"name": "VPSA"
},
"count": 1,
"fields": null,
"input_type": "log",
"message": "Oct 20 10:58:04 VPSA docker/b4ad179f0381[9331]: message repeated 2 times: [ #033[90m DEBUG#033[0m Handle PULL_DATA #033[90mGatewayID#033[0m=FFFEB827EBAA15FD #033[90madapter#033[0m=gateway-semtech]",
"offset": 78662,
"source": "/var/log/syslog",
"type": "syslog"
}
2016-10-20T10:58:20+02:00 DBG Publish: {
"@timestamp": "2016-10-20T08:58:18.906Z",
"beat": {
"hostname": "VPSA",
"name": "VPSA"
},
"count": 1,
"fields": null,
"input_type": "log",
"message": "Oct 20 10:58:11 VPSA docker/b4ad179f0381[9331]: #033[90m DEBUG#033[0m Handle PUSH_DATA #033[90mGatewayID#033[0m=FFFEB827EBAA15FD #033[90madapter#033[0m=gateway-semtech",
"offset": 78887,
"source": "/var/log/syslog",
"type": "syslog"
}
2016-10-20T10:58:20+02:00 DBG Publish: {
"@timestamp": "2016-10-20T08:58:18.906Z",
"beat": {
"hostname": "VPSA",
"name": "VPSA"
},
"count": 1,
"fields": null,
"input_type": "log",
"message": "Oct 20 10:58:11 VPSA docker/b4ad179f0381[9331]: #033[90m DEBUG#033[0m Handle stat #033[90mGatewayID#033[0m=FFFEB827EBAA15FD #033[90madapter#033[0m=gateway-semtech",
"offset": 79083,
"source": "/var/log/syslog",
"type": "syslog"
}
2016-10-20T10:58:20+02:00 DBG Publish: {
"@timestamp": "2016-10-20T08:58:18.906Z",
"beat": {
"hostname": "VPSA",
"name": "VPSA"
},
"count": 1,
"fields": null,
"input_type": "log",
"message": "Oct 20 10:58:14 VPSA docker/b4ad179f0381[9331]: #033[90m DEBUG#033[0m Handle PULL_DATA #033[90mGatewayID#033[0m=FFFEB827EBAA15FD #033[90madapter#033[0m=gateway-semtech",
"offset": 79279,
"source": "/var/log/syslog",
"type": "syslog"
}

So it is sending the events, as far as I can tell, but what does "Flushing spooler because of timeout. Events flushed: 4" mean? Is it discarding 4 messages?

thanks again

I'm not sure what's going on. Hopefully one of the Beats folks can chime in.

Hi @magnusbaeck,

I think the problem is that Filebeat is flushing away all my interesting logs... maybe @ruflin can help?

I think this issue might be related to how Filebeat is started on the client. I have changed its init ordering in rc.d (start sequence 95, stop sequence 10, i.e. start it late enough that syslog is already up) with
sudo update-rc.d filebeat defaults 95 10

I read about that in this post, and it does work!

2016-10-20T15:59:51+02:00 DBG  Flushing spooler because of timeout. Events flushed: 3
2016-10-20T15:59:51+02:00 DBG  Publish: {
  "@timestamp": "2016-10-20T13:59:44.233Z",
  "beat": {
    "hostname": "VPSA",
    "name": "VPSA"
  },
  "count": 1,
  "fields": null,
  "input_type": "log",
  "message": "Oct 20 15:59:39 artguard GW[1401]: message repeated 2 times: [ #033[90m DEBUG#033[0m Handle PULL_DATA                         #033[90mGatewayID#033[0m=#033[90madapter#033[0m=gateway-semtech]",
  "offset": 17654,
  "source": "/var/log/syslog",
  "type": "syslog"
}

Filebeat uses a spooler to collect logs from potentially multiple log files into micro-batches to be published. The "flush" message comes from the spooler. Once the spooler flushes its buffers, the micro-batch is forwarded to the output, which tries to send the logs. Unfortunately, the output module's log messages are missing from your Filebeat log.
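If you want to tune that behaviour, the spooler's batch size and flush interval are set in filebeat.yml; if I remember the option names correctly, it looks roughly like this:

filebeat:
  spool_size: 2048    # events buffered before a micro-batch is published
  idle_timeout: 5s    # flush the spooler after this much time even if the buffer isn't full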

What should I do to include them?

Did you use -d '*' to enable all debug selectors?
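For example, running Filebeat in the foreground for a quick test:

filebeat -e -c /etc/filebeat/filebeat.yml -d "*"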

Did you check filebeat being connected to logstash (e.g. netstat)?
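For example, something along these lines on VPSA (5044 being the Logstash port from your config):

sudo netstat -tnp | grep 5044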

Yes I did. The issue was, as I mentioned, that I had to update the rc.d ordering for Filebeat.

That was not possible, since Filebeat runs on a different VPS (VPSA), where I don't have Logstash installed.

But your Filebeat config file states the Logstash host as ipVPSB:5044.

Yes it does; Filebeat is forwarding the logs to Logstash on VPSB :wink:

So Logstash on VPS B is getting the log messages from Filebeat on VPS A, and netstat shows Filebeat on VPS A being connected to Logstash on VPS B?

Which exact command do you use to start Filebeat on VPS A?

The debug logs of the output should be included in the Filebeat log output.

Exactly!

I used sudo service filebeat start, but as I mentioned in the post above, the problem was that Filebeat was starting on VPSA before syslog, and it needs to start after syslog so that it can read the file and capture the logs.

Filebeat's scanner will check every now and then whether a log file has become available. Or is the problem that Filebeat was using the syslog logger for its own logging?

Is the problem resolved?