Docker container custom logs - how to process them into Kibana

I have many containers running on a server. One of them is nginx: I enabled the Filebeat nginx module and also created /etc/logstash/conf.d/nginx.conf, taken from https://www.elastic.co/guide/en/logstash/current/logstash-config-for-filebeat-modules.html.

Logs from nginx show up in Kibana just fine, but that is only 1 container out of 10.

Logs from the other containers are not shown in Kibana. What modules should I enable for them if they are just custom applications running as containers, producing arbitrary logs?

Please advise which configs I should change for Filebeat/Logstash/Elasticsearch.

How I use ELK:
one server running the containers, with Filebeat on it
a second server running Logstash, Elasticsearch, and Kibana

Example of custom logs generated by other containers:
[pid: 23072|app: 0|req: 11278/21512] 127.0.0.1 () {32 vars in 367 bytes} [Sat Nov 17 18:59:09 2018] GET /ht/ => generated 261 bytes in 95 msecs (HTTP/1.1 200) 6 headers in 254 bytes (1 switches on core 0)
[pid: 23072|app: 0|req: 11279/21513] 127.0.0.1 () {32 vars in 367 bytes} [Sat Nov 17 19:01:09 2018] GET /ht/ => generated 261 bytes in 516 msecs (HTTP/1.1 200) 6 headers in 254 bytes (1 switches on core 0)
[pid: 23072|app: 0|req: 11280/21514] 127.0.0.1 () {32 vars in 367 bytes} [Sat Nov 17 19:03:09 2018] GET /ht/ => generated 261 bytes in 517 msecs (HTTP/1.1 200) 6 headers in 254 bytes (1 switches on core 0)
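These look like uWSGI request logs. A Logstash grok filter along these lines, a rough sketch written against the three sample lines above (field names are illustrative and the pattern is untested), could pull the useful fields out of them:

filter {
  grok {
    match => {
      "message" => "\[pid: %{NUMBER:pid}\|app: %{NUMBER:app}\|req: %{NUMBER:req_seq}/%{NUMBER:req_total}\] %{IP:client_ip} \(%{DATA:remote_user}\) \{%{NUMBER:var_count} vars in %{NUMBER:var_bytes} bytes\} \[%{HTTPDERROR_DATE:timestamp}\] %{WORD:http_method} %{URIPATHPARAM:uri} => generated %{NUMBER:resp_bytes} bytes in %{NUMBER:resp_time_ms} msecs \(HTTP/%{NUMBER:http_version} %{NUMBER:status}\)"
    }
  }
}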

My filebeat.yml inputs section looks like this:

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.

# Below are the input specific configurations.

- type: docker
  combine_partial: true
  containers:
    path: "/var/lib/docker/containers/"
    stream: "stdout"
    ids:
      - "*"
  tags: ["json"]

  # Change to true to enable this input configuration.
  enabled: true

Please help me

That's exactly how I set up my Filebeat, but the logs for containers (except the container where nginx is running) are not sent to Logstash. And even if they were, do I need to set up something specific in Logstash or Elasticsearch to process them and send them to Kibana? Nginx logs from its container are sent to Kibana; only the logs from the other containers are not. Thank you

Yes. If your architecture looks like:

Filebeat -> Logstash -> Elasticsearch

Then you will need to write a Logstash pipeline that includes the Beats input and Elasticsearch output in order for events to flow through Logstash.
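A minimal sketch of such a pipeline (port 5044 is the conventional Beats port; the hosts value and index pattern are assumptions to adapt to your setup, and a grok filter like the one sketched above would slot into the filter block):

input {
  beats {
    port => 5044
  }
}

filter {
  # parsing for your custom container logs goes here
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}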

Alternatively, if you don't need to do any complex processing in Logstash, you could use this architecture instead:

Filebeat -> Elasticsearch

since Beats can talk directly to Elasticsearch.
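A sketch of the filebeat.yml change for that model (the hostname is a placeholder; remember to remove or comment out output.logstash, since Filebeat allows only one output at a time):

output.elasticsearch:
  hosts: ["your-elasticsearch-host:9200"]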

Thank you very much for your answer. I am considering switching to the second model you mentioned:

Filebeat -> Elasticsearch

If I switched to that model, would I need some config or pipeline changes so that the data is processed in Elasticsearch?

As I showed in my first post, the logs I need to process are quite simple.
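If you do go Filebeat -> Elasticsearch, the parsing can move into an Elasticsearch ingest pipeline. A rough sketch, reusing the grok pattern sketched earlier in the thread (the pipeline name is made up for illustration, and the doubled backslashes are just JSON escaping):

PUT _ingest/pipeline/uwsgi-logs
{
  "description": "Parse uWSGI-style container logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "\\[pid: %{NUMBER:pid}\\|app: %{NUMBER:app}\\|req: %{NUMBER:req_seq}/%{NUMBER:req_total}\\] %{IP:client_ip} \\(%{DATA:remote_user}\\) \\{%{NUMBER:var_count} vars in %{NUMBER:var_bytes} bytes\\} \\[%{HTTPDERROR_DATE:timestamp}\\] %{WORD:http_method} %{URIPATHPARAM:uri} => generated %{NUMBER:resp_bytes} bytes in %{NUMBER:resp_time_ms} msecs \\(HTTP/%{NUMBER:http_version} %{NUMBER:status}\\)"
        ]
      }
    }
  ]
}

Filebeat would then be pointed at that pipeline:

output.elasticsearch:
  hosts: ["your-elasticsearch-host:9200"]
  pipeline: "uwsgi-logs"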

It looks like the logs are not even sent to Logstash; I found these errors for many containers:

2018-11-23T15:47:27.742+0100 INFO log/harvester.go:251 Harvester started for file: /var/lib/docker/containers/73c7e9a159df3cfd60fc844ee0cc2e360a0d4018eb3e2b768553d453090aa357/73c7e9a159df3cfd60fc844ee0cc2e360a0d4018eb3e2b768553d453090aa357-json.log
2018-11-23T15:47:27.742+0100 ERROR log/harvester.go:278 Read line error: invalid CRI log format; File: /var/lib/docker/containers/73c7e9a159df3cfd60fc844ee0cc2e360a0d4018eb3e2b768553d453090aa357/73c7e9a159df3cfd60fc844ee0cc2e360a0d4018eb3e2b768553d453090aa357-json.log

What do you think could be the cause? Could it be that I don't have pipelines configured in Logstash?

I stopped Filebeat, removed /var/lib/filebeat/registry, and started Filebeat again, but I still get this error. Any help with that?

I have noticed that the logs which are not sent via Filebeat start with "[". Do you think that could cause the errors?

It looks like the CRI issue has been resolved by adding the following lines to /etc/filebeat/filebeat.yml:

cri.parse_flags: true
combine_partial: true
close_inactive: 48h
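For context, a sketch of how those options would sit inside the docker input in filebeat.yml; this nesting is my assumption based on the Filebeat 6.x docker input options, so verify it against the reference for your version:

- type: docker
  enabled: true
  combine_partial: true
  # parse CRI log flags such as the partial-line marker
  # (the setting reported above as fixing the "invalid CRI log format" error)
  cri.parse_flags: true
  # keep harvesters open longer for containers that log infrequently
  close_inactive: 48h
  containers:
    path: "/var/lib/docker/containers/"
    stream: "stdout"
    ids:
      - "*"
  tags: ["json"]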
