Help needed: Filebeat container input > Elasticsearch > Kibana

Hi all,

Docker home user here who needs some help.
Architecture:

  • Host OS: Windows 10 Pro
  • Docker for Windows latest version.
  • I use docker compose managed through dockstation.io

In the attached picture you can see which containers I'm running, so that you can form an opinion about my usage
(personal media server).

In the beginning I wanted a web-based "task manager" look-alike. That's how I discovered Telegraf, InfluxDB and Grafana.
Then I wanted a reverse proxy for my comfort.
Then I wanted HTTPS.

Now I want to establish a log analysis suite.

Using Portainer I observed that all the container logs are in the following location:
/var/lib/docker/containers/<containerID>/<containerID>-json.log

So I managed to make this work with EFK. Filebeat collects the logs and ships them to Elasticsearch, and Kibana lets me look at them.

The only problem I have is that the messages are "too plain"?

Here is my filebeat.yml content:

filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/*/*.log'

processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"
- decode_json_fields:
    fields: ["message"]
    target: "json"
    overwrite_keys: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false

Here is what my logs look like:
[screenshot: InfluxDB]

[screenshot: Ombi]

[screenshot: Tautulli and Ouroboros]

As you can see the structure of the logs is not the same.

Ouroboros example of a message from Kibana:

{
  "_index": "filebeat-7.3.0-2019.08.22",
  "_type": "_doc",
  "_id": "tDOBu2wBoGS74nL1zKc8",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2019-08-22T22:45:36.101Z",
    "log": {
      "offset": 442,
      "file": {
        "path": "/var/lib/docker/containers/5305fbb22a8674f1285d853b2b742172d39a28300e424c16d8e5be761ba680f1/5305fbb22a8674f1285d853b2b742172d39a28300e424c16d8e5be761ba680f1-json.log"
      }
    },
    "stream": "stderr",
    "message": "2019-08-23 01:45:35 : INFO : dockerclient : bazarr will be updated",
    "input": {
      "type": "container"
    },
    "ecs": {
      "version": "1.0.1"
    },
    "host": {
      "name": "filebeat"
    },
    "agent": {
      "id": "c7ec1ac4-9f54-4666-8ad9-bc2da0278b35",
      "version": "7.3.0",
      "type": "filebeat",
      "ephemeral_id": "985e66a2-7a1f-4a2a-9a41-877f92b71e51",
      "hostname": "filebeat"
    },
    "container": {
      "id": "5305fbb22a8674f1285d853b2b742172d39a28300e424c16d8e5be761ba680f1",
      "image": {
        "name": "pyouroboros/ouroboros:latest"
      },
      "name": "ouroboros",
      "labels": {
        "maintainers": "dirtycajunrice,circa10a",
        "com_docker_compose_config-hash": "018ff98fa04dce7af3fba51f871df995215dacc2ea57733faf4458aa5a4774ff",
        "com_docker_compose_container-number": "1",
        "com_docker_compose_oneoff": "False",
        "com_docker_compose_project": "media-server",
        "com_docker_compose_service": "ouroboros",
        "com_docker_compose_version": "1.24.1"
      }
    }
  },
  "fields": {
    "@timestamp": [
      "2019-08-22T22:45:36.101Z"
    ],
    "suricata.eve.timestamp": [
      "2019-08-22T22:45:36.101Z"
    ]
  },
  "sort": [
    1566513936101
  ]
}

Compared with my previous trials based on Logstash, I'm quite happy with the simplicity of the Filebeat solution.

For the moment I only have a message field with the raw log lines in Kibana; I'm assuming that the message field should be split into more fields (level, etc.).

What do you think, is there a way to make all this uniform?

Here's the original tutorial that I followed to make this work:

https://www.sarulabs.com/post/5/2019-08-12/sending-docker-logs-to-elasticsearch-and-kibana-with-filebeat.html

I played around with some settings.
This is my filebeat.yml now:

filebeat.inputs:
- type: container
  paths: 
    - '/var/lib/docker/containers/*/*.log'
  
processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"
- decode_json_fields:
    fields: ["message"]
    target: "json"
    max_depth: 100
    overwrite_keys: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false

And in Kibana it actually works (partially).

 @timestamp	Aug 30, 2019 @ 14:39:17.206
   	t _id	DD9S4mwBiD1FxAFTqLS-
   	t _index	filebeat-7.3.0-2019.08.30
   	# _score	 - 
   	t _type	_doc
   	t agent.ephemeral_id	aa899855-e2ce-4f6d-8c83-1a1bb941ed48
   	t agent.hostname	filebeat
   	t agent.id	c7ec1ac4-9f54-4666-8ad9-bc2da0278b35
   	t agent.type	filebeat
   	t agent.version	7.3.0
   	t container.id	cb69754a13fff51834d95ae275618e00c832141b13b4d4f1f433e74bb8fffb5c
   	t container.image.name	docker.elastic.co/kibana/kibana:7.3.0
   	t container.labels.com_docker_compose_config-hash	3a97225bf71a6f3e68f071ad8be827ed7bee1426b4eae63d7f6e7d30c16e7cc2
   	t container.labels.com_docker_compose_container-number	1
   	t container.labels.com_docker_compose_oneoff	False
   	t container.labels.com_docker_compose_project	media-server
   	t container.labels.com_docker_compose_service	kibana
   	t container.labels.com_docker_compose_version	1.24.1
   	t container.labels.license	Elastic License
   	t container.labels.org_label-schema_build-date	20190305
   	t container.labels.org_label-schema_license	GPLv2
   	t container.labels.org_label-schema_name	kibana
   	t container.labels.org_label-schema_schema-version	1.0
   	t container.labels.org_label-schema_url	https://www.elastic.co/products/kibana
   	t container.labels.org_label-schema_vcs-url	https://github.com/elastic/kibana
   	t container.labels.org_label-schema_vendor	Elastic
   	t container.labels.org_label-schema_version	7.3.0
   	t container.labels.traefik_backend	kibana
   	t container.labels.traefik_frontend_rule	Host:kibana.localhost
   	t container.name	kibana
   	t ecs.version	1.0.1
   	t host.name	filebeat
   	t input.type	container
   	t json.@timestamp	2019-08-30T11:39:16Z
   	t json.message	PUT /api/saved_objects/index-pattern/64c7acd0-c4ab-11e9-a8f8-a5993efc0d40 200 971ms - 9.0B
   	t json.method	put
   	# json.pid	1
   	t json.req.headers.accept	*/*
   	t json.req.headers.accept-encoding	gzip, deflate
   	t json.req.headers.accept-language	en-US,en;q=0.9,ro;q=0.8
   	t json.req.headers.connection	keep-alive
   	t json.req.headers.content-length	198475
   	t json.req.headers.content-type	application/json
   	t json.req.headers.dnt	1
   	t json.req.headers.host	192.168.0.55:5601
   	t json.req.headers.kbn-version	7.3.0
   	t json.req.headers.origin	http://192.168.0.55:5601
   	t json.req.headers.referer	http://192.168.0.55:5601/app/kibana
   	t json.req.headers.user-agent	Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.56 Safari/537.36
   	t json.req.method	put
   	t json.req.referer	http://192.168.0.55:5601/app/kibana
   	t json.req.remoteAddress	172.18.0.1
   	t json.req.url	/api/saved_objects/index-pattern/64c7acd0-c4ab-11e9-a8f8-a5993efc0d40
   	t json.req.userAgent	172.18.0.1
   	# json.res.contentLength	9
   	# json.res.responseTime	971
   	# json.res.statusCode	200
   	# json.statusCode	200
   	t json.tags	
   	t json.type	response
   	t log.file.path	/var/lib/docker/containers/cb69754a13fff51834d95ae275618e00c832141b13b4d4f1f433e74bb8fffb5c/cb69754a13fff51834d95ae275618e00c832141b13b4d4f1f433e74bb8fffb5c-json.log
   	# log.offset	6,710,722
   	t message	{"type":"response","@timestamp":"2019-08-30T11:39:16Z","tags":[],"pid":1,"method":"put","statusCode":200,"req":{"url":"/api/saved_objects/index-pattern/64c7acd0-c4ab-11e9-a8f8-a5993efc0d40","method":"put","headers":{"host":"192.168.0.55:5601","connection":"keep-alive","content-length":"198475","dnt":"1","kbn-version":"7.3.0","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.56 Safari/537.36","content-type":"application/json","accept":"*/*","origin":"http://192.168.0.55:5601","referer":"http://192.168.0.55:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.9,ro;q=0.8"},"remoteAddress":"172.18.0.1","userAgent":"172.18.0.1","referer":"http://192.168.0.55:5601/app/kibana"},"res":{"statusCode":200,"responseTime":971,"contentLength":9},"message":"PUT /api/saved_objects/index-pattern/64c7acd0-c4ab-11e9-a8f8-a5993efc0d40 200 971ms - 9.0B"}
   	t stream	stdout
   	 suricata.eve.timestamp	Aug 30, 2019 @ 14:39:17.206

But the json fields are missing for the rest of my containers. Example:

 @timestamp	Aug 30, 2019 @ 14:39:20.005
   	t _id	Dj9S4mwBiD1FxAFTtbTe
   	t _index	filebeat-7.3.0-2019.08.30
   	# _score	 - 
   	t _type	_doc
   	t agent.ephemeral_id	aa899855-e2ce-4f6d-8c83-1a1bb941ed48
   	t agent.hostname	filebeat
   	t agent.id	c7ec1ac4-9f54-4666-8ad9-bc2da0278b35
   	t agent.type	filebeat
   	t agent.version	7.3.0
   	t container.id	5664b4a79cb1b5a2dd4729daf0f5ef7399f03803a975718c6452ccccf6695b87
   	t container.image.name	kapacitor:latest
   	t container.labels.com_docker_compose_config-hash	fb9ec04b91d9b41b59394211c6d8263f56049a70a694900788c80903f7ddbb17
   	t container.labels.com_docker_compose_container-number	1
   	t container.labels.com_docker_compose_oneoff	False
   	t container.labels.com_docker_compose_project	media-server
   	t container.labels.com_docker_compose_service	kapacitor
   	t container.labels.com_docker_compose_version	1.24.1
   	t container.name	kapacitor
   	t ecs.version	1.0.1
   	t host.name	filebeat
   	t input.type	container
   	t log.file.path	/var/lib/docker/containers/5664b4a79cb1b5a2dd4729daf0f5ef7399f03803a975718c6452ccccf6695b87/5664b4a79cb1b5a2dd4729daf0f5ef7399f03803a975718c6452ccccf6695b87-json.log
   	# log.offset	60,859,748
   	t message	ts=2019-08-30T11:39:20.005Z lvl=info msg="http request" service=http host=172.18.0.2 username=- start=2019-08-30T11:39:20.0043163Z method=POST uri=/write?consistency=&db=_internal&precision=ns&rp=monitor protocol=HTTP/1.1 status=204 referer=- user-agent=InfluxDBClient request-id=ce2b4ca6-cb1a-11e9-80e4-000000000000 duration=849.2µs
   	t stream	stderr
   	 suricata.eve.timestamp	Aug 30, 2019 @ 14:39:20.005

I feel that I'm very close but can't find the last bit of information to make this work entirely.

Still googling around, I found out about JSON validation (I used this one). Kibana's message passes the check, but the logs of other containers (anything besides the EFK stack) don't.

Example:

Message copied from Kibana:

[httpd] 172.18.0.14 - pihole [30/Aug/2019:22:44:51 +0300] "POST /write?db=pihole HTTP/1.1" 204 0 "-" "python-requests/2.22.0" a1c0058f-cb5e-11e9-981e-0242ac120005 66604

What the JSON check says:

* **Error:** Strings should be wrapped in double quotes. *[Code 17, Structure 2]*
* **Error:** Strings should be wrapped in double quotes. *[Code 17, Structure 5]*
* **Error:** Multiple JSON root elements *[Code 22, Structure 6]*
* **Error:** Expecting comma or ], not string. *[Code 12, Structure 8]*
* **Error:** Strings should be wrapped in double quotes. *[Code 17, Structure 8]*
* **Error:** Expecting comma or ], not colon. *[Code 10, Structure 9]*
* **Error:** Expecting comma or ], not colon. *[Code 12, Structure 11]*
* **Error:** Expecting comma or ], not colon. *[Code 12, Structure 13]*
* **Error:** Expecting comma or ], not string. *[Code 12, Structure 15]*
* **Error:** Strings should be wrapped in double quotes. *[Code 17, Structure 15]*
* **Error:** Strings should be wrapped in double quotes. *[Code 17, Structure 22]*
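That pihole line is not JSON at all; it is an httpd-style access log, which is a job for the dissect processor rather than decode_json_fields. A rough sketch, untested and with field names I made up, that should tokenize the example line above:

```yaml
processors:
  # Only dissect lines that look like the [httpd] access log format.
  - dissect:
      when:
        contains:
          message: "[httpd]"
      field: "message"
      target_prefix: "httpd"
      tokenizer: '[httpd] %{client_ip} %{ident} %{user} [%{timestamp}] "%{method} %{uri} %{http_version}" %{status} %{bytes} "%{referrer}" "%{user_agent}" %{request_id} %{response_time}'
```

Each `%{...}` key becomes a field under `httpd.*`; the tokenizer has to match the line exactly, so it would need adjusting if the format varies.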

Latest filebeat.yml settings:

filebeat.inputs:
- type: container
  format: auto
  paths: 
    - '/var/lib/docker/containers/*/*.log'

processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"
- decode_json_fields:
    fields: ["message"]
    target: "json"
    overwrite_keys: true
    process_array: true
    max_depth: 100

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false

Hi @Iosif_Zamfirescu and welcome :slight_smile:

Nice deployment you have there :slight_smile:

I don't have a complete solution for your logs, but I can recommend some options that you can explore.

First of all, one of the problems you seem to have is that you are applying the same processors to all logs, even though they don't all have the same format. There are some options for applying different configurations depending on the container:

  • Processors support conditions, so they are only applied to specific events. This would allow you, for example, to use the decode_json_fields processor for JSON logs and dissect for the others, e.g.:
processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"
- decode_json_fields:
    when.or:
        - container.name.contains: kibana
        - container.name.contains: elasticsearch
    ...
- dissect:
    when.container.name.contains: influxdb
    ...
  • Filebeat includes an autodiscover provider for Docker. Autodiscover lets you detect running services and provide configurations for them depending on conditions. You can find some examples in the documentation. An interesting related feature is hints-based autodiscover, which allows you to parametrize the configuration using container labels.

Some other ideas that could be interesting in your case:

  • Filebeat offers modules to process the logs of known services. There is, for example, one for Kibana logs.
  • Elasticsearch nodes can act as ingest nodes, which can process events as they are received. For that they use ingest pipelines, which you can define yourself and which offer more processors than Filebeat. Filebeat modules use ingest pipelines to process logs; you can take a look at their code for examples, like here for the Apache access logs one.
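To make the ingest-pipeline idea concrete, here is a hedged sketch (pipeline name and pattern are mine, untested) that could be created with `PUT _ingest/pipeline/influxdb-logs` in the Kibana Dev Tools console and then referenced from a Filebeat input via its `pipeline` option:

```json
{
  "description": "Illustrative pipeline: split InfluxDB-style logfmt messages",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "ts=%{ts} lvl=%{lvl} msg=\"%{msg}\" %{rest}"
      }
    }
  ]
}
```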

I hope these links help you.

And if you like Filebeat, consider trying the other Beats to take more advantage of the Elastic Stack :wink: They all follow the Elastic Common Schema, which allows you to easily correlate data from different sources.

Hello and thank you for highlighting all the resources. I'll re-check them one by one.
Can you please tell me if I understood you correctly?
Although I never touched/modified the default Docker logging settings, Docker's default JSON logs are not exactly up to the JSON standard. Right?
So I have to use the JSON option for the EFK stack and a different option for the rest of the containers?
To be perfectly blunt, I chose EFK precisely because I was trying to avoid customizing things per container.
I intentionally picked the InfluxDB log out of all the logs, thinking that it should have proper syntax, considering that it's an official image from a big company (vs. the other containers in my media server).

Maybe I should take this to the Docker community? Or is it more container-related than Docker-related?

I would be curious how other people solved this problem as I find it hard to believe that I'm the guy with the most exotic/rare setup.

Thanks.

Docker logs are valid standard JSON.

The container input unwraps this JSON and puts the message in the message field; these are the logs you see in Kibana, not the original JSON logs.

The unwrapped message can itself be a JSON document, as is the case with the Kibana logs in your screenshot. Or not, as with the InfluxDB logs:

As they have different formats, different configurations are needed to analyze them.
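To illustrate the unwrapping with a made-up sample line (the timestamp here is invented), the json-file driver writes one JSON object per line, and the container input extracts the `log` value into `message`:

```json
{"log":"2019-08-23 01:45:35 : INFO : dockerclient : bazarr will be updated\n","stream":"stderr","time":"2019-08-22T22:45:35.000000000Z"}
```

So the outer object is valid JSON; whether the inner `log` text is also JSON depends entirely on the application that wrote it.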

I think that when using docker the most convenient way to collect logs is using autodiscover, along with modules when possible.

You said: "The container input unwraps this JSON and puts the message in the message field, these are the logs you see in Kibana, not the original JSON logs."

Would it be possible to make all logs look the same as Kibana's? Maybe I should try without decode_json_fields?

Later edit: Autodiscover got too complicated for me. Would you or anyone be so kind as to help me out with a config file?

Many thanks.

The log format ultimately depends on the applications, and every application has its own format options.

You can probably find examples and troubleshooting in other topics here on Discuss, but let me provide a couple of starting examples. (Please take into account that I haven't tested these examples.)

To use hints-based autodiscover you would need a configuration like this one:

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]

logging.json: true
logging.metrics.enabled: false

By default this collects the logs of any container running on the host and adds Docker metadata, but it doesn't do any special processing.
To do specific processing, you need to add specific hints as described here, using Docker labels. For example, you can add this label to the Kibana container so Filebeat uses the kibana module to handle its logs:

co.elastic.logs/module=kibana

Hints-based autodiscover is quite useful if you want to decouple the configuration of Filebeat from the logging settings of specific containers, as it lets you add configuration without modifying the Filebeat configuration, and it collects logs from all containers by default.
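Since the deployment is managed with docker compose, the hint can be attached as a compose label. A minimal sketch (untested, assuming the service is named kibana as in the screenshots):

```yaml
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.0
    labels:
      # Hint for Filebeat's hints-based autodiscover.
      co.elastic.logs/module: kibana
```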

But if you want to define everything in the filebeat configuration, without hints, you can start with a configuration like this one:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: kibana
          config:
            - module: kibana
        - condition:
            contains:
              docker.container.image: influxdb
          config:
            - type: container
              paths:
                - "/var/lib/docker/containers/${data.container.id}/*-json.log"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]

logging.json: true
logging.metrics.enabled: false

As you can see, each template is composed of a condition and a configuration block. The configuration block can be a module or an input definition, each including its own options, processors and pipelines.
You would need to add templates with conditions matching all the containers whose logs you want to collect. This allows more fine-grained configuration, but for cases with many configurations I find it more cumbersome than hints-based autodiscover.

Many thanks for this additional piece of info. To be honest, this is exactly what I was trying to avoid: setting things up per container. Trying to make this work now.

Stuck again. filebeat.yml content:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: kibana
          config:
            - module: kibana
        - condition:
            contains:
              docker.container.image: influxdb-telegraf
          config:   
            - type: container
            fields_under_root: true
            fields: "ts", "lvl", "msg"
              paths:
                - "/var/lib/docker/containers/${data.container.id}/*-json.log"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]

logging.json: true
logging.metrics.enabled: false

Last received influxdb log message:

ts=2019-09-02T08:18:31.364142Z lvl=info msg="Snapshot for path written" log_id=0Hclu0ol000 engine=tsm1 trace_id=0HdGht0l000 op_name=tsm1_cache_snapshot path=/root/.influxdb/data/telegraf/autogen/32 duration=320.173ms
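For a logfmt-style line like this one, a dissect tokenizer could pull out the leading keys. A hedged sketch, untested; note that dissect is a processor, so inside an autodiscover template it would go under a `processors` key of the input config:

```yaml
- type: container
  paths:
    - "/var/lib/docker/containers/${data.container.id}/*-json.log"
  processors:
    - dissect:
        field: "message"
        target_prefix: "dissect"
        # Captures ts, lvl and the quoted msg; the remaining key=value pairs land in rest.
        tokenizer: 'ts=%{ts} lvl=%{lvl} msg="%{msg}" %{rest}'
```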

I think that once I'm OK with one app (once I get the hang of it) I can make them all work the same way. Phew, I have like 20 Elastic tabs in my Chrome again :frowning:

Later edit: trying my luck with dissect:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: kibana
          config:
            - module: kibana
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"
        - condition:
            contains:
              docker.container.image: influxdb
          config:   
            - type: container
              paths:
                - "/var/lib/docker/containers/${data.container.id}/*-json.log"
              dissect:
                field: "message"
                target_prefix: "dissect"
                tokenizer: "%{ts} %{lvl} %{msg}"
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"

The data from these two containers is available in Kibana. Kibana looks great, InfluxDB not so great:

"message": "ts=2019-09-02T12:47:10.186970Z lvl=info msg=\"Executing query\" log_id=0HdVWcuG000 service=query query=\"SHOW SUBSCRIPTIONS\"",

Managed to set the InfluxDB logging format to JSON. Now the message passes the JSON check, but I'm still not able to parse it in Kibana:
{"lvl":"info","ts":"2019-09-03T08:13:30.495527Z","msg":"Executing query","log_id":"0HeYD6z0000","service":"query","query":"SHOW SUBSCRIPTIONS"}

Latest filebeat.yml content:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: kibana
          config:
            - module: kibana
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"
        - condition:
            contains:
              docker.container.image: influxdb
          config:   
            - type: container
              paths:
                - "/var/lib/docker/containers/${data.container.id}/*-json.log"
            - decode_json_fields:
                fields: "message"
                target: ""
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false

If I'm not able to parse a JSON log, how could I start parsing a non-JSON log :smile: ?

One hour later, I managed to solve the InfluxDB logs.
JSON from Kibana:
{
  "_index": "filebeat-7.3.1-2019.09.03",
  "_type": "_doc",
  "_id": "PARy9mwBZBvBsK6XTApT",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2019-09-03T09:26:20.052Z",
    "stream": "stderr",
    "json": {
      "service": "query",
      "query": "CREATE DATABASE telegraf",
      "message": "",
      "lvl": "info",
      "ts": "2019-09-03T09:26:20.047520Z",
      "msg": "Executing query",
      "log_id": "0HebzCz0000"
    },
    "input": {
      "type": "container"
    },
    "docker": {
      "container": {
        "labels": {
          "com_docker_compose_container-number": "1",
          "com_docker_compose_oneoff": "False",
          "com_docker_compose_project": "media-server",
          "com_docker_compose_service": "influxdb-telegraf",
          "com_docker_compose_version": "1.24.1",
          "traefik_backend": "influxdb-telegraf",
          "traefik_frontend_rule": "Host:influxdb-telegraf.localhost",
          "com_docker_compose_config-hash": "b31805ca54b49474b6498f31157c1e6d490adf90df8742a6c364a10e121d4bf4"
        }
      }
    },
    "ecs": {
      "version": "1.0.1"
    },
    "agent": {
      "hostname": "filebeat",
      "id": "bc0a02c8-ec56-40b8-a640-12516809548a",
      "version": "7.3.1",
      "type": "filebeat",
      "ephemeral_id": "09dee9e2-a5cf-44f2-9055-b985de09022c"
    },
    "log": {
      "offset": 5097,
      "file": {
        "path": "/var/lib/docker/containers/030ce67bdd2995ee1f8cabb59ba442594389c675131bcdf59010a5fbf0f141bb/030ce67bdd2995ee1f8cabb59ba442594389c675131bcdf59010a5fbf0f141bb-json.log"
      }
    },
    "container": {
      "id": "030ce67bdd2995ee1f8cabb59ba442594389c675131bcdf59010a5fbf0f141bb",
      "labels": {
        "com_docker_compose_oneoff": "False",
        "com_docker_compose_project": "media-server",
        "com_docker_compose_service": "influxdb-telegraf",
        "com_docker_compose_version": "1.24.1",
        "traefik_backend": "influxdb-telegraf",
        "traefik_frontend_rule": "Host:influxdb-telegraf.localhost",
        "com_docker_compose_config-hash": "b31805ca54b49474b6498f31157c1e6d490adf90df8742a6c364a10e121d4bf4",
        "com_docker_compose_container-number": "1"
      },
      "name": "influxdb-telegraf",
      "image": {
        "name": "influxdb:latest"
      }
    },
    "host": {
      "name": "filebeat"
    }
  },
  "fields": {
    "@timestamp": [
      "2019-09-03T09:26:20.052Z"
    ],
    "suricata.eve.timestamp": [
      "2019-09-03T09:26:20.052Z"
    ]
  },
  "sort": [
    1567502780052
  ]
}
My filebeat.yml:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: kibana
          config:
            - module: kibana
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"
        - condition:
            contains:
              docker.container.image: influxdb
          config:   
            - type: container
              paths:
                - "/var/lib/docker/containers/${data.container.id}/*-json.log"
              json.message_key: message
              json.overwrite_keys: true
            - decode_json_fields:
                fields: "message"
                target: ""
        - condition:
            contains:
              docker.container.image: ouroboros
          config:   
            - type: container
              paths:
                - "/var/lib/docker/containers/${data.container.id}/*-json.log"
              json.message_key: message
              json.keys_under_root: true
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false

I just have to find a way to parse non-JSON logs and I should be OK with all of them :slight_smile:

Can someone help me out with the dissect processor syntax? I can't seem to make it work :frowning:

Relevant Filebeat conf file:

- condition:
    contains:
      docker.container.image: bazarr
  config:
    - type: log
      paths:
        - "/logs/bazarr/config/log/bazarr.log*"
    - drop_fields:
        fields: ["log.level","msg"]
    - dissect:
        tokenizer: "%{} %{}|%{log.level->}|%{->}|%{msg}|"
        field: "message"
        target_prefix: "dissect"

message in question:

09/09/2019 13:26:07|INFO |root |BAZARR is started and waiting for request on http://0.***.***.***:6767/|

Any ideas? I'm trying to make everything work without Logstash. Defining a condition-and-configuration block for each container sounds OK to me, but I need some help making that possible.
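One thing worth checking, as an untested sketch: drop_fields and dissect are processors, not inputs, so they probably need to sit under a processors key inside the log input rather than as siblings of it:

```yaml
- condition:
    contains:
      docker.container.image: bazarr
  config:
    - type: log
      paths:
        - "/logs/bazarr/config/log/bazarr.log*"
      processors:
        # Same tokenizer as above, just moved under the input's processors key.
        - dissect:
            tokenizer: "%{} %{}|%{log.level->}|%{->}|%{msg}|"
            field: "message"
            target_prefix: "dissect"
```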

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.