Hi,
I'm trying to set up autodiscovery in Kubernetes for Filebeat. A DaemonSet runs Filebeat on each node and ships logs to Elasticsearch.
Hints-based autodiscovery works well for HAProxy and nginx containers/pods, but I'm running into problems with JSON-encoded logs. Several of my containers run Spring Boot + Logback.
The problem is that I can't figure out how to configure the parsing.
This is the autodiscover part of my filebeat.yml:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      json.message_key: log
      templates:
        - config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
              json.keys_under_root: true
              json.add_error_key: true
              exclude_lines: ["^\\s+[\\-`('.|_]"] # drop asciiart lines
I can see the following in the message field in Kibana:
{
  "@timestamp": "2019-04-05T10:18:58.057+00:00",
  "@version": 1,
  "message": "Longer than usual backend-request detected. ",
  "logger_name": "com.example.package.MyClass",
  "thread_name": "http-nio-0.0.0.0-8080-exec-5",
  "level": "WARN",
  "level_value": 30000
}
But it arrives as a single string, and I would like to have it parsed so I can filter on the individual keys.
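For reference, I wondered whether a decode_json_fields processor might be the way to parse that string. This is only a sketch I put together from the processor documentation; I haven't confirmed it works in combination with autodiscover, and the placement at the top level of filebeat.yml is my assumption:

```yaml
# Sketch only (untested with autodiscover): decode the JSON string in
# "message" into fields on the event. target: "" is meant to place the
# decoded keys at the event root.
processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
```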
I tried to add annotations to the pods, but I wasn't sure what to put there.
When I tried the following, I got Grok error messages:
annotations:
  co.elastic.logs/module: logstash
  co.elastic.logs/format: json
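I've also seen that hints can carry processor settings via annotations. Would something like the following be the right direction? This is an untested guess on my part, based on the processors hint syntax:

```yaml
# Untested guess: pass decode_json_fields settings through hint annotations
# instead of a module hint.
annotations:
  co.elastic.logs/processors.decode_json_fields.fields: "message"
  co.elastic.logs/processors.decode_json_fields.target: ""
  co.elastic.logs/processors.decode_json_fields.overwrite_keys: "true"
```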