Hi all,
I'm running Filebeat as a DaemonSet in Kubernetes to collect logs from containers across the cluster, and Filebeat ships these logs to an Elastic Cloud deployment. We're primarily a Java shop, so I'm trying to use pattern matching in the filebeat.yml config to handle multiline messages (e.g. Java stack traces).
This seems like it should be pretty straightforward, since it's covered in the documentation here: https://www.elastic.co/guide/en/beats/filebeat/7.4/_examples_of_multiline_configuration.html. With the config below, logs are reaching the Elasticsearch index as expected, but multiline messages that match my pattern are still being indexed as separate documents, one per line, rather than being merged into a single message.
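For context, here's a made-up example of the kind of stack trace I want merged (the package and class names are just placeholders). Every continuation line either starts with whitespace followed by "at" or "...", or starts with "Caused by:", so each of those lines should match the pattern and be appended to the line before it:

java.lang.IllegalStateException: Failed to process order
    at com.example.orders.OrderService.process(OrderService.java:42)
    at com.example.orders.OrderController.submit(OrderController.java:27)
Caused by: java.lang.NullPointerException
    at com.example.orders.PaymentClient.charge(PaymentClient.java:13)
    ... 2 more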
Any ideas what may be wrong? I feel like I'm missing something simple, but can't figure out what it is.
Thank you!
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: monitoring
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
            multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
            multiline.negate: false
            multiline.match: after
    processors:
      - add_host_metadata:
      - add_kubernetes_metadata:
          in_cluster: true
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
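To make the symptom concrete: with a hypothetical trace like the one above, I'd expect a single document whose message field contains the whole trace, roughly

{ "message": "java.lang.IllegalStateException: Failed to process order\n    at com.example.orders.OrderService.process(OrderService.java:42)\n    at com.example.orders.OrderController.submit(OrderController.java:27)\nCaused by: java.lang.NullPointerException\n    ...", ... }

but what I actually get is one document per line:

{ "message": "java.lang.IllegalStateException: Failed to process order", ... }
{ "message": "    at com.example.orders.OrderService.process(OrderService.java:42)", ... }
{ "message": "    at com.example.orders.OrderController.submit(OrderController.java:27)", ... }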