Hi! We're moving our infrastructure to containers, and we would like to use Filebeat to ship logs directly to Elasticsearch instead of going through Logstash.
In the process we lose Logstash's filtering and parsing capabilities (grok is not supported in Filebeat).
We decided to go with JSON parsing in Filebeat, and that worked great. Now I've tried adding multiline handling for Java stack traces. It worked fine with Logstash, but here I've tried everything and the result won't change: I get a new document for every line of the stack trace.
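For reference, here is an illustrative example of what the Docker `json-file` log driver writes for a stack trace (the timestamps and class names are made up): each line of the trace arrives as its own JSON object, with the actual message in the `log` key.

```json
{"log":"java.lang.NullPointerException: boom\n","stream":"stdout","time":"2018-01-01T00:00:00.000000000Z"}
{"log":"\tat com.example.Foo.bar(Foo.java:42)\n","stream":"stdout","time":"2018-01-01T00:00:00.000000001Z"}
{"log":"\tat com.example.Main.main(Main.java:7)\n","stream":"stdout","time":"2018-01-01T00:00:00.000000002Z"}
```

I would expect all three lines above to end up in one Elasticsearch document, but instead each becomes its own document.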
My Filebeat config:

```yaml
filebeat.config:
  prospectors:
    # Mounted `filebeat-prospectors` configmap:
    path: ${path.config}/prospectors.d/*.yml
    # Reload prospector configs as they change:
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    # Reload module configs as they change:
    reload.enabled: false
```

And the prospector settings (from the mounted configmap):

```yaml
json.add_error_key: true
json.message_key: log
json.keys_under_root: true
multiline:
  pattern: (^[a-zA-Z.]+(?:Error|Exception).+)|(^\s+at .+)|(^\s+... \d+ more)|(^\t+)|(^\s*Caused by:.+)
  negate: false
  match: after
```
Do you have any idea what is happening? I made sure that `\t` is also in the pattern. Should I tell multiline to look at the `log` field, like I told the JSON parser? Is there a way to do that?
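In case it helps, this is the kind of prospector config I imagined would work, assuming multiline is applied to the field named by `json.message_key` after JSON decoding (the container log path is illustrative for the Docker `json-file` driver):

```yaml
- type: log
  paths:
    - /var/lib/docker/containers/*/*.log
  # Decode each line as JSON first, then group on the `log` key:
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: log
  # Append continuation lines (at ..., Caused by:, ... N more) to the previous event:
  multiline.pattern: '(^[a-zA-Z.]+(?:Error|Exception).+)|(^\s+at .+)|(^\s+... \d+ more)|(^\t+)|(^\s*Caused by:.+)'
  multiline.negate: false
  multiline.match: after
```

Is this the intended way to combine the `json.*` and `multiline` settings, or does multiline only ever see the raw line?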