I am trying to use Filebeat to send MongoDB 4 logs to Elastic, and it is failing to parse them.
I have log messages from Mongo that grok fails to match against the supplied pattern, e.g.
2018-09-25T05:16:13.012+0000 I STORAGE [WT RecordStoreThread: local.oplog.rs] WiredTiger record store oplog truncation finished in: 1ms
This results in:
Provided Grok expressions do not match field value: [2018-09-25T05:16:13.012+0000 I STORAGE [WT RecordStoreThread: local.oplog.rs] WiredTiger record store oplog truncation finished in: 1ms]
I am using the grok processor packaged with the Filebeat MongoDB module, from
/usr/share/filebeat/module/mongodb/log/ingest/pipeline.json
containing
"grok": { "field": "message", "patterns":[ "%{TIMESTAMP_ISO8601:mongodb.log.timestamp} %{WORD:mongodb.log.severity} %{WORD:mongodb.log.component} \\s*\\[%{WORD:mongodb.log.context}\\] %{GREEDYDATA:mongodb.log.message}" ], "ignore_missing": true }
Based on the documentation (https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-mongodb.html)
mongodb.log.context
type: keyword
example: initandlisten
Context of message
So mongodb.log.context is documented as a keyword, but the context in the messages I get from Mongo is clearly not a single word, e.g.
[WT RecordStoreThread: local.oplog.rs]
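The only workaround I have come up with myself (a guess on my part, not something I found in the Filebeat docs) is to copy pipeline.json and relax the context capture from %{WORD} to %{DATA} so it can contain spaces, colons and dots:

"grok": {
  "field": "message",
  "patterns": [
    "%{TIMESTAMP_ISO8601:mongodb.log.timestamp} %{WORD:mongodb.log.severity} %{WORD:mongodb.log.component} \\s*\\[%{DATA:mongodb.log.context}\\] %{GREEDYDATA:mongodb.log.message}"
  ],
  "ignore_missing": true
}

With %{DATA} in place the simulate call above appears to parse this line, but I don't know whether that change breaks anything else in the module.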
My question is: are there any known workarounds for this issue?