Parse JSON with Filebeat

Please assist with this config using Filebeat -> Elasticsearch. My JSON is not being decomposed.

Other fields are populating correctly; I want the message to be decomposed as well.

filebeat.prospectors:
  - input_type: log
    json.add_error_key: true
    json.keys_under_root: true
    paths:
      - message.log

output.elasticsearch:
  hosts:
    - "http://localhost:9200"


Do you have some sample input and sample output (expected + actual)?

Filebeat currently expects one JSON document per line. Multiline JSON is not really supported by the reader.
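For example, the json.* settings in your config would decode a line like this (everything on a single line) into top-level fields; the field names here are only an illustration:

{"@timestamp":"2017-10-26T10:21:26.748Z","level":"DEBUG","logger_name":"ACTIVITY","message":"IN:... Data: {...}"}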

It is populating some fields correctly, but not all.
I want the message part to be broken into individual fields, e.g.
OUT: xxxxx
count:yyyyy

This is my output. I want message to be decomposed into smaller parts:

{
  "_index": "filebeat-2017.10.26",
  "_type": "doc",
  "_id": "ABCD",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2017-10-26T10:21:26.748Z",
    "@version": 1,
    "HOSTNAME": ".com",
    "IP": "xxx.xxx.xxx.xxx",
    "beat": {
      "hostname": "xxxx",
      "name": "xxxx",
      "version": "5.6.3"
    },
    "input_type": "log",
    "level": "DEBUG",
    "level_value": 10000,
    "logger_name": "ACTIVITY",
    "message": "IN:xx.xxx.xxx.xxxx\t\t ABCD-******123\t OED-OP1000\t DEX-Gax\t DM-An\t DE-\t DN-AD\t DV-null\t AV-BLE\t Cdfs-S Data: {mod=Gax\tIP=xxx.xxxx.xxxx..xx\dfofofo=080807\tsLdse=Y\tDI=Y\DN=1\tDD=123\rdoe=000\tMDD=opSS\tdop=132231\tSK_ID=231\tFGFG=BP\tRDFF=ADSD\tserV=8.0\tTS=poi\tDFOP=dfp\tData=S}",
    "offset": 564623,
    "sessionId": "xxxxxx",
    "source": "my.log",
    "thread_name": "field : 0",
    "type": "log"
  },
  "fields": {
    "@timestamp": [
      1509013286748
    ]
  },
  "sort": [
    1509013286748
  ]
}

Any ideas what I am not doing right?

The log is obviously already parsed into a JSON document. The message field comes from the original document, and its contents are not valid JSON. You will need Logstash or an Elasticsearch ingest node for additional processing of the message field.
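As a rough sketch of the ingest node route (the pipeline name, target_field, field_split and value_split are guesses based on the tab-separated key=value look of your message, so adjust them to the real format):

PUT _ingest/pipeline/split-activity-message
{
  "description": "Sketch: pull key=value pairs out of the message field",
  "processors": [
    {
      "kv": {
        "field": "message",
        "field_split": "\\t+",
        "value_split": "=",
        "target_field": "activity",
        "ignore_failure": true
      }
    }
  ]
}

Filebeat 5.x can then send events through that pipeline via the pipeline option of the Elasticsearch output (output.elasticsearch.pipeline: "split-activity-message").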

Thank you

Do you have an example of this kind of pattern?
I tried using https://grokdebug.herokuapp.com to build a pattern; please advise.

The format is quite odd. From a single message I can't tell whether the order of fields is always the same, or whether Data is always at the end, but it looks like grok alone is not enough here. If Data is always at the end, you can try to split the document using grok into the part before Data and the contents of Data. Everything before Data looks like it could be parsed as key-value pairs (look at the csv or kv filter in Logstash). I can't really tell what the format inside Data is, because of special characters like \d and \r; you may want to replace those with \t as well before applying the csv or kv filter in Logstash. Good luck.
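If it helps, here is a minimal Logstash sketch along those lines. It assumes Data: {...} always sits at the end of message and that the pairs inside it are tab-separated key=value; the names head, data and data_kv are made up:

filter {
  # split message into the part before Data and the contents of Data: {...}
  grok {
    match => { "message" => "^%{GREEDYDATA:head}Data: \{%{GREEDYDATA:data}\}$" }
  }
  # parse the tab-separated key=value pairs from the Data part into their own object;
  # depending on how your Logstash handles escapes you may need a literal tab instead of \t
  kv {
    source => "data"
    field_split => "\t"
    value_split => "="
    target => "data_kv"
  }
}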
