Hi,
please help. I have spent more than a week on this and still cannot get the parsing settings right.
I have a file from an AWS Athena query. It was originally CSV, but I converted it to plain multiline JSON. The structure:
[{
"useridentity":"{type=somevalue={attributes={mfaauthenticated=false, creationdate=2022-05-31T08:04:10Z}, sessionissuer={type=Role, principalid=value, accountid=value, username=value}}}",
"eventtime":"time",
"eventsource":"*.amazonaws.com",
"eventname":"somename",
"sourceipaddress":"ip",
"useragent":"agent",
"errorcode":"",
"errormessage":"",
"requestparameters":"{\"key\":\"value\",\"key\":{\"value\":[{\"value"}]}},{\"somekey\":\"value\",\"somekey\":{\"values\":\"\",\"type\":\"\",\"tTL\":300,\"\":[{\"value\":\"\"}]}}]}}"
},
{next JSON object with the same structure}
]
I just want to pass this JSON as-is to Elasticsearch, with the keys useridentity, eventtime, eventsource, eventname, etc.
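That is, each element of the array should become its own document, roughly like this (values abridged from the sample above):

{
  "useridentity": "{type=somevalue=...}",
  "eventtime": "time",
  "eventsource": "*.amazonaws.com",
  "eventname": "somename",
  "sourceipaddress": "ip",
  ...
}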
I have tried a multiline input:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - "path/*.json"
    json.keys_under_root: true
    json.message_key: eventname
    json.overwrite_keys: true
    json.add_error_key: true
    multiline.pattern: '^{'
    multiline.negate: true
    multiline.match: after
No luck: Filebeat just puts my JSON into the message field as-is.
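In the file output, each event comes out roughly like this (abridged, paraphrased from memory; only @timestamp and message, with the raw JSON text inside message):

{"@timestamp":"...","message":"{ \"useridentity\":\"{type=somevalue=...\", \"eventtime\":\"time\", ... }"}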
Adding a decode_json_fields processor just hangs Filebeat, and it stops processing files.
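The processor block I added was roughly this (from memory; target "" is meant to merge the decoded keys into the event root):

processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
      add_error_key: true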
Filebeat version: 7.10.
The output is configured to file for now, for testing.
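For completeness, the output section (the path is just a placeholder, not my real one):

output.file:
  path: "/tmp/filebeat"
  filename: filebeat-test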
Please help!