How to turn fields in a text file into terms in Kibana

I have a text file formatted like JSON:

"changedFieldsList": [
{
  "fieldId": "ORDER_ID",
  "fieldType": "NUMBER",
  "fieldValue": "55841",
  "fieldChanged": "Y"
},
{
  "fieldId": "ORDER_DATE",
  "fieldType": "DATE",
  "fieldValue": "2017-11-13",
  "fieldChanged": "Y"
}...
]

Currently, Kibana (the file reaches it like so: folders -> filebeat -> logstash -> elasticsearch -> kibana) treats the entire text file as a single document, so I can't aggregate on its fields, e.g. sort by fieldId or fieldValue.
How can I make Logstash/Kibana process the data so that the keys in the JSON become terms Kibana can sort and aggregate by?
How can I make the Elastic Stack handle the JSON format better (and again, this is not a .json file, it's a text file)?
I tried the JSON filter in Logstash, but couldn't get it to work so far.
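In case it helps to show what I attempted: a minimal Logstash filter sketch along these lines, assuming the whole file arrives as a single event in the default `message` field and that the complete file is a valid JSON object (the snippet above is only an excerpt):

```
filter {
  # Parse the event body as JSON; parsed keys become top-level fields.
  json {
    source => "message"
  }
  # Optionally emit one event per entry of changedFieldsList,
  # so fieldId / fieldValue become directly aggregatable in Kibana.
  split {
    field => "changedFieldsList"
  }
}
```

If the file spans multiple lines, the lines would also need to be joined into one event first (e.g. via Filebeat's multiline settings) before this filter sees them.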
P.S.: There is another thing I'm trying to do: when I discover the data in Kibana, it shows the file path under "source". Can I extract a portion of that path and save it as a searchable term? For example:
/path/to/the/folder/FOLDERNAME/subdir/file
I want to save FOLDERNAME as a field I can aggregate the data by.
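Something like the following grok sketch is what I have in mind, where the literal prefix and the `folder_name` field are just placeholders matching the example path above:

```
filter {
  # Capture the directory segment after the placeholder prefix
  # into a new field named "folder_name" (hypothetical name).
  grok {
    match => { "source" => "/path/to/the/folder/%{WORD:folder_name}/" }
  }
}
```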
Can anyone help?
Yishai

anyone?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.