Filebeat reading JSON file

Hi guys, I am a total noob to the Elastic Stack and Filebeat.

I have a custom log file in JSON format; the app we are using outputs one entry per file, as follows:


I am trying to ingest this into Elasticsearch. I believe this is possible, as I am using ES 5.x.

I have configured my Filebeat prospector and have attempted to pull out at least one field from the file for now, namely the cuid:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - C:\Files\output*-Account-*
  json.keys_under_root: true
  tags: ["json"]
```


The Logstash output settings:

```yaml
output.logstash:
  hosts: [""]
  index: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: true
```


```yaml
processors:
- decode_json_fields:
    fields: ["cuid"]
```

When I start Filebeat, it seems to harvest the files, as I get entries in the Filebeat registry file:
2017-03-20T13:21:08Z INFO Harvester started for file: C:\Files\output\001-Account-20032017105923.json
2017-03-20T13:21:27Z INFO Non-zero metrics in the last 30s: filebeat.harvester.closed=160 filebeat.harvester.started=160 registrar.states.update=320 registrar.writes=2

However, I can't seem to find the data in Kibana, and I am not entirely sure where to look.

I have ensured the Filebeat templates are loaded in Kibana.

I have tried to read the documentation, and I think I understand it correctly, but I am still very hazy, as I am totally new to the stack.

The log entries indeed suggest that Filebeat has indexed some documents into ES. Is any filebeat-<date> index created? You can check by opening the Console in Kibana and running GET _aliases to list the existing indices.
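For example, in the Kibana Console (Dev Tools) you could run the following; `filebeat-*` is the default index name pattern, so adjust it if you changed the index setting:

```
GET _aliases
GET _cat/indices/filebeat-*?v
```

If no `filebeat-*` index shows up, the documents never reached Elasticsearch and the problem is in the Filebeat or Logstash pipeline rather than in Kibana.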

Thanks for the response.

Unfortunately, I blew the whole environment away, so I am not able to check now.

What I ended up doing was rewriting the demo application to create rolling log files, as opposed to a single log file for each event. Part of the reason was that I was unable to access the content of the file with Logstash, and was therefore unable to carry out the transformations I needed on the data prior to sending it to Elasticsearch.
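For reference, parsing the JSON payload on the Logstash side is usually done with the `json` filter. A minimal sketch, assuming the events arrive via the Beats input and Filebeat ships the raw line (i.e. without `json.keys_under_root`), so the JSON text lands in the `message` field:

```
input {
  beats {
    port => 5044
  }
}

filter {
  # Parse the JSON text in "message" into top-level event fields
  json {
    source => "message"
  }
}
```

The port number and field name here are just the common defaults; they depend on your actual Filebeat output configuration.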

I have to admit this may be due to my understanding of Logstash processing. However, I was better able to transform and access the data once they were rolling log files; Beats had no issues forwarding the files to Logstash.

This issue was most probably just down to me not fully understanding the pipeline processing at the time. I think I have a better understanding of how to do it now, after learning 1001 ways of how not to do it :slight_smile:

Hi Guys,
I am experiencing the same issue as Gary. I am using Filebeat 5.1.1 for Windows. My log files look more or less the same as Gary's. Adding a '\n' at the end of the line (in case there is none) solves the problem in my case.
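If the files are produced by an app you control, the trailing newline can be added programmatically before Filebeat picks the file up. A minimal sketch in Python (the helper name is hypothetical, not part of any Filebeat tooling):

```python
import os

def ensure_trailing_newline(path):
    """Append a '\\n' to the file if it does not already end with one,
    so Filebeat's line-based harvester emits the final line."""
    with open(path, "rb+") as f:
        f.seek(0, os.SEEK_END)
        if f.tell() == 0:
            return  # empty file, nothing to do
        f.seek(-1, os.SEEK_END)
        if f.read(1) != b"\n":
            f.write(b"\n")  # file position is at EOF after the read
```

The function is idempotent, so it is safe to run on files that already end with a newline.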

I guess that looking for a line break in log files makes sense when the agent monitors the file for new input, as a missing line break may indicate that the line isn't completely written yet. However, I believe an option to disable this behaviour would be nice, especially when using the close_eof option.

Could you open an enhancement request in the beats repo for different line endings, or for closing a file when EOF is reached? This one is tricky, as I would guess it could lead to quite a few incomplete lines.

Thank you for your feedback. I just opened an enhancement request.

Thanks guys! @ms42Q I will actually give that a try and see if that works.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.