I am able to parse a simple JSON file that has one document per line, e.g.:
{"name":"name1","address1":"address1"}
{"name2":"name2", "address2": "address2"}
but if the JSON is formatted like the example below, how can I parse it?
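For reference, a minimal file input configuration for the working one-document-per-line case might look like this sketch (the file path is hypothetical):

```
input {
  file {
    path => "/var/log/sample.json"    # hypothetical path
    start_position => "beginning"
    codec => json                     # each line is parsed as one JSON document
  }
}
```

With the `json` codec, each line must be a complete, valid JSON document on its own.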
Thanks @warkolm
But the problem is that parsing simple JSON with one doc per line works fine; when the file has two or more docs per line, like:
[{"name":"name1","address1":"address1"}][{"name":"name1","address1":"address1"}]
it reads the first JSON document and then moves straight on to the next line without reading the second one.
Yes, we understand the problem. Our suggestion is that you use a grok filter to split the line into two fields that can each be parsed as valid JSON.
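A sketch of that suggestion, assuming each line looks like `[{...}][{...}]` as in the example above (the field names `doc1` and `doc2` are made up for illustration):

```
filter {
  grok {
    # Split "[{...}][{...}]" into two bracket-free captures; DATA is non-greedy,
    # so doc1 stops at the first "][" boundary
    match => { "message" => "^\[%{DATA:doc1}\]\[%{DATA:doc2}\]$" }
  }
  # Parse each captured string as JSON into its own sub-field
  json { source => "doc1" target => "doc1_parsed" }
  json { source => "doc2" target => "doc2_parsed" }
}
```

Note this only works if the embedded JSON objects themselves contain no `]` characters; otherwise a more careful pattern is needed.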
One question: I have a JSON file with around 1000 records that does not end with a newline character, so Logstash does not read anything until I explicitly add one by pressing Enter at the end of the file.
How can I overcome this problem?
I don't think there's a way around that, at least not if you want the stateful file reading (i.e. it can be interrupted and will continue where it left off) that the file input gives you. If you can use the exec input, you can just cat the file and append an extra newline.
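A sketch of that exec-input workaround, assuming a hypothetical file path and polling interval; the trailing `echo` supplies the missing final newline:

```
input {
  exec {
    # cat the file, then echo an empty line so the last record ends with "\n"
    command => "cat /path/to/data.json; echo"
    interval => 300                   # hypothetical: re-run every 5 minutes
  }
}
```

As noted above, this trades away the file input's stateful tracking: each run re-reads the whole file.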
Is there any way that Logstash itself can append the end-of-line character while reading the file?
Adding the end-of-line character to the file manually would be very difficult in a production environment.