Filebeat-shipped JSON file: how to get the needed fields?

I am running a program that generates output in JSON format. I want to push that output to Elasticsearch and analyse it in Kibana.
I want to build a Kibana graph of tests vs. time, plus total failures and successes.
My JSON file looks like this:

{
  "success": 70,
  "tests": 100,
  "test_names": {
    "testname1_testid1": {
      "name": "testname1",
      "status": "success",
      "time": "1.29552"
    },
    "testname2_testid2": {
      "name": "testname2",
      "status": "success",
      "time": "1.29552"
    }
  },
  "totaltime": "156.40820"
}

Currently I am shipping the complete JSON file to ELK using Filebeat.
Can you suggest how to grok and filter those fields and make them searchable in Kibana? I am new to this, so for now I am using the default settings of Filebeat and ELK.

For JSON logfiles you have two options: either have Filebeat 5 parse the JSON for you and ship the correct fields to Logstash (or directly to Elasticsearch) out of the box, or just ship the raw JSON strings and let Logstash do the parsing.
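
For the first option, a minimal sketch of a Filebeat 5 configuration might look like the following, assuming each JSON document sits on a single line; the file path and the Logstash host are placeholders:

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/results.json   # placeholder: your program's JSON output
    json:
      # Put the decoded keys at the top level of the event
      # instead of under a "json" sub-object.
      keys_under_root: true
      # Add an error key to the event when decoding fails,
      # so broken documents are easy to spot in Kibana.
      add_error_key: true

output.logstash:
  hosts: ["localhost:5044"]   # placeholder; or ship via output.elasticsearch instead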

Do the JSON objects in the file span multiple lines, i.e. exactly as in your example above? I don't believe that's supported by the JSON parsing feature in the upcoming Filebeat 5 (and prior versions of Filebeat have no JSON parsing support at all).

You might be able to use Filebeat's multiline feature to join the lines of a JSON object into a single event (see the sketch below), but if you can avoid having to do that, so much the better.
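
If you do have to join lines, a rough sketch of the multiline settings, assuming each JSON document starts with a line beginning with "{" (the path is again a placeholder), could be:

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/results.json   # placeholder path
    multiline:
      # Any line that does not open a new JSON document is appended
      # to the event started by the previous "{" line.
      pattern: '^\{'
      negate: true
      match: after

Note that the nested objects in your example open with "{" too, but those lines begin with a key name rather than the brace, so the pattern should still match only the start of each document. The joined event arrives in Logstash as a plain string in the message field.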

Once you're able to ship the complete JSON strings to Logstash, use its json codec (or the json filter) to deserialize the strings into fields.
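
A minimal Logstash pipeline sketch using the json filter (port, hosts, and index name are placeholders):

input {
  beats {
    port => 5044
  }
}

filter {
  # Deserialize the JSON string that Filebeat shipped in the message field.
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test-results-%{+YYYY.MM.dd}"   # hypothetical index name
  }
}

After that, fields such as success, tests, and the per-test time values should be searchable in Kibana. One caveat: time and totaltime are strings in your JSON, so to graph them you will probably want to convert them to numbers first, e.g. with Logstash's mutate filter and its convert option.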