Interesting. By the looks of your output here, it appears that whatever is sending or ingesting the data is already creating the fields. This means you may not need to parse the JSON object separately in order to work with it.
If Logstash were going to have to parse the object itself, I would have expected to see the entire JSON object inside the "message" field of _source. Here is what I would have expected to see if your syslog server were sending a raw JSON object to Logstash:
{
  "_index": "testing-2016.12.19",
  "_type": "logs",
  "_id": "AVkVmmhWcV8jDiSX-JrU",
  "_score": null,
  "_source": {
    "message": "{ \"TAGS\": \".source.syslog_tcp\", \"SOURCEIP\": \"---\", \"PROGRAM\": \"369\", \"PRIORITY\": \"notice\", \"MESSAGE\": \"<14>1 2016-12-17T17:12:29+01:00 TirougaII WinFileService - - [synolog@6574 synotype=\\\"WinFileService\\\" ip=\\\"---\\\" luser=\\\"pc\\\" event=\\\"read\\\" isdir=\\\"File\\\" fsize=\\\"6.00 KB\\\" fname=\\\"/DATA/800 ProductionTST/Thumbs.db\\\"][meta sequenceId=\\\"62\\\"] Event: read, Path: /DATA01/SOC/800 ProductionTST/Thumbs.db, File/Folder: File, Size: 6.00 KB, User: dtu, IP: ---\", \"LEGACY_MSGHDR\": \"369 \", \"HOST_FROM\": \"---\", \"HOST\": \"---\", \"FACILITY\": \"user\", \"DATE\": \"Dec 17 17:12:29\", \"@version\": \"1\", \"@timestamp\": \"2016-12-17T16:12:29.507Z\", \"host\": \"127.0.0.1\", \"port\": 35186, \"type\": \"syslog-all\", \"tags\": [ \"Syslog-All\" ] }",
    "@version": "1",
    "@timestamp": "2016-12-19T05:42:37.237Z",
    "host": "---",
    "Source_IP": "SOURCEIP"
  },
  "fields": {
    "@timestamp": [
      1482126157237
    ]
  },
  "sort": [
    1482126157237
  ]
}
When you send this to Elasticsearch without the filter settings, and without the json codec in the input section, is there a "SOURCEIP" field in Kibana?
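For reference, these are the two places JSON decoding can happen in a Logstash pipeline. This is only a minimal sketch; the port number and input type are placeholders you would adjust to match your actual config:

```
# Option 1: decode the JSON as it arrives, via a codec on the input
input {
  tcp {
    port  => 5140        # example port, use whatever your syslog server sends to
    codec => "json"
  }
}

# Option 2: decode a JSON string already stored in the "message" field,
# via the json filter
filter {
  json {
    source => "message"
  }
}
```

If the fields already exist in Kibana without either of these, then something upstream is doing the parsing for you and neither is needed.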
What you provided is what I asked for, and it contained some valuable information, but we still do not have the "raw" log format that is being sent to Logstash. If you are able, we need to see the actual raw message that syslog sends to Logstash.