Newbie question: syslog > Fluentd > Kibana (via Elasticsearch)


Thanks for reading,

So I'm very new to this, a few work hours in.

I'm looking to be able to visualise my syslogs from a growing hardware security platform.

I have got the basics working...

Hardware > Fluentd > Elasticsearch > Kibana (great!)

But as part of this process I want to be able to do stats on the 'message' part of the syslog entry.

Here is the JSON which Fluentd sends to Elasticsearch (as shown in Kibana; the document view is cut off at the end):

{
  "_index": "logstash-2017.06.30",
  "_type": "fluentd",
  "_id": "AVz5DKN2eY2OqXmjH7Vv",
  "_version": 1,
  "_score": 1,
  "_source": {
    "host": "DEVICE",
    "ident": "kernel",
    "message": "[WAN_IN-default-D]IN=pppoe0 OUT=pppoe0 MAC= SRC= DST= LEN=44 TOS=0x00 PREC=0x00 TTL=243 ID=54321 PROTO=TCP SPT=57919 DPT=990 WINDOW=65535 RES=0x00 SYN URGP=0 MARK=0x65000000 ",
    "@timestamp": "2017-06-30T13:49:04+01:00"
  },
  "fields": {
    "@timestamp": [
I'm unsure whether I should be doing some config at the Fluentd level, or in Elasticsearch.

Or do I just ignore this and do the processing in Kibana?

Essentially I want to do stats on what is in the 'message' field.

Any help or pointers much appreciated.
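To illustrate what I mean, here is a rough Python sketch of the kind of fields I'd want pulled out of that message. The regexes and variable names are just my guess at an approach, not anything I have wired into the pipeline:

    import re

    # The 'message' value from the document above: a firewall rule tag in
    # square brackets, followed by space-separated KEY=value pairs.
    message = ("[WAN_IN-default-D]IN=pppoe0 OUT=pppoe0 MAC= SRC= DST= "
               "LEN=44 TOS=0x00 PREC=0x00 TTL=243 ID=54321 PROTO=TCP "
               "SPT=57919 DPT=990 WINDOW=65535 RES=0x00 SYN URGP=0 "
               "MARK=0x65000000 ")

    # Capture the leading "[WAN_IN-default-D]" rule tag.
    tag_match = re.match(r"\[([^\]]+)\]", message)
    rule = tag_match.group(1) if tag_match else None

    # Everything else is KEY=value; some values are empty (e.g. MAC=),
    # and bare flags like SYN carry no '=' so they are skipped here.
    fields = dict(re.findall(r"(\w+)=(\S*)", message))

    print(rule)              # WAN_IN-default-D
    print(fields["PROTO"])   # TCP
    print(fields["DPT"])     # 990

With fields like PROTO, SPT, and DPT broken out as their own keys, the stats I'm after would just be aggregations over those.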



You can use the metrics filter plug-in to do stats on the fields of the message.

If you want to aggregate and analyse the different fields available under message, you will need to parse them out before you index them into Elasticsearch. This is quite easy to do in Logstash, so I suspect you should be able to do it in Fluentd as well, although I must admit I have never used Fluentd. Another option might be to create an ingest pipeline in Elasticsearch (assuming Fluentd can be configured to send data to an ingest pipeline).
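As a sketch of the ingest pipeline route: since your message is mostly space-separated KEY=value pairs, the kv processor should handle it. Something like the following (the pipeline name "syslog-kv" and the exact settings here are untested assumptions on my part), sent via the Kibana Dev Tools console:

    PUT _ingest/pipeline/syslog-kv
    {
      "description": "Split firewall KEY=value pairs out of the message field (sketch)",
      "processors": [
        {
          "kv": {
            "field": "message",
            "field_split": " ",
            "value_split": "=",
            "ignore_failure": true
          }
        }
      ]
    }

You would then need your indexing requests to target that pipeline; I believe the Fluentd Elasticsearch output plugin has an option for selecting a pipeline, but check its documentation, as I have not used it myself.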
