Hello!
I'm new to Logstash, and I'm trying to filter and mutate input logs coming from Filebeat.
First of all, I have read this article, so I know that field names cannot contain dots:
Field name cannot contain dots
My question is: how can I automatically convert the dotted field names in an input JSON log into nested objects?
The example input is the NFCT JSON output of this ulogd stack:
stack=ct1:NFCT,ip2bin1:IP2BIN,jsonnfwct:JSON
and this is what it produces:
{
  "timestamp": "2019-03-06T05:21:51",
  "dvc": "devicename",
  "orig.ip.protocol": 17,
  "orig.l4.sport": 60770,
  "orig.l4.dport": 1900,
  "orig.raw.pktlen": 0,
  "orig.raw.pktcount": 0,
  "reply.ip.protocol": 17,
  "reply.l4.sport": 1900,
  "reply.l4.dport": 60770,
  "reply.raw.pktlen": 0,
  "reply.raw.pktcount": 0,
  "ct.mark": 0,
  "ct.id": 3190284752,
  "ct.event": 1,
  "flow.start.sec": 1551849711,
  "flow.start.usec": 151034,
  "oob.family": 2,
  "oob.protocol": 0
}
As you can see, it emits dotted field names in its JSON output. For Kibana to recognize them, the fields have to be renamed.
Here is what I want:
{
  "timestamp": "2019-03-06T05:21:51",
  "dvc": "devicename",
  "orig": {
    "ip": {
      "protocol": "17"
    },
    "l4": {
      "sport": "60770",
      "dport": "1900"
    },
    "raw": {
      "pktlen": "0",
      "pktcount": "0"
    }
  },
  "reply": {
    "ip": {
      "protocol": "17"
    },
    "l4": {
      "sport": "1900",
      "dport": "60770"
    },
    "raw": {
      "pktlen": "0",
      "pktcount": "0"
    }
  },
  "ct": {
    "mark": "0",
    "id": "3190284752",
    "event": "1"
  },
  "flow": {
    "start": {
      "sec": "1551849711",
      "usec": "151034"
    }
  },
  "oob": {
    "family": "2",
    "protocol": "0"
  }
}
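To make the transformation concrete, here is a small Python sketch of the key expansion I'm after (this is just an illustration of the logic, not Logstash configuration; the function name `expand_dotted` is my own):

```python
def expand_dotted(flat):
    """Expand dotted keys like "orig.ip.protocol" into nested dicts."""
    nested = {}
    for key, value in flat.items():
        parts = key.split(".")
        node = nested
        # Walk/create intermediate objects for every segment but the last
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

print(expand_dotted({"orig.ip.protocol": 17, "orig.l4.sport": 60770}))
# {'orig': {'ip': {'protocol': 17}, 'l4': {'sport': 60770}}}
```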
Actually, before asking this, I tried the mutate filter's gsub option to replace '.' with '_'.
The pipeline configuration was:
filter {
  mutate {
    gsub => [ "message", "\.+", "_" ]
  }
}
But it just replaced every '.' in the message with '_', so I ended up with flat, renamed fields that have no nested relationship.
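In other words, what the gsub approach gives me is only a flat renaming, sketched here in Python for comparison (again an illustration, not Logstash config):

```python
flat = {"orig.ip.protocol": 17, "orig.l4.sport": 60770}

# Flat renaming: dots become underscores, but no nesting is created
renamed = {k.replace(".", "_"): v for k, v in flat.items()}
print(renamed)
# {'orig_ip_protocol': 17, 'orig_l4_sport': 60770}
```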
Is there a way to do this?
Thanks in advance.