Logstash output changing field location randomly

I am pulling logs in from Kafka and sending them out to Elasticsearch. I have been setting this up over the last few weeks and everything seems to be working as expected. Today I noticed that every time I start the service ( .../bin/logstash -f .../conf.d/kafka.conf ), Logstash emits the fields in a different order.

input {
  kafka {
    bootstrap_servers => ["kafka_server_ip:9092"]
    topics => ["topic1"]
    add_field => { "topic" => "topic1" }
    codec => json {
      charset => "ISO-8859-1"
    }
  }
}
output {
  # I have a few conf files; this places the right log into the right index
  if [topic] == "topic1" {
    elasticsearch {
      hosts => ["http://1.1.1.1:9200"]
      index => "index1"
    }
  }
  # for testing
  stdout {}
  # also sending a copy to Splunk
  tcp {
    host => "2.2.2.2"
    port => 5514
    codec => "json"
  }
}

Raw Log going in:
{"logDateTime":"06/12/2019 09:17:59:143","eventDateTime":"06/12/2019 09:17:06:247","sourceIp":"127.0.0.1","applicationIdentifier":"1234567","userIdentity":"Matt_Test","eventType":"eventType","eventSeverity":"6","action":"action","result":"SUCCESS","reason":"reason"}

Logstash stdout:
{
                   "result" => "SUCCESS",
                   "reason" => "reason",
            "eventDateTime" => "06/12/2019 09:17:06:247",
                "eventType" => "eventType",
    "applicationIdentifier" => "1234567",
                    "topic" => "topic1",
             "userIdentity" => "Matt_Test",
               "@timestamp" => 2019-06-12T14:18:06.448Z,
                 "sourceIp" => "127.0.0.1",
              "logDateTime" => "06/12/2019 09:17:59:143",
            "eventSeverity" => "6",
                 "@version" => "1",
                   "action" => "action"
}
After restarting the service I see:
{
                   "reason" => "reason",
               "@timestamp" => 2019-06-12T14:41:03.771Z,
            "eventSeverity" => "6",
              "logDateTime" => "06/12/2019 09:40:59:143",
            "eventDateTime" => "06/12/2019 09:40:06:247",
                 "@version" => "1",
                    "topic" => "topic1",
                   "action" => "action",
                   "result" => "SUCCESS",
                 "sourceIp" => "127.0.0.1",
                "eventType" => "eventType",
             "userIdentity" => "Matt_Test",
    "applicationIdentifier" => "1234567"
}


This is also true for a file input, but not a generator input. It also occurs if you replace the codec with a json filter. Not sure why it happens. It is not as simple as saying a hash is not ordered, because Ruby should iterate over a hash in the order of insertion, and I would expect a json parser to insert fields in the order of appearance.
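For reference, plain Ruby really does preserve insertion order, which is why the reordering is surprising. A minimal sketch (hypothetical keys, just to illustrate the point):

```ruby
require 'json'

# Ruby's JSON parser inserts keys in the order they appear in the input,
# and Ruby hashes iterate in insertion order.
parsed = JSON.parse('{"b":1,"a":2,"c":3}')
puts parsed.keys.inspect  # => ["b", "a", "c"]
```

So the shuffling presumably happens in how the Logstash event stores fields internally, not in the Ruby JSON parsing itself.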

So you would consider this normal behavior? Nothing I can do about it?

I am not aware of any way to prevent it.
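Field order in a JSON document carries no meaning to Elasticsearch or Splunk, so it usually does not need preventing. If a downstream consumer does need a stable order, one possible workaround is to sort the keys wherever the document is re-serialized. A minimal Ruby sketch (the field names here are made up for illustration):

```ruby
require 'json'

# Parse an event whose key order is unpredictable.
event = JSON.parse('{"result":"SUCCESS","topic":"topic1","action":"action"}')

# Sorting the key/value pairs before re-serializing yields a deterministic
# alphabetical order, regardless of how the fields arrived.
stable = JSON.generate(event.sort.to_h)
puts stable  # => {"action":"action","result":"SUCCESS","topic":"topic1"}
```

The same idea could be applied inside the pipeline with a `ruby` filter, though the event's internal field order would still be whatever Logstash chooses; only the serialized output becomes stable.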

Ok I'll have to work around it. Thanks for helping!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.