Lumberjack - timestamp gets added to JSON message

Dear Experts:

I have Filebeat sending logs (JSON messages) to the first Logstash server. From the first Logstash server I forward the logs to the second Logstash server using the lumberjack output.

The connectivity is working fine, in that the logs are sent from the first to the second Logstash server successfully. However, in Kibana the logs (the JSON "message") cannot be parsed: it looks like the timestamp and the Beat hostname are prepended to the message. Example:

2018-07-03T15:56:08.489Z BeatHostName-01S {"Message":"System Started.","CreateDateUtc":"2018-07-03T09:40:22.8136608-06:00","TrackingId":"12345-56X9-491B-8D0B-9148FB8A0123","AppId":"09basdf-56bx-431b-8d0basdfasdf20151"}

Where 2018-07-03T15:56:08.489Z is the timestamp and BeatHostName-01S is the beat hostname.

My question is: how do I remove the timestamp and the Beat hostname, or prevent them from being added to the original message?

Here are my Logstash config files on each Logstash server:

On the first Logstash server:

input {
  beats {
    port => 5044
  }
}
output {
  lumberjack {
    hosts => ["secondLSserver"]
    port => 1234
    ssl_certificate => "c:/logstash.pub"
  }
  stdout { codec => rubydebug }
}

On the second Logstash server:

input {
  lumberjack {
    type => "MessageType"
    port => 1234
    ssl_certificate => "c:/logstash.pub"
    ssl_key => "c:/logstash.key"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["http://esserver:9200"]
    manage_template => false
    index => "MyIndex-%{+YYYY-MM}"
  }
}

Start by setting codec => json for both lumberjack plugins.
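As far as I can tell, that prefix is what the plain codec (the lumberjack output's default) produces when it serializes a whole event, roughly "timestamp host message", which is why the codec matters here. A minimal sketch of the output side, reusing the host, port and certificate path from your config:

output {
  lumberjack {
    hosts => ["secondLSserver"]
    port => 1234
    ssl_certificate => "c:/logstash.pub"
    # encode the whole event as JSON; without this the default plain
    # codec renders the event as "timestamp host message"
    codec => json
  }
}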

Thanks for responding, Magnus.
I put codec => json in the lumberjack output on the first Logstash server.
I put codec => json in the lumberjack input on the second Logstash server.
However, on the second Logstash server I get the "_jsonparsefailure" tag, and the "message" field still contains the timestamp and the Beat hostname in front of the JSON message, as follows:

Output from the first Logstash:

{
    "@timestamp" => 2018-07-03T20:48:52.120Z,
    "offset" => 837,
    "appname" => "myAppName",
    "@version" => "1",
    "beat" => {
        "hostname" => "BeatHostName-01S",
        "name" => "BeatHostName-01S",
        "version" => "x.x.x"
    },
    "input_type" => "log",
    "host" => "BeatHostName-01S",
    "source" => "sourcefile.log",
    "message" => "{\"Message\":\"System Started.\",\"CreateDateUtc\":\"2018-06-25T23:50:32.3093567-06:00\",\"TrackingId\":\"asdf9asdf9a9dfas9df9da88a7777\"}",
    "type" => "MyType",
    "tags" => [
        [0] "beats_input_codec_plain_applied"
    ]
}

Output from the second Logstash:

{
    "@timestamp" => 2018-07-03T20:49:16.262Z,
    "@version" => "1",
    "message" => "2018-07-03T20:48:52.120Z BeatHostName-01S {\"Message\":\"System Started.\",\"CreateDateUtc\":\"2018-06-25T23:50:32.3093567-06:00\",\"TrackingId\":\"asdf9asdf9a9dfas9df9da88a7777\"}",
    "type" => "MyType",
    "tags" => [
        [0] "_jsonparsefailure"
    ]
}


Here are the new config files:

On the first Logstash server:

input {
  beats {
    port => 1234
  }
}
output {
  lumberjack {
    hosts => ["secondLSserver"]
    port => 1234
    ssl_certificate => "c:/logstash.pub"
    codec => json
  }
  stdout { codec => rubydebug }
}

On the second Logstash server:

input {
lumberjack {
port => 1234
ssl_certificate => "c:/logstash.pub"
ssl_key => "c:/logstash.key"
codec => json
}
}
filter {
json {
source => "message"
}
}
output {
elasticsearch {
hosts => ["http://esserver:9200"]
manage_template => false
index => "myindex-%{+YYYY-MM}"
}
}


Another thing I can see: in the output from the first Logstash server, the "message" field is still the original message from the source log, with no timestamp or Beat hostname in front of it. But they are there on the second Logstash server. So it looks like the first Logstash server attaches those two fields in front of the original message when it forwards to the second one, which I don't want. Or, failing that, how do I parse those two fields together with the original message?
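If the prefix can't be prevented, one possible workaround (just a sketch, not tested against this setup) is to split it off on the second Logstash server before the json filter runs; the parse failure happens because "timestamp host {...}" as a whole is not valid JSON. The field names fwd_timestamp, fwd_host and json_payload below are made-up placeholders:

filter {
  # split "timestamp host {json}" on the two spaces; dissect's last
  # field captures the remainder of the line, i.e. the JSON document
  dissect {
    mapping => {
      "message" => "%{fwd_timestamp} %{fwd_host} %{json_payload}"
    }
  }
  # parse the JSON part instead of the whole prefixed message
  json {
    source => "json_payload"
  }
}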

Thanks.

I'm not sure what's up here. Your configuration looks correct and the only time I've seen similar problems is when the codec has been wrong. I'd try to debug the situation systematically by simplifying the configuration as much as possible to try to narrow things down.
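For example, a stripped-down pair of configs along these lines (a sketch reusing the host, port and certificate paths from above, with stdin/stdout in place of Beats and Elasticsearch) would show whether the prefix appears even with nothing else in the pipeline:

First Logstash server:

input { stdin { } }
output {
  lumberjack {
    hosts => ["secondLSserver"]
    port => 1234
    ssl_certificate => "c:/logstash.pub"
    codec => json
  }
}

Second Logstash server:

input {
  lumberjack {
    port => 1234
    ssl_certificate => "c:/logstash.pub"
    ssl_key => "c:/logstash.key"
    codec => json
  }
}
output { stdout { codec => rubydebug } }

If a line typed into stdin still comes out prefixed on the receiving side, the lumberjack output's codec is the prime suspect; if it comes out clean, the problem is somewhere else in the original pipeline.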
