Logstash file output plugin injecting extra characters into events

hi there

We are using Logstash to log syslog events to file and forward them to Elasticsearch.

Our config is as follows:

input {
  syslog {
    id => "Syslog_Data"
    port => 5514
    type => syslog
    codec => plain { charset => "ISO-8859-1" }
    tags => ["noelastic"]
    add_field => { "client-service" => "syslog" }
  }
}



output {

  if ([type] =~ /^syslog$/) and ("noelastic" not in [tags]) {
    elasticsearch {
      id => "currently_not_used"
      hosts => ["els01:9200", "els03:9200", "els04:9200", "els05.:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }

  if ([type] =~ /^syslog$/) and ("nosavelogs" not in [tags]) {
    if ("local4" in [facility_label]) {
      file {
        path => "/logstash-data/nfs-service/%{client-service}/%{+YYYY}/%{+MM-YYYY}/%{+dd-MM-YYYY}/%{host}/syslog_log-%{+YYYYMMdd}"
        codec => line { format => "%{timestamp} %{host} %{logsource} %{program} %{pid} : %{message}" }
        id => "syslog_logs"
      }
    }
  }
}

But we sometimes get extra characters injected at the start of events when they are written to file.

We don't see those extra characters when we look at the same data in Kibana, which points us at the file output plugin of Logstash.

For example, here is sample data from the file with extra characters at the start of the event:
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@Dec 19 10:01:04 a.b.c.d Oracle Audit 8009 : LENGTH: "231" SESSIONID:[7] "7425390" ENTRYID:[1] "1" USERID:[6] "xyz" ACTION:[3] "101" RETURNCODE:[1] "0" LOGOFF$PREAD:[3] "253" LOGOFF$LREAD:[4] "1884" LOGOFF$LWRITE:[3] "848" LOGOFF$DEAD:[1] "0" DBID:[10] "155789743982" SESSIONCPU:[2] "10"
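For reference, each `^@` shown by less/vi is a NUL byte (0x00). One way to confirm and count the padding with standard tools is sketched below; the sample file created here is made up for illustration, since the real log path isn't shown:

```shell
# Build a sample file that mimics the symptom: NUL padding followed by
# a syslog-style line (file name and contents are illustrative only).
{ head -c 32 /dev/zero; printf 'Dec 19 10:01:04 host Oracle Audit 8009 : ...\n'; } > sample_log

# tr -cd '\0' deletes everything except NUL bytes; wc -c then counts
# what is left, i.e. the number of NUL bytes in the file.
tr -cd '\0' < sample_log | wc -c   # prints 32 here
```

Running the same pipeline against the real log file would tell you exactly how many NUL bytes were injected, which is useful evidence when chasing the write path.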

while the same event in Kibana has no extra characters at the start.

Can somebody please advise on this?

Thanks

In the pipeline detail you provided, the path contains "nfs-service". Are you writing to an NFS share, and how many Logstash hosts are writing to it?

Hi Bloke

You are right. We have about 8 hosts writing to a shared NFS location.

Writing to NFS from 2 or more nodes is going to bring out the bugs. I have seen this before with RHEL and NFS. I would drop back to a single NFS writer node. This is not a Logstash bug, but rather a problem elsewhere (NFS).

Change your design from multiple

output {
  file { path => "nfs" }
}

to multiple

output {
  http { }
}

with a single NFS writer node

you can have multiple (pipelines)

input {
  http { }
}

and multiple (pipelines)

output {
  file { path => "nfs" }
}
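Putting those pieces together, a minimal sketch of that single-writer layout might look like the following. The host name `logstash-writer` and port 8080 are illustrative assumptions, not values from this thread:

```
# On each of the sender nodes: forward events over HTTP instead of
# writing to NFS directly.
output {
  http {
    url         => "http://logstash-writer:8080"
    http_method => "post"
    format      => "json"
  }
}

# On the single writer node: receive the events and perform the only
# write to the NFS path.
input {
  http {
    port => 8080
  }
}
output {
  file {
    path  => "/logstash-data/nfs-service/%{client-service}/.../syslog_log-%{+YYYYMMdd}"
    codec => line { format => "%{timestamp} %{host} %{logsource} %{program} %{pid} : %{message}" }
  }
}
```

Because only one node ever holds the file open, the concurrent-writer behaviour of NFS is taken out of the picture entirely.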

It's what I ended up doing. I think I even did that linkage with the lumberjack plugin at one stage, but found it to be a little slower.

Good luck with it.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.