Fluentd sending to Logstash leads to a lot of garbage data

Hi

We have Fluentd sending logs from client servers to our main ELK server.

In Kibana, when I read the logs, there is a lot of garbage data that comes along with the message.

On the Logstash log server, we are using the fluent plugin for input.

\x92\xACsys.messages\xDB\u0000\u0000\u0003\u001A\x92\xCEU\xC1\x9FY\x84\xA4host\xAFip\xA5ident\xA9freshclam\xA3pid\xA48721\xA7message\xDA\u00009ClamAV update process started at Wed Aug 5 11:00:01 2015\x92\xCEU\xC1\x9FY\x84\xA4host\xAFip-10-20-12-209\xA5ident\xA9freshclam\xA3pid\xA48721\xA7message\xDA\u0000Nmain.cvd is up to date (version: 55, sigs: 2424225, f-level: 60, builder: neo)\x92\xCEU\xC1\x9FZ\x84\xA4host\xAFip-

Any help will be appreciated. The message comes through, but it appears to be encoded and buried in a lot of garbage data.

Our Logstash configuration:
input {
  syslog {
    host => "0.0.0.0"
    port => 5141
  }
}

output {
  stdout { }
  elasticsearch {
  }
}

Which output plugin of Fluentd are you using, and why not just send directly to Elasticsearch?

-- Asaf.
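For reference, sending straight from Fluentd to Elasticsearch is usually done with the fluent-plugin-elasticsearch output. A rough sketch only; the host, match tag, and flush interval below are placeholders you would adapt to your setup:

<match **>
  type elasticsearch
  # placeholder, point this at the ELK server
  host your-elasticsearch-host
  port 9200
  # logstash_format writes logstash-YYYY.MM.DD indices so Kibana picks them up
  logstash_format true
  flush_interval 10s
</match>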

Actually, we were planning to do the parsing of the log data on Logstash (server side) instead of doing it on each individual client running Fluentd.

Could this garbage data be due to Logstash? We currently do not do any parsing in Logstash; logs just arrive at Logstash and it forwards them to Elasticsearch.
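If the parsing does end up on the Logstash side, it would live in a filter block. A rough sketch with a generic syslog-style grok pattern, purely as an illustration; the actual pattern has to match whatever format the logs arrive in:

filter {
  grok {
    # example pattern only; replace with one that matches your real message format
    match => { "message" => "%{SYSLOGTIMESTAMP:ts} %{SYSLOGHOST:hostname} %{DATA:program}: %{GREEDYDATA:msg}" }
  }
}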

Looks like an encoding issue. What is generating the logs?
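If it turns out Fluentd is sending with its forward output (which frames events in MessagePack), a plain syslog input will show exactly the kind of raw bytes in your sample. In that case, one option might be to receive it on Logstash with the fluent codec on a tcp input. A minimal sketch, with an arbitrarily chosen port and assuming the Fluentd side is pointed at it:

input {
  tcp {
    # the fluent codec decodes Fluentd's forward (MessagePack) protocol
    codec => fluent
    port => 4224
  }
}

The matching Fluentd side would then be something like the following, again only a sketch; the host is a placeholder for your Logstash server and the match tag should suit your setup:

<match **>
  type forward
  <server>
    # placeholder, use the Logstash server address
    host your-logstash-host
    port 4224
  </server>
</match>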