Syslog input filling disk

Hi All,

I have possibly a dumb question... :-). We have 4 Logstash servers running that are configured to receive syslog messages. But the catch is that apparently Logstash doesn't send the logs directly (parsed) to Elasticsearch, but instead keeps them on disk, which of course causes the disk to fill up:
/srv/log/messages: 1.4G
/srv/log/user.log: 1.4G

This: https://www.balabit.com/sites/default/files/documents/syslog-ng-ose-latest-guides/en/syslog-ng-ose-guide-admin/html/configuring-destinations-elasticsearch.html is not an option, because of the different types of servers (Solaris, Linux, ...) that are sending their logs to Logstash.

So is there a way for the logs to reach Logstash, get parsed, and be sent directly (like all the other logs) to Elasticsearch?

Normally Logstash hardly uses any disk at all. Its own log should be mostly silent and only report problems. Who or what is producing /srv/log/messages and /srv/log/user.log? Those are non-standard files, so I can only assume that you yourself have configured Logstash to create them.

Hi, thanks for your reply. This is my logstash config for syslog:
input {
  tcp {
    port => 514
    type => syslog
  }
  udp {
    type => syslog
    port => 514
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
      add_tag => [ "syslog" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  if "_grokparsefailure" in [tags] {
    null {}
  }
  elasticsearch {
    hosts => ["elasticsearch1:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

All other log input goes directly to Elasticsearch... it's just the servers that aren't using Filebeat and instead use syslog to forward their logs to Logstash that are stranded.

Again, who or what is creating the two files in /srv/log? If Logstash is indeed running with the configuration above, it's not Logstash creating the files. What's in them?

If the point of your null output is to drop events with the _grokparsefailure tag, it's not working. Either use a drop filter or something like this in the output section:

if "_grokparsefailure" not in [tags] {
  elasticsearch  {
    ...
  }
}
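If you'd rather take the drop filter route instead, a minimal sketch would be something like this in the filter section (after your grok filter), so failed events never reach the outputs at all:

  filter {
    if "_grokparsefailure" in [tags] {
      # discard events that grok could not parse
      drop { }
    }
  }

With that in place the elasticsearch output wouldn't need any conditional around it.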

Ok, found it... It was a misconfigured rsyslog file... Thanks!
And a big thanks for the extra _grokparsefailure tip!!