Parsing syslog from Linux rsyslog

Hi,

I set up the ELK stack on my CentOS machine, and I'm receiving syslog messages from rsyslog on another CentOS host. I can see them with "tcpdump", but I want to see them in Kibana. I think my problem is the "logstash.conf" file:
I couldn't configure it correctly. How should I configure my logstash.conf file? Are there any examples? I couldn't find one. Please help.

Thanks a lot for any interest.

Best regards.

It'd help if you could post what you have set in your config already.

I simply configured rsyslog to forward data to Logstash using the RSYSLOG_ForwardFormat template:

action(type="omfwd" target="logstash-ip" port="51400" protocol="udp" template="RSYSLOG_ForwardFormat")

and my logstash.conf input is configured like this:

input {
  syslog {
    type => "syslog"
    port => 51400
  }
}
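To confirm that datagrams actually reach that port, one option (not from the thread, just a sketch) is to send a test message from the rsyslog host with util-linux's logger:

```shell
# Send one test UDP message to the Logstash syslog input.
# 127.0.0.1 is a stand-in; use the Logstash host's address instead.
logger --server 127.0.0.1 --port 51400 --udp "test message for logstash"
echo "sent"
```

If the message shows up in Logstash's output, the network path and input are fine and the problem is further down the pipeline.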

Do I also need some filter configuration?

Here's an example from the Logstash documentation that should be very close to what you need: https://www.elastic.co/guide/en/logstash/current/config-examples.html#_processing_syslog_messages

I already tried that one. Unfortunately, it doesn't work.

If you can be a bit more specific than "it doesn't work" maybe someone will help you.

  • What, exactly, have you tried?
  • What result do you get?
  • What result did you expect?

Actually, I tried this config:

input {
  tcp {
    port => 514
    type => syslog
  }
  udp {
    port => 514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

What's wrong with my config?

I also checked with "tcpdump" and the syslog messages are arriving, so I know that much. But they're not being parsed, and I can't see them in Kibana.

But that's not quite the configuration from the example; you're using port 514 instead of port 5000. Unless you're running Logstash as root (or use a workaround), that won't work, and Logstash should be complaining about this in its log.
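If running as root is not desirable, a sketch of the same input on an unprivileged port (5514 here is just an example; the rsyslog omfwd target port would need to match):

```conf
input {
  udp {
    # Ports above 1024 don't require root privileges.
    port => 5514
    type => "syslog"
  }
}
```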

I already changed LS_USER in /etc/sysconfig/logstash: I set LS_USER=root (it was "LS_USER=logstash"). When I tried before this change, the Logstash service exited, but now
the service is running.

Do you mean that it's still a problem?

Again, you need to read Logstash's logs. You may have to crank up the loglevel by adding --verbose or --debug to the Logstash command which also can be done via /etc/sysconfig/logstash.
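Assuming the default package paths from /etc/sysconfig/logstash, something like this shows the recent log output (the path is an assumption; adjust if LS_LOG_FILE is set differently on your install):

```shell
# Default log path for the CentOS package install of Logstash.
tail -n 50 /var/log/logstash/logstash.log 2>/dev/null || echo "log file not found"
```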


Actually, I didn't understand how to do that. Could you give me more details about what exactly I should do? I hope I can solve my problem with your advice.

Change the

#LS_OPTS=""

line to e.g.

LS_OPTS="--verbose"

I tried that one too, but I still can't see anything in Kibana; the syslog messages aren't reaching it. I've copied my "/etc/sysconfig/logstash" file here.
Please check it for me; I don't know whether I need to change something else, and I'm confused about that.
Thanks for your interest and for helping me.

###############################
# Default settings for logstash
###############################

# Override Java location
#JAVACMD=/usr/bin/java

# Set a home directory
#LS_HOME=/var/lib/logstash

# Arguments to pass to logstash agent
LS_OPTS="--verbose"

# Arguments to pass to java
#LS_HEAP_SIZE="500m"
#LS_JAVA_OPTS="-Djava.io.tmpdir=$HOME"

# pidfiles aren't used for upstart; this is for sysv users.
#LS_PIDFILE=/var/run/logstash.pid

# user id to be invoked as; for upstart: edit /etc/init/logstash.conf
LS_USER=root

# logstash logging
#LS_LOG_FILE=/var/log/logstash/logstash.log
#LS_USE_GC_LOGGING="true"

# logstash configuration directory
#LS_CONF_DIR=/etc/logstash/conf.d

# Open file limit; cannot be overridden in upstart
#LS_OPEN_FILES=16384

# Nice level
#LS_NICE=19

# If this is set to 1, then when `stop` is called, if the process has
# not exited within a reasonable time, SIGKILL will be sent next.
# The default behavior is to simply log a message "program stop failed; still running"
KILL_ON_STOP_TIMEOUT=0

And what do the Logstash logs contain after you've changed LS_OPTS and restarted Logstash? You might also want to check whether Logstash is actually listening on port 514. Use e.g. netstat for that.
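For example (ss is the iproute2 replacement for netstat; the exact flags here are a sketch, not from the thread):

```shell
# List TCP/UDP listening sockets and check for port 514.
# Run as root and add -p to see which process owns the socket.
ss -lntu | grep ':514 ' || echo "nothing listening on port 514"
```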

I checked with "netstat", and port 514 is listening. Also, when I look with "tcpdump", the syslog messages are arriving; I can see that. My problem is the parsing, but I don't know why our config doesn't work.

Are the messages actually reaching Logstash? If you disable the elasticsearch output for now to simplify the system, are you getting output to stdout (probably connected to /var/log/logstash/logstash.stdout or similar)? What if you re-enable the elasticsearch output?

There are a lot of logs, but why doesn't Kibana show them?

That's what I'm trying to help you figure out, but if you don't answer my questions I can't help you.

Yes, the messages are reaching Logstash. You said to "disable the elasticsearch output", which means the output section of the logstash.conf file. Right now it is:

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

Do you mean changing it like this?

output {
  stdout { codec => rubydebug }
}

and after that, try Kibana again?

Or should I just disable the elasticsearch service, like this?
systemctl disable elasticsearch