Rsyslog logstash input

I installed an ELK stack on CentOS 7 following a tutorial. This all worked fine, and I can see and search the logs of the localhost server itself in the Kibana interface.

The purpose of the server is to act as a centralized remote log server for rsyslog. So I used one of my regular machines, installing and configuring rsyslog there to use UDP port 514:

/etc/rsyslog.conf

# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514
$ModLoad imudp
$UDPServerRun 514
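
The forwarding rule on that machine points at the ELK server (the same line appears again further down, where I later change its port):

*.* @@192.168.100.109:514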

With tcpdump on the ELK server I can see the packets coming in:

tcpdump -i p3p2 | grep ekgen7
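
A more targeted capture would filter directly on the syslog port:

tcpdump -ni p3p2 udp port 514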

The guide says to create these template files on the ELK/logstash server (ekgen9):

vim /etc/rsyslog.d/70-output.conf

# This line sends all lines to defined IP address at port 10514
# using the json-template format.

*.*                         @127.0.0.1:10514;json-template

vim /etc/rsyslog.d/01-json-template.conf

template(name="json-template"
  type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"@version\":\"1")
      constant(value="\",\"message\":\"")     property(name="msg" format="json")
      constant(value="\",\"sysloghost\":\"")  property(name="hostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"programname\":\"") property(name="programname")
      constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}
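
A quick way to generate a test message on the sending host should be the logger utility, which injects a message into the local syslog:

logger -p user.info "test message from ekgen7"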

The problem: there are still no messages shown in Kibana.

I might have a problem with the logstash filter here. Is that the correct port (10514)? Where does this "port shift" occur? I am not sure how the ports belonging to logstash are chosen.

curl -XGET 'http://localhost:9200/logstash-*/_search?q=*&pretty'

doesn't show anything about rsyslog. I only see the following (and, if I read the "total" : 0 shards correctly, no logstash-* index exists at all):

{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
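
As far as I understand, the indices that actually exist can also be listed directly:

curl -XGET 'http://localhost:9200/_cat/indices?v'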

Another web page suggested creating another logstash.conf, defining an input for port 10514:

vim /etc/logstash/conf.d/logstash.conf

input {
  udp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}

# The filter pipeline stays empty here, no formatting is done.
filter { }

# Every single log will be forwarded to Elasticsearch. If you are using another port, you should specify it here.
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "127.0.0.1:9200" ]
    }
  }
}

Is 9200 the default port for Elasticsearch? At least that is what I defined in an Elasticsearch output filter, following the first guide:

vim /etc/logstash/conf.d/30-Elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

But that only seems to pick up the localhost logs.
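
If I understand it correctly, all files under /etc/logstash/conf.d are concatenated into a single pipeline, so both output blocks apply to every event. At least the combined configuration can be syntax-checked with:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t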

I am pretty sure that I am making a simple/stupid mistake here. I am entirely new to ELK - not to system administration, but up to now I have only dealt with local logs.

For security reasons I want to establish this centralized log server, and I am all in favour of open source, so I chose ELK.

Any help is highly appreciated.

Best wishes,

Sven

I would try removing the UDP host option. It could be that it is listening for localhost events only. The default for host is 0.0.0.0, so if you leave it out I believe it will listen on all interfaces on that port. I could be mistaken though, and I don't understand exactly what happens if you set this to localhost.

Most configurations I see never set the host field.

  udp {
    port => 10514
    codec => "json"
    type => "rsyslog"
  }

Thank you for your input, aaron.

I removed the host entry, but that didn't solve the problem. Ironically, the localhost logs in Kibana do tell me something:

[System][syslog] logstash - [2021-11-09T17:55:42,947][WARN ][logstash.outputs.Elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::Elasticsearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

So either the necessary service isn't running, although I did start them in the correct order:

systemctl start logstash; systemctl start elasticsearch; systemctl start nginx; systemctl start kibana

Or the port is wrong. This is the tutorial I was following - which used the aforementioned host => 127.0.0.1.

I suspect that I got the ports wrong.

netstat -na | grep 10514
udp 213504 0 0.0.0.0:10514 0.0.0.0:*

suggests that the listening port is 10514, but the "default" rsyslog port is 514.

I changed rsyslog on the server whose logs I want to receive to 10514, too,

/etc/rsyslog.conf
*.* @@192.168.100.109:10514

but that didn't help.
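
(One thing I am not sure about: as far as I know, a single @ means forwarding via UDP and @@ means TCP in rsyslog, so to match the Logstash udp input the line would presumably have to be

*.* @192.168.100.109:10514

but I don't know whether that matters here.)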

And I don't see the traffic in tcpdump:
tcpdump portrange 10514 -i p3p2 -v
tcpdump: listening on p3p2, link-type EN10MB (Ethernet), capture size 262144 bytes

The error above suggests that the output filter to Elasticsearch fails, so the port mismatch might be there. But I didn't choose the ports myself, so they should match the example:

lsof -i -P -n | grep LISTEN | grep 9600
java      14609      logstash  292u  IPv6 435497200      0t0  TCP 127.0.0.1:9600 (LISTEN)
lsof -i -P -n | grep LISTEN | grep 9200
java      14633 elasticsearch  572u  IPv6 435487824      0t0  TCP [::1]:9200 (LISTEN)

Changing everything (remote rsyslog UDP port, logstash json input) back to 514 at least gives me the reception of the rsyslog packets on the logging host:

tcpdump portrange 514 -i p3p2 -v

I fear I need more experience with this. Strangely, port 9200 is only listened on at ::1, i.e. IPv6; that might even explain the "Connection refused" for http://127.0.0.1:9200 above.

IPv4 seems to listen on port 9600:

netstat -na | grep 127.0.0.1
tcp        0      0 127.0.0.1:199           0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:6012          0.0.0.0:*               LISTEN     
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN     
udp        0      0 127.0.0.1:123           0.0.0.0:*                          
udp        0      0 127.0.0.1:659           0.0.0.0:*                          

I changed the port to 9600 in logstash.conf. Then I get this error after restarting all services:

"{"message":"all shards failed: [search_phase_execution_exception] all shards failed","statusCode":503,"error":"Service Unavailable"}"

It might be better to create a new topic in the Elasticsearch category for the underlying issues first. I'm not sure the Logstash category is a good place to troubleshoot those specific issues.

Ok, I can do that. I thought that logstash was the low-level part of the stack that picks up the logs.

Therefore, I suspected that the logstash filter wasn't picking up the rsyslog messages, and hence I put it in this section.

The latter error from the Elasticsearch output filter might of course suggest a different cause now.

Should I open a new topic, or can this entry be moved by the moderators?

Well, maybe I don't understand what's going on, but it appears you aren't sure whether the cluster is up and running correctly.

Is this all on the same host - Elasticsearch, Kibana, and Logstash? Or are they installed on different nodes? If it's all on the same host, then the default settings should work.

What do you get when you run systemctl status elasticsearch or view the logs? Any errors?
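
For example (assuming systemd, as on CentOS 7):

systemctl status elasticsearch
journalctl -u elasticsearch -e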

This is why I was suggesting creating another topic there - for the case that the cluster and services aren't up and running. A brand new topic, not a move.

The ELK stack is all installed on one node:

  • logstash
  • Elasticsearch
  • kibana
  • nginx

The other node is the first one whose logs I am trying to pick up via rsyslog.

In principle I have all the services running, and could search logs in the Kibana interface.

I do occasionally get the error "Kibana server is not ready yet" when restarting the stack. Another forum suggested running

curl -XDELETE http://localhost:9200/.kibana*

I am not sure what that deletes, and the method is not 100% reliable. After a few tries I get the Kibana interface, and can go to "Logs" to search the logs.

When I enter host.name:"ekgen7" in the search field, there are no results.

Can I check the Elasticsearch database by hand, to see if entries end up there? I still don't know whether the error lies with logstash picking up the rsyslog messages or with the output filter into the Elasticsearch database.

Displaying them in Kibana should then be straightforward, if the entries for host "ekgen7" are present in Elasticsearch.
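
If I read the json-template above correctly, the hostname ends up in the sysloghost field, so a query like this should find the entries, if they exist at all:

curl -XGET 'http://localhost:9200/logstash-*/_search?q=sysloghost:ekgen7&pretty'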

As mentioned, the syslog of "ekgen9", the host where the ELK stack is running, shows:

[System][syslog] logstash - [2021-11-10T14:16:30,658][ERROR][logstash.inputs.udp      ] UDP listener died {:exception=>#<Errno::EACCES: Permission denied - bind(2) for "0.0.0.0" port 514>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:213:in `bind'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:116:in `udp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:68:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:426:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:420:in `block in start_input'"]}

That is the strongest hint I have that the error lies with logstash: apparently it can't bind to port 514 on 0.0.0.0 - the port that host "ekgen7" sends its rsyslog messages to.

If you are running logstash as a system service, with systemd for example, it is running as the logstash user, which does not have privileges to bind to lower ports (below 1024), so you won't be able to bind to port 514.

You should bind to a higher port, for example 1514 or 5514, and configure your devices to send logs to this port.

If you can't change the ports on the devices, I would say it is better to listen with rsyslogd on the same server and configure a redirect to the Logstash port.
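
A minimal sketch of such a relay (the file name is just an example, and I am assuming Logstash keeps listening on UDP 10514 as in your config):

/etc/rsyslog.d/90-relay.conf

# rsyslogd runs as root, so it can bind the privileged port 514
module(load="imudp")
input(type="imudp" port="514")

# a single @ forwards via UDP to the local Logstash input
*.* @127.0.0.1:10514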

Thanks. I didn't think about that - and didn't realize that the services aren't running as root.

That's why the original guide used port 10514. I changed the port back to 10514, and this makes the bind error disappear.

Still, I can't see the rsyslog messages in the Elasticsearch/Kibana log queries.

Neither when using rsyslog with port 514 nor with port 10514 on the server whose logs I try to collect.

The first thing to do is to check whether Logstash receives logs from rsyslog at all. There are two ways to do this:

  1. By default Logstash exposes input, filter, and output metrics through the monitoring port 9600. In your case it looks like it binds to 127.0.0.1, so curl -XGET http://127.0.0.1:9600/_node/stats/pipelines should give you information on whether data arrived at the Logstash input.

  2. Change your output to stdout {} or to a file, for example as sketched below.
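
A minimal stdout output for debugging, temporarily replacing the elasticsearch block:

output {
  stdout { codec => rubydebug }
}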

Once you're sure that Logstash receives data from rsyslog, you can proceed with troubleshooting the connectivity to Elasticsearch.

A standard Logstash config will send to localhost on port 9200 (the Elasticsearch HTTP port), so if you're deploying the whole stack on a single machine, it should all work with a standard installation.
