Collecting from multiple sources


Kind of new to all this, so hang in there.

ELK running on CentOS 7.

Winlogbeat is sending logs from several Windows servers to ELK.

When I try to index the logstash (rsyslog) data, I don't see anything.

here is the input:
input {
  beats {
    port => 5044
  }
  udp {
    host => ""
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}

Here is the output:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}

I can see winlogbeat data in Kibana UNTIL I add the UDP section to the inputs and the if [type] == "rsyslog" conditional to the outputs.

The rsyslog traffic is shipped to the ELK host, which runs rsyslog. I have verified that the ASA firewall logs are getting to the ELK stack. If I look at the indices I see:

[root@brt1-log01 rsyslog.d]# curl -XGET http://localhost:9200/_cat/indices
yellow open winlogbeat-2017.01.17 5 1 111149 0 136.9mb 136.9mb
yellow open winlogbeat-2017.01.18 5 1 341404 0 386.2mb 386.2mb
yellow open .kibana 1 1 104 1 104.7kb 104.7kb
yellow open %{[@metadata][beat]}-2017.01.18 5 1 19877 0 3.9mb 3.9mb
yellow open logstash-2017.01.18 5 1 0 0 737b 737b

The last entry is the rsyslog stuff, I assume. The last two numbers are incrementing.
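On the %{[@metadata][beat]}-2017.01.18 index: when a Logstash sprintf reference points at a field the event does not carry, the reference is written out as literal text. rsyslog events have no [@metadata][beat], so if they reach the beats-oriented elasticsearch output they land in that literally-named index. A sketch of one way to keep the two streams apart with an if/else (hosts copied from the config above; the rsyslog branch falls back to the plugin's default logstash-* index):

```
output {
  if [type] == "rsyslog" {
    elasticsearch {
      # No index option: uses the default logstash-%{+YYYY.MM.dd} index
      hosts => ["localhost:9200"]
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  }
}
```

With the else branch in place, rsyslog events can no longer reach the beats output, so the literal %{[@metadata][beat]} index should stop growing.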

What am I doing wrong?

-Mike M

Your input config doesn't need the host line; it defines which IP Logstash listens on. If you have multiple NICs you might need it, but I doubt it.

To see if the syslog messages are really arriving, do a:

tcpdump -n -A src x.x.x.x

That will show the ASCII text of the syslog messages.

If I tail the rsyslog log file I can see entries coming from the ASA firewall, but I can't seem to see them in Elasticsearch or Kibana.

Can you sanitize a copy of a tcpdump log or two and post it here?

Also, what does your ASA logging config section look like?


If I can see the entries from the ASA in /var/log/remote-hosts, can I not be convinced that the logs are getting there?
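Worth noting: entries in /var/log/remote-hosts only prove that rsyslog received the ASA messages, not that rsyslog relayed them on to Logstash's UDP input on 10514. If that relay is the intent, a minimal forwarding rule would look something like this (hypothetical file name; a single @ means UDP):

```
# /etc/rsyslog.d/60-logstash.conf  (hypothetical name)
# Relay everything to the Logstash UDP input; one @ = UDP, @@ = TCP.
*.* @127.0.0.1:10514
```

Since the Logstash input uses codec => "json", rsyslog would also need a JSON output template on this action; plain syslog lines would otherwise be tagged with a JSON parse failure.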

i think I need to:

  1. Verify logstash is getting them
  2. Verify that they are then getting to elasticsearch

Correct? If so, How do I do that?
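For step 1, one standard trick is to temporarily add a stdout output to the pipeline, so every event Logstash ingests is printed to the console it is running on:

```
output {
  stdout { codec => rubydebug }
}
```

For step 2, the document-count API shows whether anything is being written, e.g. curl -XGET 'http://localhost:9200/logstash-*/_count?pretty' (the logstash-* pattern here assumes the rsyslog events end up in the default index).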

Thanks for the help so far
-Mike M

Think I am making some progress, at least on the troubleshooting front.

When I start Logstash as a foreground process using:

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/02-beats-input.conf --verbose

I get:
Beats inputs: Starting input listener {:address=>"", :level=>:info}
Starting pipeline {:id=>"base", :pipeline_workers=>2, :batch_size=>125, :batch_delay=>5, :max_inflight=>250, :level=>:info}
Pipeline started {:level=>:info}
Logstash startup completed
Starting UDP listener {:address=>"", :level=>:info}

So, it appears that the UDP pipeline on 10514 never starts. How can I figure out why? netstat shows:
netstat -na | grep 10514
udp 0 0*

Does the above info get anyone closer to figuring out why this is not working?

-Mike M


Running Logstash with --debug shows the rsyslog data:
output received {:event=>{"@timestamp"=>"2017-01-20T13:40:09.000Z", "@version"=>"1", "message"=>" Deny udp src inside: dst backup: by access-group "acl_in" [0x0, 0x0]", "sysloghost"=>"", "severity"=>"warning", "facility"=>"local7", "programname"=>"%ASA-4-106023", "procid"=>"-", "type"=>"rsyslog", "host"=>""}, :level=>:debug, :file=>"(eval)", :line=>"22", :method=>"output_func"}

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.