Good evening. This is my first post here, so forgive me if I miss any conventions. I'll try to be as concise as possible:
Issue:
I am not able to get Logstash to ingest syslog messages from servers sending via plain syslog.
I currently have some servers shipping logs with Filebeat, and I can see those events stored properly.
What I've tried:
- I tried different ports, both TCP and UDP, in the Logstash config.
- I've verified nothing else is listening on 514 TCP/UDP when Logstash is stopped.
- I've tried the syslog input as well as the plain TCP/UDP inputs.
- I've set root as the user that launches Logstash in /etc/default/logstash.
- I've used setcap to allow Java to bind privileged ports (in the case of UDP I just can't get it to listen).
- I've run packet captures and I'm seeing all the inbound traffic on 514, both TCP and UDP, from different hosts.
- I've set LS_JAVA_OPTS="-Djava.net.preferIPv4Stack=true" in /etc/default/logstash.
- I've tried adding and removing the elasticsearch output across my conf.d files.
There's probably more, as it's been a few days now, but my elbows hurt, so I'm trying to wrap this up...
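For reference, the relevant lines in my /etc/default/logstash currently look roughly like this (quoting from memory, so the exact values may be slightly off):

```
# /etc/default/logstash (Debian package defaults file)
LS_USER=root
LS_GROUP=root
LS_JAVA_OPTS="-Djava.net.preferIPv4Stack=true"
```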
My versions:
Logstash: 2.3
Java: java-1.8.0-openjdk-amd64
OS: Debian Jessie (8.6)
What I'm hoping to get out of this:
I am trying to compare ELK to Splunk, and I found that ELK worked fine on a different build but slowly fell apart. I was using this tutorial for the initial build. It worked fine, but when I tried to reproduce it on another machine it never worked properly again.
While Splunk is expensive, I've found it considerably easier to set up. I'm hoping this is just something simple I'm missing. I'm relatively well versed in Linux, but by no means an expert, so any help would be appreciated.
I have a growing feeling it's somehow related to the elasticsearch output...
Some errors I encountered along the way:
When setting up the UDP input I get the following error, and I have yet to get the system listening on 514/UDP:
{:timestamp=>"2016-12-09T16:22:45.607000-0800", :message=>"UDP listener died", :exception=>#<IOError: closed stream>, :backtrace=>["org/jruby/RubyIO.java:3682:in `select'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:77:in `udp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.5/lib/logstash/inputs/udp.rb:50:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:334:in `inputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:328:in `start_input'"], :level=>:warn}
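One workaround I'm considering, in case this is purely a privileged-port problem: move the listener to an unprivileged port such as 5514 (a number I picked arbitrarily) and, if that works, redirect 514 to it at the firewall. I also noticed my current syslog input never sets type, so the `if [type] == "syslog"` conditional in my filter probably never matches; setting it on the input should address that. A rough, untested sketch:

```
input {
  syslog {
    port => 5514          # unprivileged port, no setcap needed
    type => "syslog"      # so the filter conditional actually matches
  }
}
```

And then on the Logstash host (as root), something like `iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 5514` (plus the same rule for -p tcp). Treat all of this as a sketch; I haven't verified it end to end.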
My config files:
02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
10-syslog-filter.conf
input {
  syslog {
    port => 514
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
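On my hunch about the elasticsearch output: since Logstash concatenates every file in conf.d into a single pipeline, I suspect my two output blocks mean every event is sent to Elasticsearch twice, and syslog events, which carry no [@metadata][beat] field, end up in a literal %{[@metadata][beat]}-... index. If I'm reading that right, replacing both output blocks with a single conditional one might be the fix. A rough sketch of what I mean (the syslog-* index name is my own choice):

```
output {
  if [@metadata][beat] {
    # events from the beats input keep their per-beat index
    elasticsearch {
      hosts => ["localhost:9200"]
      sniffing => true
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  } else {
    # everything else (e.g. syslog) goes to its own index
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}
```

Does that reasoning sound right, or am I chasing the wrong thing?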
That's about it. Thanks!
Chris