Syslog not working

Hello, I get no logs from syslog on two different syslog servers.
First I tried port 514, but that port is not allowed, so I switched to port 5044 (a high port).

I've got this error message in logstash.log:
[2018-05-15T14:09:28,837][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Beats ssl=>false, host=>"0.0.0.0", port=>5044, id=>"cd84264b23308b5a3966a309b11ebd5fc938f081d8f60d0d1d04bf29f54cf8be", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0f48bb3e-d183-4713-9c7a-24947da85076", enable_metric=>true, charset=>"UTF-8">, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>4>
Error: event executor terminated
Exception: Java::JavaUtilConcurrent::RejectedExecutionException
Stack: io.netty.util.concurrent.SingleThreadEventExecutor.reject(io/netty/util/concurrent/SingleThreadEventExecutor.java:821)
io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(io/netty/util/concurrent/SingleThreadEventExecutor.java:327)
io.netty.util.concurrent.SingleThreadEventExecutor.addTask(io/netty/util/concurrent/SingleThreadEventExecutor.java:320)
io.netty.util.concurrent.SingleThreadEventExecutor.execute(io/netty/util/concurrent/SingleThreadEventExecutor.java:746)
io.netty.channel.AbstractChannel$AbstractUnsafe.register(io/netty/channel/AbstractChannel.java:479)
io.netty.channel.SingleThreadEventLoop.register(io/netty/channel/SingleThreadEventLoop.java:80)
io.netty.channel.SingleThreadEventLoop.register(io/netty/channel/SingleThreadEventLoop.java:74)
io.netty.channel.MultithreadEventLoopGroup.register(io/netty/channel/MultithreadEventLoopGroup.java:86)
io.netty.bootstrap.AbstractBootstrap.initAndRegister(io/netty/bootstrap/AbstractBootstrap.java:331)
io.netty.bootstrap.AbstractBootstrap.doBind(io/netty/bootstrap/AbstractBootstrap.java:282)
io.netty.bootstrap.AbstractBootstrap.bind(io/netty/bootstrap/AbstractBootstrap.java:278)
io.netty.bootstrap.AbstractBootstrap.bind(io/netty/bootstrap/AbstractBootstrap.java:260)
org.logstash.beats.Server.listen(org/logstash/beats/Server.java:57)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:438)
org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:302)
opt.bitnami.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_beats_minus_5_dot_0_dot_13_minus_java.lib.logstash.inputs.beats.run(/opt/bitnami/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.0.13-java/lib/logstash/inputs/beats.rb:198)
RUBY.inputworker(/opt/bitnami/logstash/logstash-core/lib/logstash/pipeline.rb:514)
opt.bitnami.logstash.logstash_minus_core.lib.logstash.pipeline.block in start_input(/opt/bitnami/logstash/logstash-core/lib/logstash/pipeline.rb:507)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:289)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:246)
java.lang.Thread.run(java/lang/Thread.java:748)

My syslog.conf in Logstash:
input {
  tcp {
    port => 5044
    type => syslog
  }
  udp {
    port => 5044
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
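As a sanity check on the grok pattern itself, the same field split can be sketched in plain Python with a regular expression that mirrors the pattern's named captures. This is only an illustration: the real grok definitions (SYSLOGTIMESTAMP, SYSLOGHOST, DATA) are more permissive than these toy equivalents, and the sample line is hypothetical.

```python
import re

# Rough Python mirror of the grok pattern above. The real grok library
# patterns are more permissive; treat this only as an illustration of
# the intended field boundaries.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[^\[\s:]+)"
    r"(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

# Hypothetical BSD-syslog sample line.
line = "May 16 14:29:52 debian-host sshd[1234]: Accepted password for root"
m = SYSLOG_RE.match(line)
print(m.groupdict())
```

Running this shows which substring each field would receive, which makes it easier to see why unescaped `[` and `]` around the pid break the match.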

That's pretty strange. Can anybody help me?

It looks like you already have a beats input that listens on port 5044.

Yes, that's right, the Beats plugin uses the same port.
I have changed the port to 5500. Now I have no error messages, but I can't telnet from my local PC to this port. There is no firewall active on my Debian system.
That's pretty strange :slight_smile:
Does anybody have an idea?

Does Logstash start up fine? Have you looked in the logs? Have you used netstat to check whether Logstash is actually listening on port 5500?
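If netstat or ss is not at hand, the same listening-port check can be sketched in Python by simply attempting a TCP connection. The host and port below are the ones discussed in this thread, but note this only proves something is accepting connections on that port, not that the process behind it is Logstash.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check against the Logstash host from this thread.
print(port_is_open("127.0.0.1", 5500))
```

If this prints False on the Logstash host itself, the input never bound the port; if it prints True locally but telnet from another machine fails, look at the network path in between.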

When I start a tcpdump, I can see the traffic from the syslog client.

The Logstash log looks like this after a restart:

[2018-05-16T14:29:52,400][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-05-16T14:29:52,401][WARN ][io.netty.channel.AbstractChannel] Force-closing a channel whose registration task was not accepted by an event loop: [id: 0x39f52092]
java.util.concurrent.RejectedExecutionException: event executor terminated
at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:821) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:327) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:320) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:746) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:479) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:80) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:74) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:331) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.bootstrap.AbstractBootstrap.doBind(AbstractBootstrap.java:282) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:278) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at io.netty.bootstrap.AbstractBootstrap.bind(AbstractBootstrap.java:260) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at org.logstash.beats.Server.listen(Server.java:57) [logstash-input-beats-5.0.13.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_161]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_161]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_161]
at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:438) [jruby-complete-9.1.13.0.jar:?]
at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:302) [jruby-complete-9.1.13.0.jar:?]
at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:36) [jruby-complete-9.1.13.0.jar:?]
at opt.bitnami.logstash.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_beats_minus_5_dot_0_dot_13_minus_java.lib.logstash.inputs.beats.RUBY$method$run$0(/opt/bitnami/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.0.13-java/lib/logstash/inputs/beats.rb:198) [jruby-complete-9.1.13.0.jar:?]
at opt.bitnami.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$method$inputworker$0(/opt/bitnami/logstash/logstash-core/lib/logstash/pipeline.rb:514) [jruby-complete-9.1.13.0.jar:?]
at opt.bitnami.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$method$inputworker$0$VARARGS(/opt/bitnami/logstash/logstash-core/lib/logstash/pipeline.rb) [jruby-complete-9.1.13.0.jar:?]
at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:77) [jruby-

Sorry, Logstash and the whole Elastic Stack look nice, but it's too complex.
Splunk is very easy to use!
I hate Logstash. I've been working on it for 3 days now and no one knows why it does not work.

It is all a matter of experience. I have the same sentiment about Splunk... it is way more difficult than Elastic. However I feel that way primarily because I have more experience with one over the other. Everything has a learning curve, and it can be frustrating until it finally "clicks".

I am actually glad that's the case. If everything was easy, there would be no value in mastering it, and we in IT would all be paid a lot less than we are. Embrace difficulty... it is how you set yourself apart from average.

There are people out there that can help in a number of ways. Training, consulting, etc. If you find the right partner to work with I am confident you will find that you can achieve with Elastic everything you can with Splunk... and more... and for a lot less money.

Thank you for your answer. Today I was able to fix the error, and I am now receiving syslog messages in Kibana. At the moment I am trying to break everything into fields with the grok filter. Maybe you can help me with the following log:

<14>1 2018-05-19T13:38:20.147+02:00 localhost XMS - AdminAudit [@18060 app.name="ApplePush" client.ip="172.22.20.53" device.id="F17xxx" device.imei="35 000000 009142 2" device.ownerName="oejo@company.com" device.serial="F2LKTJC6BHFY7" device.userDefined.id="" event.action="PUSH_SOFTWARE_INVENTORY_REQUEST" event.status="DONE" ew.session.id="1818201999-31034" http.user-agent="WorxMailAppStore/10.8.20.14 CFNetwork/897.15 Darwin/17.5.0" push.device="2793" push.id="63715" push.info="[UID=63715,usr=heg@company.com,dev=2793]" push.target="ApplePushTarget[os=iOS, device=2793, user=heg@company.com, type=DEVICE]" push.user="heg@company.com" session.id="38918DDF9400EB5C" user.id=""] {"source":"PUSH_SERVICE","deviceUser":"heg@company.com","deviceIMEI":"35 000000 609917 1"}

Normally I should be able to sort this log into separate fields, right?

For a better starting point with Syslog, take a look at this...

Then for examples on how to further enrich data look at this...


It contains a step-by-step walk through of going from a raw log to useful dashboards.

For a far more advanced example of what is possible, you can dig through my ElastiFlow solution...

Rob

Robert Cowart (rob@koiossian.com)
www.koiossian.com
True Turnkey SOLUTIONS for the Elastic Stack


Hi Rob,
Can't I use the grok filter for syslog?

As I have read, you also speak very good German, Robert :slightly_smiling_face:
Greetings from near Munich :slight_smile:

Sure you can. But the grok filter alone is unlikely to get you where you want to be. For example... the log that you have above will probably benefit from (at a minimum)... grok, syslog_pri, kv

You should really walk through the PDF slides in the eslog_tutorial link above. The things you should be thinking about are...

  1. Handle the basic syslog stuff (priority, timestamp, hostname/ip, etc)
  2. Break the message up into its various parts - I see some key-value pairs and some JSON.
  3. Use the various filters available to turn these parts into document fields.
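The three steps above can be sketched outside of Logstash too. The snippet below imitates, in plain Python, roughly what the syslog_pri, kv, and json filters would each contribute for a shortened version of the XMS sample log. The line and field names are illustrative only, and the toy kv regex does not cope with brackets inside quoted values the way the real kv filter can.

```python
import json
import re

# Shortened structured-syslog line modeled on the XMS AdminAudit event
# above: the [...] block holds key="value" pairs, the tail is JSON.
line = ('<14>1 2018-05-19T13:38:20.147+02:00 localhost XMS - AdminAudit '
        '[@18060 event.action="PUSH_SOFTWARE_INVENTORY_REQUEST" event.status="DONE"] '
        '{"source":"PUSH_SERVICE","deviceUser":"heg@company.com"}')

# 1. Basic syslog header: decode the priority (what syslog_pri does).
pri = int(re.match(r"<(\d+)>", line).group(1))
facility, severity = divmod(pri, 8)   # <14> -> facility 1 (user), severity 6 (info)

# 2. Key-value pairs from the bracketed section (what the kv filter does).
#    Note: breaks if a quoted value itself contains ']', unlike the real filter.
kv_section = re.search(r"\[@\d+ ([^\]]*)\]", line).group(1)
fields = dict(re.findall(r'([\w.\-]+)="([^"]*)"', kv_section))

# 3. The trailing JSON payload (what the json filter does).
payload = json.loads(line[line.index("{"):])

print(facility, severity, fields["event.action"], payload["deviceUser"])
```

Each step produces a separate dict of fields; in Logstash these would all land as fields on the same event document.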

The tutorial walks through this stuff. I created it because I went through the same beginner frustrations that you are experiencing, and I wanted to help others learn it more quickly. Usually I would present it in a meetup format, walking through each step. Everything you need is there... the Logstash pipelines (at each step), the index templates, the Kibana dashboards, sample data, etc.

If you spend a few hours really digesting that material, you should see that your log is not as intimidating as it might first appear to be.

And yes, I can speak German too. But I am American and prefer to write in English :wink:

OK, I understand, but I have not fully understood the whole system yet. Your product is still a bit over my head.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.