Logstash not sending syslog to elasticsearch

Can you post the config that generated those logs, along with the command you used to start it up?

Asking because I don't see TCP in the logs, and the config file the logs say is being used is not the same as the one in your first post: /etc/logstash/logstash-sample.conf vs. /etc/logstash/logstash.conf.

What I see from the logs is that Logstash is listening on UDP port 5144 on all IPs, nothing is being read, so nothing is being output.
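
If it helps, a quick way to double-check from the Logstash host that the listener really is up is something like this (assuming a Linux box with the ss tool from iproute2; the port comes from your config):

ss -lunp | grep 5144    # -l listening sockets, -u UDP, -n numeric, -p owning process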

Aaron;

What you are seeing is the correct config file (/etc/logstash/logstash-sample.conf). I didn't put the full name in the thread because it isn't that important, or at least I didn't think it was. My full configuration file is below.

input {
    udp {
        port => 5144
        type => syslog
    }
    tcp {
        port => 5144
        type => syslog
    }
}

output {
     stdout { }
}

The filename isn't important, but we have seen people start the wrong .conf file multiple times, so that's why I asked.

It looks like only UDP is starting and no data is being received. You also have a TCP error, but the part of the log that says why TCP didn't start may be missing.

The systems we are expecting syslogs from are only configured to send via UDP. I did not open the firewall to receive traffic on port 514.

So knowing that UDP is starting is a good thing, as that is the important port.

@aaron-nimocks
Here are some of the TCP entries.

[2021-11-12T11:06:38,833][DEBUG][org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Codec
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Input
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype org.jruby.RubyBasicObject -> org.jruby.RubyObject
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype java.lang.Cloneable -> org.jruby.RubyBasicObject
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype org.jruby.runtime.builtin.IRubyObject -> org.jruby.RubyBasicObject
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype java.io.Serializable -> org.jruby.RubyBasicObject
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype java.lang.Comparable -> org.jruby.RubyBasicObject
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype org.jruby.runtime.marshal.CoreObjectType -> org.jruby.RubyBasicObject
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype org.jruby.runtime.builtin.InstanceVariables -> org.jruby.RubyBasicObject
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype org.jruby.runtime.builtin.InternalVariables -> org.jruby.RubyBasicObject
[2021-11-12T11:06:38,834][DEBUG][org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Output
[2021-11-12T11:06:38,835][DEBUG][org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Metric -> co.elastic.logstash.api.NamespacedMetric
[2021-11-12T11:06:38,835][DEBUG][org.reflections.Reflections] expanded subtype java.security.SecureClassLoader -> java.net.URLClassLoader
[2021-11-12T11:06:38,835][DEBUG][org.reflections.Reflections] expanded subtype java.lang.ClassLoader -> java.security.SecureClassLoader
[2021-11-12T11:06:38,835][DEBUG][org.reflections.Reflections] expanded subtype java.io.Closeable -> java.net.URLClassLoader
[2021-11-12T11:06:38,835][DEBUG][org.reflections.Reflections] expanded subtype java.lang.AutoCloseable -> java.io.Closeable
[2021-11-12T11:06:38,835][DEBUG][org.reflections.Reflections] expanded subtype java.lang.Comparable -> java.lang.Enum
[2021-11-12T11:06:38,835][DEBUG][org.reflections.Reflections] expanded subtype java.io.Serializable -> java.lang.Enum
[2021-11-12T11:06:38,835][DEBUG][org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Filter
[2021-11-12T11:06:38,855][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2021-11-12T11:06:39,233][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"udp", :type=>"input", :class=>LogStash::Inputs::Udp}
[2021-11-12T11:06:39,269][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"plain", :type=>"codec", :class=>LogStash::Codecs::Plain}
[2021-11-12T11:06:39,324][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@id = "plain_932e7631-af6e-493c-89f9-6a018104015f"
[2021-11-12T11:06:39,325][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@enable_metric = true
[2021-11-12T11:06:39,325][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@charset = "UTF-8"
[2021-11-12T11:06:39,339][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@type = "syslog"
[2021-11-12T11:06:39,339][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@port = 5144
[2021-11-12T11:06:39,339][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@id = "f044f0a6df1315f16877c38bc38258d7ef1f3f8c664abc1c264c9ab18f7904fa"
[2021-11-12T11:06:39,339][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@enable_metric = true
[2021-11-12T11:06:39,342][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@codec = <LogStash::Codecs::Plain id=>"plain_932e7631-af6e-493c-89f9-6a018104015f", enable_metric=>true, charset=>"UTF-8">
[2021-11-12T11:06:39,342][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@add_field = {}
[2021-11-12T11:06:39,342][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@host = "0.0.0.0"
[2021-11-12T11:06:39,343][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@buffer_size = 65536
[2021-11-12T11:06:39,343][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@workers = 2
[2021-11-12T11:06:39,343][DEBUG][logstash.inputs.udp      ] config LogStash::Inputs::Udp/@queue_size = 2000
[2021-11-12T11:06:39,393][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"tcp", :type=>"input", :class=>LogStash::Inputs::Tcp}
[2021-11-12T11:06:39,401][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"line", :type=>"codec", :class=>LogStash::Codecs::Line}
[2021-11-12T11:06:39,406][DEBUG][logstash.codecs.line     ] config LogStash::Codecs::Line/@id = "line_59560cc5-937d-43f8-944b-3cefca64eee0"
[2021-11-12T11:06:39,406][DEBUG][logstash.codecs.line     ] config LogStash::Codecs::Line/@enable_metric = true
[2021-11-12T11:06:39,406][DEBUG][logstash.codecs.line     ] config LogStash::Codecs::Line/@charset = "UTF-8"
[2021-11-12T11:06:39,406][DEBUG][logstash.codecs.line     ] config LogStash::Codecs::Line/@delimiter = "\n"
[2021-11-12T11:06:39,413][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@type = "syslog"
[2021-11-12T11:06:39,413][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@port = 5144
[2021-11-12T11:06:39,413][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@id = "417bb732803d9aa4e1d43fe64c324852fb9cded4e829544a1650518cb0dbb31d"
[2021-11-12T11:06:39,413][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@enable_metric = true
[2021-11-12T11:06:39,413][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@codec = <LogStash::Codecs::Line id=>"line_59560cc5-937d-43f8-944b-3cefca64eee0", enable_metric=>true, charset=>"UTF-8", delimiter=>"\n">
[2021-11-12T11:06:39,414][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@add_field = {}
[2021-11-12T11:06:39,414][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@host = "0.0.0.0"
[2021-11-12T11:06:39,414][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@mode = "server"
[2021-11-12T11:06:39,414][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@proxy_protocol = false
[2021-11-12T11:06:39,414][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@ssl_enable = false
[2021-11-12T11:06:39,414][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@ssl_verify = true
[2021-11-12T11:06:39,414][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@ssl_key_passphrase = <password>
[2021-11-12T11:06:39,415][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@ssl_extra_chain_certs = []
[2021-11-12T11:06:39,415][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@ssl_certificate_authorities = []
[2021-11-12T11:06:39,415][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@tcp_keep_alive = false
[2021-11-12T11:06:39,415][DEBUG][logstash.inputs.tcp      ] config LogStash::Inputs::Tcp/@dns_reverse_lookup_enabled = true
[2021-11-12T11:06:39,421][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"stdout", :type=>"output", :class=>LogStash::Outputs::Stdout}
[2021-11-12T11:06:39,428][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"rubydebug", :type=>"codec", :class=>LogStash::Codecs::RubyDebug}
[2021-11-12T11:06:39,435][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@id = "rubydebug_9b24f5a3-92c3-49a7-b7f7-d03071dcc1a0"
[2021-11-12T11:06:39,435][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@enable_metric = true
[2021-11-12T11:06:39,435][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@metadata = false

I'd look for the ones with WARN or ERROR instead of DEBUG.

I will keep looking for WARN and ERROR entries in the log. Should I increase the logging level in the .yml file to bring those to the forefront?

I have noticed that I am not getting the logging I was before I crashed my Logstash server.

I wouldn't have debug turned on initially if you're just looking for what's going on.
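
For reference, if you do need to change it later, the level is controlled by the log.level setting in /etc/logstash/logstash.yml (package-install path), or with --log.level on the command line; something like:

log.level: debug    # accepted values: fatal, error, warn, info (the default), debug, trace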

Ok well debug is disabled at the moment. I will look for anything that mentions error or warn.

The only warning message when I restart the logstash service is: Ignoring the 'pipelines.yml' file because modules or command line options are specified.

Then it successfully starts the API.
Then it starts the specified pipeline.

I'm checking /var/log/messages for any other WARN or ERROR messages that pertain to Logstash.

If you aren't getting any warnings/errors, then I would question whether data is being sent on those ports. Not sure what else to check.

I confirmed yesterday with the owner of the internal firewall and the NTP server that data is being sent. I also confirmed, using tcpdump -n -v host host_ip_address, that traffic was being sent and received on the Logstash server.
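
(For a more targeted capture, filtering on the actual listening port rules out traffic that reaches the box on a different port; the interface and port below are assumptions based on the config above.)

sudo tcpdump -n -v -i any udp port 5144    # only show datagrams addressed to the port Logstash is bound to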

Thank you for your help with troubleshooting this. It is appreciated.


Okay I may be out of my depth. These are the things I usually check for.

Usually for syslog I would do something like this

input {
    tcp {
        port => 5144
        type => syslog
    }
    udp {
        port => 5144
        type => syslog
    }
}

output {
    stdout { codec => rubydebug }
}

The codec => rubydebug pretty-prints each parsed event so that you can see what is happening to the data.

To actually see the output properly, you need to stop your Logstash service and do this (assuming you are using Linux; otherwise find the Logstash executable and replace the path).
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/config_file.conf

This will give you a lot of details and show each line of data ONLY for that config file.
It will show you the incoming syslog line and the corresponding parsing of the line to be inserted into ES.
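
To give an idea of what to look for, an event printed by the rubydebug codec looks roughly like the sketch below (the values are made up for illustration, and the exact fields depend on your Logstash version, inputs, and filters):

{
       "message" => "<14>Nov 15 08:05:00 myhost myapp: something happened",
      "@version" => "1",
    "@timestamp" => 2021-11-15T13:05:00.000Z,
          "host" => "10.1.2.3",
          "type" => "syslog"
}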

--config.test_and_exit is only for syntax checks, not for running in real time. It doesn't tell you what happens when syslog data comes in, or whether it is being processed at all. Running with -f is especially useful once you start creating the filter { } section.
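
For completeness, the syntax-only check would be run like this (same binary and config path as above):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/config_file.conf --config.test_and_exit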

If you don't see any incoming lines when you are running with -f then you have a firewall issue.
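
On a firewalld-based distro (RHEL/CentOS), checking and opening the port would look something like the sketch below; this assumes firewalld is in use and that 5144/udp is the port from your config:

sudo firewall-cmd --list-ports                      # see which ports are currently open
sudo firewall-cmd --add-port=5144/udp --permanent   # open the syslog port for UDP
sudo firewall-cmd --reload                          # apply the permanent rule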

Secondly, you need to be careful with /etc/logstash/pipelines.yml.
The default is

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"

This means ALL the .conf files are active and events can cascade from one conf file into another, which can be a big mess.
I usually comment those lines out and do something like this

- pipeline.id: process1
  path.config: "/etc/logstash/conf.d/process1.conf"
- pipeline.id: process2
  path.config: "/etc/logstash/conf.d/process2.conf"

That way, data read in process1.conf doesn't cascade into process2.conf, as it can with the default config.

Regards,

Michael


Michael;

Thank you for the detailed response. That is going to help me big time. After I go through all the steps I will let you know if I am any further ahead.

@michaelv
After changing my output to stdout { codec => rubydebug }, stopping the Logstash service, and running against the conf file as you directed, I get the output below in my Linux console.

[INFO ] 2021-11-15 08:00:53.087 [Converge PipelineAction::Create<main>] Reflections - Reflections took 24 ms to scan 1 urls, producing 24 keys and 48 values
[WARN ] 2021-11-15 08:00:53.588 [Converge PipelineAction::Create<main>] udp - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2021-11-15 08:00:53.821 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/logstash-sample.conf"], :thread=>"#<Thread:0x6fecf4c8 run>"}
[INFO ] 2021-11-15 08:00:54.298 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.47}
[INFO ] 2021-11-15 08:00:54.394 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-11-15 08:00:54.401 [[main]<tcp] tcp - Starting tcp input listener {:address=>"0.0.0.0:5144", :ssl_enable=>false}
[INFO ] 2021-11-15 08:00:54.449 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2021-11-15 08:00:54.489 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:5144"}
[INFO ] 2021-11-15 08:00:54.519 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5144", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}

And it stays there until I stop the command with Ctrl+C. Does that mean I am looking at a firewall issue on my Logstash server, or on the path to the Elasticsearch server?

It means that Logstash is not reading any data on that port. Could be firewall, port blocked for user, no data flowing, etc.

If your output is stdout and not elasticsearch, then that takes the Elasticsearch server out of the equation.

Those last two log entries basically say that Logstash is listening on port 5144 on any IP (0.0.0.0). If data were coming through that port, you would see the events printed on the screen below that.

Alright, I understand that.
So as a test I sent an echo from my Kibana server to Logstash and saw it come through on the Logstash command line while running /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf. That worked, so now I know what I should see.
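
(For anyone following along, a test message like that can be sent from another host with netcat, for example; the host name and message below are placeholders, and exact flags vary a bit between netcat variants.)

echo "test message" | nc -u -w1 LOGSTASH_HOST 5144    # -u = UDP, -w1 = give up after 1 second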

That is now telling me that my original configuration is good, and that I need to get in contact with my network group or the owner of the systems that should be sending the syslogs and make sure nothing is in the way on their end.

Am I correct or off base? I think I am correct, just wanting to make sure.

If you manually sent a message from another server to the Logstash server and it was read and processed, then I agree: your original configuration is most likely correct, and the next step is to contact the network owner of whoever is sending you the syslogs and troubleshoot with them.

I think the issue comes down to my using a higher port, because I am NOT root on the Linux machines, only sudo. We are going to try pointing the source systems at the higher port and see if that makes the difference.

And that is what my entire problem was.

Thank you @aaron-nimocks, @michaelv, and everyone who helped me.


@MKirby How did you try to redirect the privileged port to a higher port?

You should be able to do it with something like:

sudo iptables -t nat -A PREROUTING -i <NAME_OF_INTERFACE> -p udp --dport 514 -j REDIRECT --to-port 5144
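
You can then confirm the redirect is in place with something like:

sudo iptables -t nat -L PREROUTING -n -v --line-numbers    # the REDIRECT rule for udp dpt:514 should show up here

Note that rules added this way don't survive a reboot unless you persist them (for example with iptables-save or your distro's own persistence mechanism).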