We have been running ELK v6.2.x.
We have a central syslog server on EL5 pushing log files with Filebeat to Logstash.
I have been tasked with setting up a new central syslog server on EL7 (to replace the EL5 instance).
Filebeat 6.2.2 is the current version on the EL5 host, and I matched that Filebeat version on the EL7 host.
When I run filebeat test output, it's all OK:
parse host... OK
dns lookup... OK
addresses.... 10.10.3.19
dial up... OK
TLS... WARN
talk to server... OK
What I am not getting is the latest syslog data displaying in Kibana.
How can I verify that the syslog data is getting to Logstash? To Elasticsearch?
Any advice would be much appreciated. TYIA!
Please let me know if there are details you want to know about the environment.
Sorry, I am an elk-noob.
Brian
Thank you for your time!
Yes, I get a connection established when I telnet.
When I run filebeat test output, it shows OK for the connection.
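For reference, the checks were along these lines (the Logstash host and port here are taken from the log line below; adjust to your own endpoint):

telnet slp-b3c.its.bethel.edu 5044
filebeat test output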
There is a connection established in the logs, but when it tries to publish events the connection is reset by peer:

2019-08-26T16:37:10.167-0500 INFO pipeline/output.go:105 Connection to backoff(async(tcp://slp-b3c.its.bethel.edu:5044)) established

Today I see a slightly different failure message:

2019-08-27T09:53:25.316-0500 ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 10.1.38.21:47796->10.1.38.19:3514: i/o timeout
Can you share your logstash beats input configuration? Anything in the logs for logstash?
The errors indicate an issue between Filebeat and Logstash, so it could be a problem with the connectivity from Filebeat to Logstash (your logs show a successful connection), with the output plugin in Filebeat (it looks fine to me, and the logs show a connection established), or with the input plugin in Logstash.
The successful telnet connection shows that Logstash is up and listening on that port, but I'm not sure if you have more advanced configuration on the input plugin.
Thanks for the config info, Brian; I understand your setup better now.
Did you say this configuration was working previously? I'm not an Elastic team member, just a dude who set up ELK at my workplace, so apologies if I'm wrong, but as far as I understand it, you want your Filebeat output to match up with your Logstash input plugin.
If you follow the link above, set up a beats input on your Logstash (sketched below), and keep your Filebeat configuration the same, you should definitely be able to see events coming into Logstash.
The inputs you are currently using are the UDP and TCP input plugins, which would not pick up your Filebeat log events at all.
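A minimal beats input looks something like this (port 5044 matches what your Filebeat log shows; treat it as a sketch rather than your exact config):

input {
  beats {
    port => 5044
  }
}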
Could they be part of an old setup? If you configure syslog on your source log boxes to send to your Logstash server on that specific port, 3514, you should then see your syslog logs, but I have no experience setting up syslog so I can't confirm this.
Thanks for your help, justkind! We do have an existing syslog server sending with Filebeat. I noticed the ports did not match up, so I will dig deeper on that; it certainly stood out as not correct.
Happy to help, Brian; hopefully we can figure out what's going on.
We just want to line up Logstash's input with whatever is being sent out from your log servers, whether that is Filebeat log events (the beats input) or generic events (your TCP and UDP inputs).
The Logstash logs you posted seem to show that it is properly setting up your TCP and UDP listeners on port 3514 and also has a beats input configured on port 5044.
If you can, double-check that your syslog is sending to the correct Logstash host on the correct port.
For the Filebeat log events, it seems there is an existing beats input configured somewhere, so maybe try to track that down and see if there are any errors in the configuration; the matching Filebeat side is sketched below.
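For comparison, the Filebeat output that would pair with a beats input on 5044 looks roughly like this in filebeat.yml (the hostname is taken from your earlier log line; this is a sketch, and your real config may differ):

output.logstash:
  hosts: ["slp-b3c.its.bethel.edu:5044"]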
My Filebeat version had changed from 6.2.2 to 6.8.2.
After putting the version back, I am no longer seeing any ERROR entries in the Filebeat log.
Still not seeing anything show up in Kibana...
I've restarted Logstash.
I've refreshed the index on Elasticsearch with:
curl -XPOST 'http://localhost:9200/syslog-*/_refresh'
The new syslog server is using the same filebeat.yml contents as my old syslog server.
Syslog data lives on its own drive mounted under /var/logs.
I use rsync on that partition to send the data to the new syslog server.
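Roughly along these lines (the destination hostname and flags here are illustrative, not my exact command):

rsync -av /var/logs/ el7-syslog:/var/logs/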
Syslog is a different version... maybe that is my problem?
Hmm, interesting that there were errors with the newer Filebeat version; according to the compatibility matrix, all Filebeat and Logstash versions between 5.6.x and 6.8.x should be fine together.
With no errors it's going to be hard to find the issue, so we need to trace the flow of traffic.
Could you clarify the full flow? From the configuration you've shared, it appears to be:
Filebeat -> Logstash -> Kafka -> ? -> Elasticsearch -> Kibana
Do you happen to have another Logstash between Kafka and Elasticsearch?
Can you check anything on Kafka? Are indices populating in Elasticsearch? You can check on this by curl'ing your Elasticsearch with /_cat/indices?v.
We're just trying to figure out where the traffic is stopping. If you have monitoring enabled, you should be able to see whether Logstash is processing events.
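For example, something like this, assuming Elasticsearch is reachable on localhost:9200 (substitute your own host):

curl -XGET 'http://localhost:9200/_cat/indices?v'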
I might switch back to the newer 6.8.2 because it did provide more detail with its errors...
[(1) Syslog -> Filebeat] -> [(2) Logstash -> Kafka -> Zookeeper] -> [(2) Elasticsearch] -> [(1) Kibana] (according to our documentation)
1 server for syslog & Filebeat
2 servers for Logstash, Kafka, and ZooKeeper (behind HAProxy)
2 servers for Elasticsearch (behind HAProxy)
1 server for Kibana
Please be patient with me as I am still a noob when it comes to Elastic and friends.
I can check logs in Kafka; I've never looked at Kafka before.
When I curl Elasticsearch I do get results (after removing the ?v):

curl -XGET 'http://slp-pkf.my.com:9200/_cat/indices?' | grep syslog-2019.08.28
Thanks for the information, Brian, and no worries at all about experience level with these tools. There's always someone more experienced and always someone less experienced than you on any topic. I wouldn't call myself an expert either, so it would be great if someone on the @elastic team could chime in if they are reading this.
The ?v URL parameter just adds column names, so it's not important; the key point is that you are getting results when you hit the indices endpoint and grep for the syslog index with today's date.
That should mean you are indeed getting syslog data through your whole pipeline, all the way to Elasticsearch.
If you are not seeing it in Kibana, can you make sure you have the correct index pattern? It has to match the index names you are seeing from that /_cat/indices endpoint.
If you already have an existing index pattern and it is not showing any data, maybe you need to refresh the field list.
You can do this by going to Management -> Index Patterns -> click on your syslog index pattern -> click the refresh button (it looks like two circling arrows) at the top right, and check again.
As a last resort, maybe delete the index pattern and create it again.
You can also confirm your indices contain data by curl'ing Elasticsearch in a similar way to getting the indices. Assuming your syslog index is named exactly what you put in your grep statement:
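Something like this should return a document count (reusing the host from your earlier command; adjust as needed):

curl -XGET 'http://slp-pkf.my.com:9200/syslog-2019.08.28/_count?pretty'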