Syslog and filebeat assistance request

This thread might help a little bit. I skimmed your thread quickly, but you might want to look at this as well for the indices issue: Logstash can not write data into elasticsearch after activating elastic security

Happy to help, @Brian_Tima. Hmm, so unfortunately you're only seeing data from the old (soon-to-be-retired) syslog server but not the new one?

Sorry if I'm missing something, @Ryan_Downey, but can you clarify the indices issue that the post you linked relates to?
I don't think Brian is running into any indices issues. I only mentioned checking indices in the last two posts to see whether data is making its way through his pipeline; it's not the initial problem.

A quick summary: a previously working setup ([(1) Syslog -> Filebeat] -> [(2) Logstash -> Kafka -> Zookeeper] -> [(2) Elasticsearch] -> [(1) Kibana]), where the syslog server runs EL5, is not working in a new setup using EL7.

I think, @Brian_Tima, we've confirmed your traffic flow and that there is nothing wrong with the [Logstash -> Kafka -> Zookeeper -> Elasticsearch] part of the pipeline (your old logs are making their way through), so we've definitely isolated the problem to the new syslog server and its Filebeat.

The problem is, you stated earlier that the new server uses the same Filebeat version and the same filebeat.yml contents... so I'm not sure what the issue could be. Did you happen to look into the different rsyslog versions? It could also be a file permissions issue on the /var/log folder, depending on the user Filebeat is running under.
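If it helps, here's a quick way to check the permissions angle (a sketch; it assumes Filebeat is currently running and reading the standard /var/log/messages path):

ls -l /var/log/messages                # file owner, group, and mode
ps -o user= -p $(pgrep -x filebeat)    # the user Filebeat is actually running as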

I haven't found an easy way to install the same rsyslog version that's on the old server onto the new server.
The version on the old server is rsyslog-5.8.10-7.0.1.el5_11
The version on the new server is rsyslog-8.24.0-41.el7_7.x86_64
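(For anyone comparing their own hosts, those version strings can be checked on either server, EL5 or EL7, with a simple RPM query:

rpm -q rsyslog
)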

I am seeing Filebeat possibly ship logs at this time: running Filebeat in debug mode shows the [publish] event below, and there are many, many [publish] events.
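(Debug mode here just means running Filebeat in the foreground with debug selectors enabled, something along the lines of:

filebeat -e -d "publish"

which prints each published event like the one below.)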

When I search Kibana for a timestamp seen from my new server, Kibana reports this error:
Courier Fetch: 1 of 23 shards failed.
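As an aside, the failing shards can usually be identified with the cat shards and cluster health APIs; a sketch, assuming Elasticsearch answers on localhost:9200 (substitute your own node):

curl -s 'localhost:9200/_cat/shards?v' | grep -iv started
curl -s 'localhost:9200/_cluster/health?pretty'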

I did delete the index and recreate it, but I'm still not seeing my new server as a beat.name in my results.

2019-08-30T10:46:23.554-0500	DEBUG	[publish]	pipeline/processor.go:309	Publish event: {
  "@timestamp": "2019-08-30T15:46:23.553Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.8.2"
  },
  "offset": 416900956,
  "log": {
    "file": {
      "path": "/var/log/messages"
    }
  },
  "message": "Aug 30 10:36:27 1xx.8x.12x.xx named[18855]: client 1xx.1xx.1x4.1x#17997 (bsp-dsksrv-h5.my.com): query 'bsp-dsksrv-h5.my.com/A/IN' denied",
  "prospector": {
    "type": "log"
  },
  "input": {
    "type": "log"
  },
  "beat": {
    "name": "slp-xg9",
    "hostname": "slp-xg9",
    "version": "6.8.2"
  },
  "host": {
    "name": "slp-xg9"
  },
  "source": "/var/log/messages"
}

My problem has been solved! Thank you @justkind @Ryan_Downey @stephenb for your input!

On the ingest-2 server (slp-3zp) I edited the file

/etc/logstash/conf.d/ingest

by adding the following:

output {
  if [beat][hostname] == "slp-xg9" {
    kafka {
      codec             => json
      topic_id          => "syslog"
      bootstrap_servers => "slp-3zp.my.com:9092,slp-b3c.my.com:9092"
    }
  }
}
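(The [beat][hostname] conditional routes only events from the new syslog box to the syslog Kafka topic, leaving any other inputs on that Logstash instance untouched.)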

On the ingest-1 server (slp-b3c) I edited the file

/etc/logstash/conf.d/winlogbeats_ingest_pipeline.yml

with the same logic for the output; a sketch of that is below.
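For completeness, that presumably ended up along these lines (a sketch only: the conditional and kafka settings are copied from the block above, nothing here is verbatim from that file):

output {
  if [beat][hostname] == "slp-xg9" {
    kafka {
      codec             => json
      topic_id          => "syslog"
      bootstrap_servers => "slp-3zp.my.com:9092,slp-b3c.my.com:9092"
    }
  }
}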

Also, on my syslog/Filebeat server (slp-xg9), I commented out the output.logstash section and uncommented my info in the output.elasticsearch section of the Filebeat config file

 /etc/filebeat/filebeat.yml
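While setup ran, the output section of filebeat.yml looked roughly like this (a sketch; the Elasticsearch host is a placeholder, not something from this thread):

#output.logstash:
#  hosts: ["slp-b3c.my.com:5044"]

output.elasticsearch:
  hosts: ["<your-elasticsearch-host>:9200"]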

Then ran

./filebeat setup

(filebeat setup talks directly to Elasticsearch and Kibana to load the index template and dashboards, which is why the output had to be switched temporarily.)

Then changed the output back to output.logstash:

output.logstash:
  hosts: ["slp-b3c.my.com:5044"]

In Kibana, I deleted the syslog-* index pattern and recreated it.

