I'm hoping someone can assist me. It appears there's an issue with either the latest Logstash or, less likely, Elasticsearch being unable to handle the traffic (which I don't believe is the case). Logs are being fed into Elasticsearch, but after a while indexing just stops. I believe this started after Logstash was upgraded on my ELK server; I'm not 100% sure, it's just a theory. We had been running Logstash 2.2.3 and everything worked great for at least a month or two before things stopped working.
Logstash: 2.3.1
Elasticsearch: 2.3.4
Redis: 3.2.1
ELK Server Specs:
CPU: 40 processors @ 10 cores each
RAM: 125G
DISK: 1.1T (61% Used)
So I don't think my server is the issue.
Here's how it's sending traffic:
It's a pretty standard layout. I have my two sensors sending the BRO logs using Logstash to Redis on the ELK server. Logstash on the ELK server grabs those logs and sends them to Elasticsearch.
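For context, the ELK-server side of that pipeline would look roughly like the sketch below. This is a hypothetical reconstruction, not my actual config: the key names, hosts, and index pattern are assumptions.

```
input {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "sensor1-bro"   # one redis input per sensor key (names assumed)
  }
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "sensor2-bro"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "bro-%{+YYYY.MM.dd}"
  }
}
```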
Redis shows that it's queuing up traffic under both of the keys the sensors feed it, ready for Logstash to ship to Elasticsearch. If I restart Logstash on the ELK server it will sometimes start draining the queue from Redis, but after a while it stops and gives the following message:
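While Logstash is stalled, the queue depth for each key can be watched directly (the key names below are placeholders; substitute whatever the sensors actually write to):

```
redis-cli LLEN sensor1-bro
redis-cli LLEN sensor2-bro
# If these counts keep growing while Logstash is "stopped",
# the backlog is accumulating and the bottleneck is downstream of Redis.
```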
It'll then sit there displaying those messages for a while. If I leave it running, it'll process more data on and off throughout the day before stopping and showing the same message again.
I am not sure if Java is the issue or perhaps the change in Logstash/Elasticsearch caused this. Anyone have any suggestions?
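On the Java angle: one thing I could rule out is heap sizing for both daemons. In the 2.x packages this is set via environment variables; the file paths below assume the standard DEB/RPM layout and the values shown are just illustrative, not recommendations for this box.

```
# /etc/default/logstash (DEB) or /etc/sysconfig/logstash (RPM)
LS_HEAP_SIZE="4g"

# /etc/default/elasticsearch or /etc/sysconfig/elasticsearch
# Common guidance: up to half of RAM, and below ~32g so compressed oops stay enabled.
ES_HEAP_SIZE="30g"
```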