My ELK server failed 2 days ago with a 'Too many open files' error in Elasticsearch. I managed to resolve this by moving out old indexes.
Now I can access Kibana, but I'm not seeing any new indexes being created, nor any data being received.
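For anyone hitting the same 'Too many open files' error, here is a rough sketch of how I check the descriptor limits (the PID lookup assumes Elasticsearch is running under its usual Java main class; adjust for your setup):

```shell
# Current open-file limit for this shell (the Elasticsearch process may
# have its own, set by its service unit or init script).
ulimit -n

# Count file descriptors currently held by the Elasticsearch process.
# The pgrep pattern is an assumption about how ES appears in the process list.
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n1)
ls /proc/"$ES_PID"/fd 2>/dev/null | wc -l

# A common fix is raising the nofile limit for the elasticsearch user in
# /etc/security/limits.conf (or the systemd unit's LimitNOFILE), e.g.:
#   elasticsearch - nofile 65536
```

Moving out old indexes helps because each shard holds many files open, so fewer indexes means fewer descriptors.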
I have received the pipeline blocked error in my Logstash log files, so following that information I increased my congestion_threshold to 400 (a large number, to stop the circuit breaker from tripping).
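For reference, this is roughly where the setting lives in my config. In the older Logstash versions that have this circuit breaker, congestion_threshold is an option on the beats (or lumberjack) input; the port and input type here are assumptions about a typical Filebeat setup:

```
input {
  beats {
    port => 5044
    # Seconds to wait on a blocked pipeline before the circuit breaker
    # trips. Raised from the default to 400 so Logstash keeps accepting
    # events while Elasticsearch catches up.
    congestion_threshold => 400
  }
}
```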
I'm not seeing anything being indexed, and I receive `:message=>"retrying failed action with response code: 503", :level=>:warn` in the logs.
I plan to do this after I can get the server working again. I have had this working with more shards, so I should be able to get it back to a workable state first.
Ok, so I have got a little further... I shut down ELK, moved out some indexes from around the time the system crashed, and restarted. I got a burst of data into the system. Then my OSSEC servers received a connection refused error.
I have worked out I can get the data through with a series of service restarts that flush the data through the pipe: the Logstash services followed by Filebeat.
This is not ideal, but hopefully once the data catches up, Logstash will be able to handle it in real time.
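The restart sequence I'm using looks roughly like this. The service names assume systemd units called "logstash" and "filebeat", and the sleep is just a guess at how long Logstash needs to come up:

```shell
# Restart Logstash first so it reopens its pipeline to Elasticsearch...
sudo systemctl restart logstash

# ...give it time to finish initialising before Filebeat reconnects...
sleep 30

# ...then restart Filebeat so it re-sends from its registry.
sudo systemctl restart filebeat
```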