There have been a few debugging sessions around ingesting CEF-formatted logs using Logstash. While everything works just fine for us, we cannot push Logstash to ingest more than 2.5k EPS.
Logstash itself is not under heavy load; it has 6 workers and 16 GB of RAM (relevant settings below), yet while ingesting logs we are never able to reach a higher rate.
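For reference, the relevant pipeline settings look roughly like this; the batch size shown is just the Logstash default, not a value we have confirmed tuning:

```
# logstash.yml -- relevant pipeline settings
pipeline.workers: 6        # the 6 workers mentioned above
pipeline.batch.size: 125   # Logstash default, shown only for reference
```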
Has anyone else experienced this? Even when we do file ingestion, the rate caps at 2.5k EPS.
The data is sent to a single 20-node Elasticsearch cluster; Logstash is the only log source for that cluster, and the cluster is the only destination for the Logstash cluster. Elasticsearch has more than enough resources left to crunch more data, and everything is running the newest versions (Elasticsearch 6.4, etc.).
This never changed, even when the cluster was brand new. An index with 12 shards spread over 20 nodes, all on SSDs, could not handle more than 2.5k EPS when the CEF codec was specified on the input; without it, ingestion is faster.
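Roughly what the ingest pipeline looks like; the tcp input, port, host, and index name below are placeholders from memory, the `cef` codec on the input is the relevant part:

```
input {
  tcp {
    port  => 5514      # placeholder port; the source forwards CEF over TCP
    codec => cef       # logstash-codec-cef parses each CEF message
  }
}

output {
  elasticsearch {
    hosts => ["es-node-01:9200"]    # placeholder host from the 20-node cluster
    index => "cef-%{+YYYY.MM.dd}"   # placeholder index name (12 primary shards)
  }
}
```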
I have also never seen ingestion cap at such a hard point; normally during benchmarking the rate fluctuates quite a bit, yet here it stays consistently at 2.5k. There is no throttling on network connectivity or on the log source itself either.
Our log source receives, for example, 3.5k EPS and sends 3.5k EPS, while Logstash filters and outputs 2.5k, with no dropped events.
I know a support ticket is the best option; we already did this, with quite extensive testing, and it ended with me not being able to allocate enough time for more debugging. So I thought that while I try to free up time to pick up the debugging again, hopefully others have seen this as well.
@Christian_Dahlqvist I also remember that during our last debugging session we ingested a CEF file locally on the Logstash host, with only a single output going to a log file, meaning the destination was not the bottleneck, and even then we got 2.5k EPS.
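From memory, the test pipeline from that session looked roughly like this (the file paths are placeholders):

```
input {
  file {
    path           => "/tmp/sample.cef"   # placeholder path to the local CEF sample
    codec          => cef                 # same CEF parsing as in production
    start_position => "beginning"
    sincedb_path   => "/dev/null"         # forget read position between test runs
  }
}

output {
  file {
    path => "/tmp/logstash-test-out.log"  # single file output, no Elasticsearch involved
  }
}
```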