I would like to know about some metrics for Logstash and Kibana.
How much throughput can Logstash handle? That is, how much data can it receive from Redis or any other shipper? How much data can it send to Elasticsearch per second? How much data can it store?
What are the queue size and buffer size of Logstash?
What is the throughput of Elasticsearch?
How much data can it ingest per second?
How much data can it send to Kibana per second?
How does the physical hardware affect the throughput of Elasticsearch and Logstash?
I have failed to find these metrics in the Elasticsearch documentation. Can anybody help me find the answers?
Do you want to measure these metrics on your own system? Because it almost sounds like you're looking for numbers for someone else's system, but that's not a very useful question to ask since all such numbers depend on a multitude of factors.
What are the queue size and buffer size of Logstash?
Zero, basically. There is a small 20-item buffer, but that's all.
So could you please tell me how much throughput my Logstash and Elasticsearch can accommodate, respectively?
No, I can't. Because it depends on the size of the events, how they're analyzed, your I/O performance, the kind of CPU, the JVM, the operating system, the total amount of data you need to store, the number of replicas, the required query latency, the query frequency, the kind of queries, ...
Start small, measure your actual performance, ramp up the amount of data, and be prepared to scale up and/or out.
The metrics filter can help out. However, since Logstash has no buffer to speak of, you can also simply measure the input rate (since it's going to be the same as the output rate). For example, feed Logstash data via the stdin input and measure how long the Logstash process runs.
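As a rough illustration, here's a minimal pipeline sketch along those lines: it reads events from stdin and uses the metrics filter to print a rolling one-minute rate. The file name throughput.conf and the output format are just placeholders for this example.

```
# throughput.conf -- minimal sketch for estimating Logstash event throughput.
input {
  stdin { }
}

filter {
  metrics {
    meter   => "events"   # count every event under the "events" meter
    add_tag => "metric"   # tag the generated metric events so we can route them
  }
}

output {
  # Only print the periodic metric events, not the original data.
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "1m rate: %{[events][rate_1m]} events/s (total: %{[events][count]})"
      }
    }
  }
}
```

You could then run something like `time bin/logstash -f throughput.conf < sample.log` and either watch the printed one-minute rate or divide the number of input lines by the elapsed time (sample.log is just a stand-in for whatever test data you feed in).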
But again, the maximum input rate isn't the only interesting metric.