We are getting huge volumes of logs when the application team runs performance testing. At that time, our cluster becomes unstable because it cannot handle such a heavy load. Could you please help us control the log flow in the Logstash config, specifically whether it is possible to control the flow in the filter section of the config?
Logstash version - 7.15
Logs Flow Path: FileBeat/MetricBeat -> AWS Kafka -> Logstash -> ES -> Kibana, where we visualize the logs in Kibana.
During performance testing, we receive many log files at once and are seeing log lag in our environment; it takes a long time to return to the normal flow. So our concern is that we need to control the log flow through Logstash while the application team is running performance tests.
We would like to know if we can control the log flow in the filter part of the Logstash config file. Could you please guide us on how to proceed?
You could use the drop filter to drop events from those performance tests, but you would need something to filter on so that you don't drop the normal events.
There is also the throttle filter that you can use.
But you need some way to identify the events created by the performance tests, otherwise those filters will apply to all events.
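As a minimal sketch, assuming the performance-test events can be identified by some field (the `[app]` field and the `"perf-test"` value here are hypothetical; substitute whatever marks your test traffic), the drop filter can discard all or a percentage of them:

```
filter {
  # Hypothetical condition: adapt to however your perf-test events are marked
  if [app] == "perf-test" {
    drop {
      # Drop 90% of matching events at random, keeping ~10% as a sample
      percentage => 90
    }
  }
}
```

Omitting the `percentage` option drops every matching event.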
Suppose the application team pushes 1k logs while doing performance testing. Kibana then tries to receive all 1k logs, but it does not receive them all because the cluster cannot handle such a heavy load at that time.
So in that case we would like to receive the logs in batches, e.g. some count of logs every 10 minutes, then another count in the next 10 minutes; that is how we need to control the logs during performance testing.
The log counts above are just an example.
Could you please tell us whether it is possible to control the log flow through a Logstash filter?
It is possible; check my previous answer, it has a link to the list of Logstash filters.
In your case you can use the throttle filter; the documentation has some examples.
But as I said, you need to be able to use an if conditional to separate the performance-test logs from the real logs, or else the filter will apply to every log.
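Here is a sketch along the lines of the throttle filter documentation, again assuming a hypothetical `[app]` field marks the test traffic. Note that the throttle filter only tags events; you still have to drop the tagged ones to actually limit the flow:

```
filter {
  if [app] == "perf-test" {
    throttle {
      after_count => 1000      # tag everything beyond 1000 events...
      period => 600            # ...per 10-minute window
      max_age => 1200          # keep counters alive for two periods
      key => "%{host}"         # throttle separately per host
      add_tag => "throttled"
    }
  }
  # Throttle only marks events; dropping the tagged ones enforces the limit
  if "throttled" in [tags] {
    drop { }
  }
}
```

Adjust `after_count` and `period` to the rate your cluster can absorb.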
We tried to control the log flow with the given filter, but now we are not receiving the logs at all. Please tell us where we need to implement the changes in the filter part.
But we don't know exactly where to put the throttle filter in our Logstash filter section. Could you please confirm whether the filtering method below is fine, or whether we need to change some lines?
We applied the throttle filter in the Logstash config. The one thing we could find out is that if the application team sends only two events, we can easily see with the filter config below that they sent only two events. But I don't think this can control the flow of a huge volume of logs.