Performance issue in ELK stack


I have set up an ELK stack with three Logstash shippers (one each for Linux, Windows, and network device data), a Kafka cluster (three topics: one each for Linux, Windows, and network devices), three Logstash indexers (one per topic), an Elasticsearch cluster, and a Kibana server.

I started by sending data to each Logstash shipper from three different local machines at a very low event rate, and everything worked. Then I tested with a much larger volume of data. Linux and network device data kept flowing (Windows data was never arriving), until I restarted the Logstash indexers after applying the parser.

After the restart I found a large and growing delay in receiving data for both Linux and network devices, and I was still not receiving Windows data.

Moreover, I have tried the same volume of data against each shipper separately, and it worked fine.

Is it that Kafka is unable to deliver data to all three Logstash indexers simultaneously?

Does Logstash introduce such delays on every restart after applying a grok parser?

I am not able to pinpoint exactly where the performance issue is. Data is arriving as far as the Logstash shippers.

Could anyone help me with this?

Look for the bottlenecks by simplifying your system. How well does each component perform in isolation? For example, how many events per second can a Logstash instance with your filters process if it only has file inputs and file outputs? And so on.
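For instance, to benchmark just the filter stage, you could run a standalone pipeline like the sketch below. The file paths and the grok pattern are placeholders; substitute your actual parser configuration:

```conf
input {
  file {
    path => "/tmp/benchmark-input.log"   # pre-generated sample of your real events
    start_position => "beginning"
    sincedb_path => "/dev/null"          # reread the file from scratch on every run
  }
}
filter {
  grok {
    # replace with the pattern from your indexer config
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  file {
    path => "/tmp/benchmark-output.log"
  }
}
```

Feed it a fixed-size input file and time how long the output takes to reach the same number of lines; that gives you the filter throughput with Kafka and Elasticsearch taken out of the picture.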

Also, please avoid non-quantifiable terms like "huge" and "low". I have no idea what you consider to be a huge amount of events, so I have no idea whether your performance problems are reasonable.
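To put an actual number on it, you can count how many lines a file output writes over a fixed interval. A minimal sketch (the file path and the event counts here are invented purely for illustration; point it at your real output file and replace the `seq` lines with a `sleep`):

```shell
#!/bin/sh
# Measure how many events land in a file over an interval,
# then derive an events-per-second figure.
f=$(mktemp)                 # stand-in for your Logstash file output
seq 1 100 > "$f"            # pretend 100 events were already written
before=$(wc -l < "$f")
seq 101 300 >> "$f"         # pretend 200 more events arrive during the interval
after=$(wc -l < "$f")
echo "events in interval: $((after - before))"
```

Against a live pipeline you would replace the second `seq` with `sleep 10` and divide the difference by 10 to get events per second.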