I have a data sender, a Logstash instance, and two Elasticsearch nodes (one ingest node and one data node).
When I use the data sender to send a large volume of data to Logstash, Logstash hangs without any error log, and the data sender crashes. Originally I thought the cause was having too many filters in the Logstash config, so I removed some filter plugins and used the Elasticsearch ingest node instead of Logstash to do the filtering. But that did not resolve the problem.
If I replace the elasticsearch output plugin with the file output plugin, the data is not blocked.
So I think the bottleneck is the elasticsearch output plugin.
Do you have any suggestions for improving the performance of Logstash's elasticsearch output plugin?
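Not from the original thread, but for reference: the knobs that usually govern elasticsearch output throughput are the pipeline settings in `logstash.yml`, since they control the size and parallelism of the bulk requests Logstash sends. The setting names below are real Logstash options; the values are purely illustrative and would need tuning against your cluster:

```
# logstash.yml -- pipeline tuning (values are illustrative, not recommendations)
pipeline.workers: 8        # defaults to the number of CPU cores
pipeline.batch.size: 1000  # events per worker per bulk request (default 125)
pipeline.batch.delay: 50   # ms to wait for a batch to fill before flushing
```

A minimal output section to go with it (host name is a placeholder):

```
output {
  elasticsearch {
    hosts => ["http://es-ingest:9200"]   # hypothetical ingest-node address
    index => "myindex-%{+YYYY.MM.dd}"
  }
}
```

Larger batches reduce per-request overhead, but only help if Elasticsearch can absorb the bigger bulks; otherwise they just move the backpressure.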
I doubt it is an issue with the elasticsearch output plugin; it is much more likely that your Elasticsearch cluster is not able to handle what you are throwing at it. Logstash can only output as much data as Elasticsearch is able to accept. What is the specification of your Elasticsearch cluster? What do CPU usage, disk I/O, and iowait look like when you are having problems?
The Elasticsearch ingest node has a 4 GB JVM heap and the data node has a 16 GB JVM heap. They run on separate servers, each with 96 GB of RAM and 32 CPU cores. The refresh_interval is 180 seconds for each index.
We want to send 700K events per minute over a TCP connection.
CPU usage reaches 70% to 80%.
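Not part of the original thread, but a quick way to check whether Elasticsearch itself is refusing the load is to watch the write thread pool for rejections while indexing (host and port below are assumptions; on Elasticsearch versions before 6.x the pool is named `bulk` instead of `write`):

```
# Non-zero "rejected" counts mean Elasticsearch cannot keep up with the
# incoming bulk requests -- Logstash will then appear to hang under backpressure.
curl -s 'http://localhost:9200/_cat/thread_pool/write?v&h=node_name,active,queue,rejected'

# Fuller per-node thread-pool statistics
curl -s 'http://localhost:9200/_nodes/stats/thread_pool?pretty'
```

If rejections climb during the 700K-events/minute burst, the bottleneck is on the Elasticsearch side rather than in the output plugin.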
Elasticsearch is often limited by I/O performance. What type of storage do you have (local SSDs, spinning disks, SAN)? What does iostat show while you are indexing? Have you followed these guidelines?
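For readers following along, the iostat check mentioned above looks like this (flags are standard sysstat options; the interpretation notes are general guidance, not taken from this thread):

```
# -x: extended per-device stats; sample every 5 seconds, 3 times
# (the first sample is an average since boot and can be ignored)
iostat -x 5 3

# While indexing, watch for:
#   %util near 100%  -> the disk is saturated
#   high await       -> requests are queuing at the device
#   high %iowait in the CPU summary line -> CPUs stalled on I/O
```

Sustained saturation on the data node's disks during the burst would point at storage, not the elasticsearch output plugin.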