Logstash batch size query

Hi Team,

We have observed that we are missing logs. Currently we have 24 workers, a batch size of 125, and a batch delay of 50 ms. The heap size is 8 GB. We are currently receiving 300,000 logs per second. Do we have to increase the batch size to handle this? We are using Logstash version 6.7.1. Our Elasticsearch cluster has 7 nodes, of which 5 are master/data nodes and 2 are data nodes. Please let us know.
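For reference, the pipeline settings described above map to these options in `logstash.yml` (or per pipeline in `pipelines.yml`). This is just a sketch of the current configuration, not a recommendation:

```yaml
# logstash.yml — pipeline settings as described in the post above
pipeline.workers: 24        # worker threads running the filter + output stages
pipeline.batch.size: 125    # events each worker collects before flushing to outputs
pipeline.batch.delay: 50    # ms to wait for a full batch before flushing a partial one
```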

Regards,
Vivek

300000 documents per second sounds like quite a lot. Have you verified that your Elasticsearch cluster can handle this and is not the bottleneck? What is the size and complexity of your documents? What is the specification of your Elasticsearch nodes? What type of storage are you using? Local SSDs?

Hi Christian,

We are ingesting 1.7 TB per day, which includes 1 replica. We are using local disks as storage. The indexing rate averages 7,000 events per second, and each document has 50 fields.
Each Elasticsearch node has 32 GB of heap memory and a 5 TB hard disk.
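A quick back-of-envelope check on those numbers (assuming the 1.7 TB/day splits evenly between primaries and the single replica) gives the average on-disk document size:

```python
# Rough sanity check on the reported ingest figures (assumptions noted inline).
events_per_sec = 7_000                           # reported average indexing rate
seconds_per_day = 86_400
docs_per_day = events_per_sec * seconds_per_day  # docs indexed per day

daily_bytes_total = 1.7e12        # 1.7 TB/day, including 1 replica
primary_bytes = daily_bytes_total / 2  # assumption: replica doubles on-disk size

avg_doc_size = primary_bytes / docs_per_day
print(f"{docs_per_day:,} docs/day, ~{avg_doc_size:.0f} bytes per doc on disk")
```

That works out to roughly 1.4 KB per document, which is fairly modest. It is also worth noting the gap between the 300,000 logs per second reported arriving and the 7,000 events per second reported indexed; if both figures are accurate, that shortfall alone could account for the missing logs.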

Regards,
Vivek

Is that 5 TB of SSDs or spinning disks? If the latter, I would recommend this video.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.