Indexing at 2 million events/s?

I restarted one Elasticsearch node. After the restart, the data nodes started transferring shards among themselves at around 2 million events/s, while my current daily index is indexing at only 12k events/s and the buffer on the Logstash machine is filling up.


Can my Elasticsearch cluster support more than 12k events/s?

That depends on a lot of factors, e.g. document size and complexity, mappings, index settings, bulk size, and the hardware used. Without context, an indexing rate is hard to judge.
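As an illustration of two of those factors, here is a minimal sketch for measuring bulk throughput, assuming a local cluster at http://localhost:9200 and a hypothetical, already-existing index named my-logs. Relaxing refresh_interval and dropping replicas during a heavy load (and restoring them afterwards) are standard suggestions from the indexing-speed guide linked below.

```python
# Minimal sketch, assuming a local cluster at http://localhost:9200 and a
# hypothetical index named "my-logs" that already exists.
import json
import time

import requests

ES = "http://localhost:9200"
INDEX = "my-logs"   # hypothetical index name
BULK_SIZE = 5000    # tune this and measure; the optimum varies with document size

def bulk_index(docs):
    """Send one _bulk request: newline-delimited action line + document line."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": INDEX}}))
        lines.append(json.dumps(doc))
    body = "\n".join(lines) + "\n"
    resp = requests.post(f"{ES}/_bulk", data=body,
                         headers={"Content-Type": "application/x-ndjson"})
    resp.raise_for_status()

# Relax refresh and replicas while loading; restore them afterwards.
requests.put(f"{ES}/{INDEX}/_settings",
             json={"index": {"refresh_interval": "30s", "number_of_replicas": 0}})

docs = [{"message": f"event {i}", "level": "info"} for i in range(BULK_SIZE)]
start = time.time()
bulk_index(docs)
print(f"{BULK_SIZE / (time.time() - start):.0f} events/s for this batch")

requests.put(f"{ES}/{INDEX}/_settings",
             json={"index": {"refresh_interval": "1s", "number_of_replicas": 1}})
```

Running this with different BULK_SIZE values (e.g. 1,000 vs 10,000) gives a feel for where the sweet spot lies on your hardware.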

Does reducing the number of indexed fields reduce the index size and speed up ingestion?
How does performance change with two Logstash instances sending to a coordinating node versus a single Logstash machine?
Can you give a general idea about these factors? When I index all fields, my index size ends up 4 times the size of the data I sent. I need some general tips.

Have a look at the following:

https://www.elastic.co/guide/en/elasticsearch/reference/7.3/tune-for-disk-usage.html

https://www.elastic.co/guide/en/elasticsearch/reference/7.3/tune-for-indexing-speed.html
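On the disk-usage side, one of the biggest wins is not indexing fields you never search or aggregate on. A minimal sketch (the index and field names are hypothetical): the payload field stays available in _source for display, but is neither searchable nor aggregatable, and "dynamic": false stops new fields from being auto-mapped and indexed.

```python
# Minimal sketch, assuming a local cluster at http://localhost:9200; the
# index name "my-logs-slim" and the field names are hypothetical.
import requests

ES = "http://localhost:9200"

mapping = {
    "mappings": {
        "dynamic": False,  # unmapped fields go to _source only, not the index
        "properties": {
            "@timestamp": {"type": "date"},
            "level": {"type": "keyword"},
            # Kept in _source, but neither searchable nor aggregatable, so it
            # costs no inverted-index or doc_values space:
            "payload": {"type": "keyword", "index": False, "doc_values": False},
        },
    }
}

resp = requests.put(f"{ES}/my-logs-slim", json=mapping)
resp.raise_for_status()
print(resp.json())
```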
