Finding bottlenecks when using Elasticsearch to write large amounts of data

We want to use Elasticsearch for a system with a large amount of data.
Currently we index 60k docs/s on our single-node Elasticsearch.
I want to understand the maximum indexing performance we can reach on a single node, and which parameters I should check to find resource bottlenecks. After we reach the single-node maximum, we will decide how to scale out the cluster.

So how can I investigate these parameters to increase performance and find bottlenecks?

  • Elasticsearch version: 5.6
  • Java API with bulk loading and 10 threads
  • 1 index in Elasticsearch with 5 shards and 0 replicas
  • swapping is disabled
  • custom mapping (_all disabled, some fields have index: false)
  • heap is 32 GB
  • total memory: 64 GB
  • 32 CPU cores

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.