Maximize read/write throughput

We are performing a load test on our Elasticsearch cluster and trying to maximize read/write throughput.

Here are the details of our elasticsearch cluster:

  • Master nodes: 3
  • Data nodes: 6
  • Indexes: 1
  • Primary shards: 2
  • Number of replicas: 2
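For context, the shard math behind this layout works out as follows (a quick sketch; the one-copy-per-node claim assumes Elasticsearch's default allocation, which never places two copies of the same shard on one node):

```java
public class ShardLayout {
    public static void main(String[] args) {
        int primaries = 2;
        int replicasPerPrimary = 2;
        int dataNodes = 6;

        // Each primary carries `replicasPerPrimary` replica copies, so the
        // index materializes as primaries * (1 + replicas) shard copies.
        int totalShardCopies = primaries * (1 + replicasPerPrimary); // 2 * 3 = 6

        // With 6 data nodes, allocation can place exactly one shard copy per
        // node, so every data node can serve reads for this index.
        double copiesPerNode = (double) totalShardCopies / dataNodes;

        System.out.println("total shard copies: " + totalShardCopies); // 6
        System.out.println("copies per data node: " + copiesPerNode);  // 1.0
    }
}
```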

Data Node Hardware Configuration:

  • CPU: 16 cores (1:1 vCPU commit)
  • RAM: 112 GB
  • Network: 1 GBps
  • Disk size: 1 TB (SSD)
  • Disk throughput: 200 MB/s (Azure P30 Premium SSD)

Master Node Hardware Configuration:

  • CPU: 8 cores (1:1 vCPU commit)
  • RAM: 28 GB
  • Network: 1 GBps

Client Topology:

  • Client nodes: 12
  • Write threads per node: 20
  • Read threads per node: 40

Client Node Hardware Configuration:

  • CPU: 16 cores (1:1 vCPU commit)
  • RAM: 112 GB
  • Network: 1 GBps
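Summing the client fleet and comparing it against each data node's ceilings gives a rough sense of where the bottleneck sits (a sketch; it assumes the listed "1 GBps" network figure means 1 gigabit/s, roughly 125 MB/s, and ignores protocol overhead and replication traffic — if the NICs are actually 1 gigabyte/s, the 200 MB/s disk becomes the tighter bound instead):

```java
public class ThroughputCeilings {
    public static void main(String[] args) {
        int clientNodes = 12;
        int writeThreadsPerNode = 20;
        int readThreadsPerNode = 40;

        // Aggregate concurrency hitting the cluster from the client fleet.
        int totalWriteThreads = clientNodes * writeThreadsPerNode; // 240
        int totalReadThreads = clientNodes * readThreadsPerNode;   // 480

        // Per-data-node ceilings, assuming a 1 gigabit/s NIC.
        double networkMBps = 1000.0 / 8.0; // ~125 MB/s
        double diskMBps = 200.0;           // P30 rated throughput

        // Under the gigabit assumption the NIC saturates before the disk,
        // so the network is the tighter per-node bound for bulk traffic.
        double perNodeCeilingMBps = Math.min(networkMBps, diskMBps); // 125.0

        System.out.println("total write threads: " + totalWriteThreads);
        System.out.println("total read threads: " + totalReadThreads);
        System.out.println("per-node ceiling (MB/s): " + perNodeCeilingMBps);
    }
}
```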

Bulk Processor Settings:

  • Average document size: 1 KB
  • Concurrent requests: 0 (bulks execute on the calling thread)
  • Bulk actions: -1
  • Bulk size: 15 MB
  • Back-off policy: constant back-off with a 2-second delay and 3 maximum retries
  • Flush interval: 5 seconds
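For reference, these settings map one-to-one onto the builder of the Java high-level REST client's `BulkProcessor`; a minimal sketch, assuming that client (the `client` and `listener` variables are placeholders you would supply). Note that at ~1 KB per document, a 15 MB bulk carries roughly 15,000 documents.

```java
// Sketch only: assumes the Elasticsearch Java high-level REST client
// (org.elasticsearch.client), an already-constructed RestHighLevelClient
// `client`, and a BulkProcessor.Listener `listener`.
BulkProcessor bulkProcessor = BulkProcessor.builder(
        (request, bulkListener) ->
            client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener),
        listener)
    .setConcurrentRequests(0)                            // bulks run synchronously
    .setBulkActions(-1)                                  // no action-count flush trigger
    .setBulkSize(new ByteSizeValue(15, ByteSizeUnit.MB)) // flush at 15 MB
    .setFlushInterval(TimeValue.timeValueSeconds(5))     // flush every 5 s regardless
    .setBackoffPolicy(BackoffPolicy.constantBackoff(     // constant 2 s back-off,
        TimeValue.timeValueSeconds(2), 3))               // at most 3 retries
    .build();
```

With concurrent requests set to 0, each bulk executes on the thread that fills it, so write parallelism comes entirely from the 20 write threads per client node.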

Is there a question related to this? Are you facing any issues?