Stack Details:
Elasticsearch/Logstash/Kibana/X-Pack version: 6.2.3
Cluster Details:
3 master nodes, 4 hot data nodes, 1 warm data node
Master Node Conf:
cluster.name: ELK
node.name: ELK-ES7
path.data: /elasticsearch/data
path.logs: /elasticsearch/logs
bootstrap.memory_lock: true
network.host: 1.4.5.6
node.data: false
node.master: true
discovery.zen.ping.unicast.hosts: ["1.2.3.4", "4.3.2.1", "5.6.7.8"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: false
node.ingest: false
action.destructive_requires_name: true
thread_pool:
  index:
    size: 4
    queue_size: 2000
thread_pool.bulk.queue_size: 5000
bootstrap.system_call_filter: false
Data Node Conf:
cluster.name: ELK
node.name: ELK-ES1-new
path.data: /elasticsearch/data
path.logs: /elasticsearch/logs
bootstrap.memory_lock: true
network.host: 1.9.7.8
node.data: true
node.master: false
discovery.zen.ping.unicast.hosts: ["1.2.3.4", "4.3.2.1", "5.6.7.8"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: false
action.destructive_requires_name: true
thread_pool:
  index:
    size: 12
    queue_size: 2000
thread_pool.bulk.queue_size: 7000
bootstrap.system_call_filter: false
node.attr.box_type: hot
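For context on the hot/warm layout: the node.attr.box_type attribute above only affects placement when indices carry a matching allocation filter. As a minimal sketch (the index name here is a placeholder, not one of our real indices), new indices are pinned to hot nodes and later moved to warm like this:

```
PUT logs-2018.04.01/_settings
{
  "index.routing.allocation.require.box_type": "hot"
}

PUT logs-2018.04.01/_settings
{
  "index.routing.allocation.require.box_type": "warm"
}
```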
System Details:
32 cores, 64 GB RAM, 3.2 TB SSD: hot data nodes
32 cores, 64 GB RAM, 50 TB spinning disk: warm data node
Issue Details:
- Index sizes are growing in bursts: roughly every other day an index grows to 2x/3x its previous size, which trips the disk watermark. We have verified that the volume of incoming logs has not increased, yet index sizes keep growing day by day.
- For the same index, primary shards and their replicas differ noticeably in size. This also pushes nodes over the disk watermark and eventually puts indices into read-only mode.
- We have also observed highly unequal shard sizes within a single index.
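To show where we are looking, these are the standard 6.x cat APIs we can use to compare per-index, per-shard, and per-segment store sizes (the `logs-*` pattern is a placeholder for our index names):

```
GET _cat/indices?v&h=index,pri,rep,docs.count,docs.deleted,store.size,pri.store.size&s=store.size:desc

GET _cat/shards/logs-*?v&h=index,shard,prirep,node,docs,store&s=store:desc

GET _cat/segments/logs-*?v&h=index,shard,prirep,segment,docs.count,docs.deleted,size
```

A copy with a much larger docs.deleted count or many small segments would suggest the primary/replica size gap comes from differing merge histories rather than from extra documents, but we have not confirmed this yet.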