I have multiple issues with Elasticsearch right now:
- I'm running out of disk space
- I'm running out of memory
- The architecture may need to change, since I have more than 600 shards per node. In any case, I will definitely add a new node to the cluster.
I'm not sure which issue should be the priority, and I also suspect that some of these three issues may have caused the others.
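If it is relevant, I have not changed the disk-based shard allocation watermarks, so as far as I understand the defaults apply (85% low / 90% high / 95% flood_stage), which would already put one of my nodes above the low watermark. To double-check the effective values I was planning to run the following (the filter_path is just my attempt to narrow the output down to the cluster.routing.allocation.disk.* settings):

GET _cluster/settings?include_defaults=true&filter_path=**.disk*

Here is what I have so far: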
GET /
{
  "name" : "a",
  "cluster_name" : "cluster_name",
  "cluster_uuid" : "uuid",
  "version" : {
    "number" : "7.1.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "606a173",
    "build_date" : "2019-05-16T00:43:15.323135Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
GET /_cat/allocation?v
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
605 16.4gb 257.9gb 37.1gb 295.1gb 87 ip-address-b ip-address-b name-node-b
642 24.8gb 242.8gb 52.3gb 295.1gb 82 ip-address-a ip-address-a name-node-a
39 UNASSIGNED
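To figure out why those 39 shards are unassigned, I was also going to run the allocation explain API (as far as I know, calling it without a body picks an arbitrary unassigned shard to explain):

GET _cluster/allocation/explain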
GET /_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
ip-address-a 63 94 9 0.62 0.90 0.95 mdi * name-node-a
ip-address-b 48 98 2 0.63 0.35 0.29 mdi - name-node-b
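On the memory point: my understanding is that ram.percent includes the OS filesystem cache, so the JVM heap is probably the more meaningful number here. If needed I can pull the absolute heap and RAM figures with extra _cat/nodes columns (assuming I have the column names right):

GET _cat/nodes?v&h=name,heap.percent,heap.current,heap.max,ram.percent,ram.current,ram.max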
GET /_cat/health
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1607443543 16:05:43 name yellow 2 2 1247 643 0 0 39 0 - 97.0%
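Since the shard count itself may be part of the root cause, I was also considering temporarily reducing replicas on less critical indices as a stop-gap until the new node is in place (index name below is just a placeholder, and I'm aware this means giving up redundancy for those indices):

PUT my-less-critical-index/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}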
GET /_cat/indices?v