Hi
Why do I see different values from the Dev Tools console than what docker stats reports on the deployed cluster?
GET /_cat/nodes?v=true&h=name,heap*
name heap.current heap.percent heap.max
es_data_hdd_5_3 3.2gb 81 4gb
es_data_hdd_7_3 3.1gb 78 4gb
es_data_ssd_1_1 6.5gb 82 8gb
es_data_hdd_1_3 3.1gb 79 4gb
es_master_2_1 4.9gb 61 8gb
es_data_ssd_3_3 6.7gb 84 8gb
es_data_ssd_3_1 6.6gb 83 8gb
es_master_1_3 3.9gb 49 8gb
es_data_hdd_6_1 3gb 75 4gb
es_data_ssd_4_3 6.7gb 84 8gb
es_data_ssd_1_3 7gb 87 8gb
es_data_hdd_9_1 3gb 77 4gb
es_data_ssd_1_2 7gb 87 8gb
es_data_hdd_1_1 3.3gb 83 4gb
es_data_hdd_7_1 3.2gb 81 4gb
es_data_ssd_5_1 7.3gb 92 8gb
es_data_ssd_2_2 7.5gb 93 8gb
es_data_hdd_4_1 3gb 77 4gb
es_master_1_2 2.1gb 26 8gb
es_data_hdd_9_3 2.6gb 66 4gb
es_data_hdd_3_3 3gb 76 4gb
es_data_hdd_7_2 3gb 77 4gb
es_data_hdd_3_2 3.2gb 82 4gb
es_data_hdd_6_2 3.1gb 78 4gb
es_data_hdd_6_3 3.3gb 82 4gb
es_data_hdd_3_1 3.2gb 81 4gb
es_data_ssd_5_3 6.6gb 83 8gb
es_data_hdd_4_2 3.4gb 85 4gb
es_data_hdd_2_3 3.3gb 83 4gb
es_data_ssd_3_3_ingest 1gb 25 4gb
es_data_hdd_4_3 2.9gb 74 4gb
es_data_hdd_2_1 3.2gb 81 4gb
es_master_1_1 3.9gb 49 8gb
es_data_hdd_8_3 2.3gb 59 4gb
es_data_ssd_2_3 6.7gb 84 8gb
es_data_ssd_3_1_ingest 1.5gb 39 4gb
es_data_ssd_4_1 7.2gb 90 8gb
es_data_ssd_4_2 7.1gb 89 8gb
es_data_hdd_5_2 2.9gb 73 4gb
es_data_ssd_5_2 6.9gb 87 8gb
es_data_hdd_1_2 3.2gb 81 4gb
es_data_hdd_5_1 2.9gb 74 4gb
es_data_hdd_2_2 3.2gb 80 4gb
es_data_hdd_9_2 3gb 75 4gb
es_data_ssd_3_2 6.9gb 86 8gb
es_master_2_2 3.7gb 47 8gb
es_master_2_3 5.3gb 67 8gb
es_data_ssd_3_2_ingest 2.7gb 68 4gb
es_data_ssd_2_1 6.8gb 85 8gb
es_data_hdd_8_1 2.5gb 64 4gb
es_data_hdd_8_2 2.7gb 68 4gb
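For comparison, I believe the same cat nodes API can also print the RAM each node sees next to its heap (ram.* should be standard column names), along these lines:
GET /_cat/nodes?v=true&h=name,heap.max,ram.current,ram.percent,ram.max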
But in reality some of the above nodes show a different configuration in docker stats:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8b97ff511267 elk_cluster_es_data_hdd_7_1.p27vjq8f4kuoukuokn1gov8b3.vc69yp0nssx0nrpr44gmpk578 6.55% 6.792GiB / 8GiB 84.90% 4.34TB / 465GB 480GB / 905GB 381
2f3c1c355518 elk_cluster_es_data_ssd_3_1.p27vjq8f4kuoukuokn1gov8b3.in6226uwtu065xld6b4hwa45a 1160.25% 12.76GiB / 16GiB 79.78% 9.98TB / 13.1TB 62.3TB / 17.3TB 480
66a6f80a2dff elk_cluster_es_data_hdd_5_1.p27vjq8f4kuoukuokn1gov8b3.jh8ultg35b6fzklkwilp4x200 1.25% 6.715GiB / 8GiB 83.94% 4.74TB / 420GB 378GB / 1.03TB 411
b0cfb2746d03 elk_cluster_es_data_ssd_5_1.p27vjq8f4kuoukuokn1gov8b3.0507i3est41tp3b41shhquyv8 952.70% 12.68GiB / 16GiB 79.23% 8.1TB / 14.7TB 49.6TB / 16TB 458
06216051ccb6 elk_cluster_es_data_ssd_3_1_ingest.p27vjq8f4kuoukuokn1gov8b3.wj8x97b2xfdw6iz6js5m11v3n 1.55% 5.184GiB / 8GiB 64.80% 49.9GB / 84GB 4.58MB / 812MB 225
4e381db0a6f1 elk_cluster_es_data_hdd_3_1.p27vjq8f4kuoukuokn1gov8b3.s8s2e0sv5wzydjq3jfcgrtxp1 6.02% 6.816GiB / 8GiB 85.20% 4.18TB / 409GB 400GB / 903GB 421
032cc6bb3888 elk_cluster_es_data_hdd_1_1.p27vjq8f4kuoukuokn1gov8b3.3f55p0ywhlto3xlk3bxzbawim 0.93% 6.786GiB / 8GiB 84.83% 4.38TB / 679GB 490GB / 962GB 382
5fc14fe7ce29 elk_cluster_es_master_1_1.p27vjq8f4kuoukuokn1gov8b3.j6s5s34yyf0gyrqkz3qx75x9u 53.91% 10.3GiB / 16GiB 64.36% 88.9TB / 24.7TB 88.4MB / 12.1GB 383
35adca19c043 elk_cluster_es_data_hdd_9_1.p27vjq8f4kuoukuokn1gov8b3.nfyyzuixdjoks0trfqqf68ljm 1.93% 6.692GiB / 8GiB 83.66% 3.78TB / 683GB 616GB / 824GB 389
355e03341d6f elk_cluster_es_master_2_1.p27vjq8f4kuoukuokn1gov8b3.xgobz12pe4uefpq9seyqjaedf 0.38% 9.63GiB / 16GiB 60.19% 1.69TB / 421GB 32MB / 12GB 246
34629ac77448 elk_cluster_es_data_ssd_1_1.p27vjq8f4kuoukuokn1gov8b3.zlll2wqgx5gnoem9jmelw75gs 938.90% 12.79GiB / 16GiB 79.92% 9.88TB / 10.2TB 54.5TB / 16TB 481
fc670f95e343 elk_cluster_es_data_hdd_2_1.p27vjq8f4kuoukuokn1gov8b3.plbrj7k0x9dt4nfc4qo5unkjg 1.11% 6.761GiB / 8GiB 84.51% 4.14TB / 339GB 316GB / 915GB 388
d3acc4b336e5 elk_cluster_es_data_ssd_2_1.p27vjq8f4kuoukuokn1gov8b3.wu0frytfzjqisat37vg8hq0dd 278.09% 12.3GiB / 16GiB 76.87% 8.82TB / 13.5TB 52.6TB / 15.8TB 493
978a19b0b8f4 elk_cluster_es_data_hdd_4_1.p27vjq8f4kuoukuokn1gov8b3.tupz4bdagv6ep5z9mqey7asf6 4.78% 6.822GiB / 8GiB 85.27% 4.75TB / 529GB 545GB / 1.05TB 386
0ad87ae29c70 elk_cluster_es_data_hdd_8_1.p27vjq8f4kuoukuokn1gov8b3.n5hvnl7ktvehn65fggggqi2xw 1.58% 6.74GiB / 8GiB 84.26% 3.68TB / 527GB 431GB / 765GB 386
8bc06f3981d6 elk_cluster_es_data_hdd_6_1.p27vjq8f4kuoukuokn1gov8b3.w66hh44os7jxr27gpk7mnwrj6 1.11% 6.797GiB / 8GiB 84.96% 4.18TB / 406GB 349GB / 902GB 389
3ceaa3586fe0 elk_cluster_es_data_ssd_4_1.p27vjq8f4kuoukuokn1gov8b3.3wcoqzvft4i7j7lo7kcalcwt6 25.75% 12.65GiB / 16GiB 79.09% 10.3TB / 11.3TB 61.1TB / 17.2TB 495
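To double-check which heap size each JVM actually picked up (as opposed to the container memory limit), something like the following should work; the filter_path only trims the response and can be dropped:
GET /_nodes/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_max_in_bytes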
Please, can you explain this to me?
My question and issue were prompted by the fact that I am getting many circuit_breaking_exception errors:
:error=>{"type"=>"circuit_breaking_exception", "reason"=>"[parent] Data too large, data for [indices:data/write/bulk[s]] would be [8448556430/7.8gb], which is larger than the limit of [8160437862/7.5gb], real usage: [8448288200/7.8gb], new bytes reserved: [268230/261.9kb], usages [request=8028272/7.6mb, fielddata=238005396/226.9mb, eql_sequence=0/0b, model_inference=0/0b, inflight_requests=282664/276kb]", "bytes_wanted"=>8448556430, "bytes_limit"=>8160437862, "durability"=>"PERMANENT"}}
[INFO ] 2022-12-19 09:47:05.453 [[hlr91]>worker0] elasticsearch - Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>21}
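If it helps with diagnosing this, I can also share the per-node circuit breaker statistics, e.g. from:
GET /_nodes/stats/breaker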