Hi,
We have a one-node cluster where JvmGcMonitorService warns about GC overhead, like this in the logfile:
[2018-10-02T13:03:24,173][WARN ][o.e.m.j.JvmGcMonitorService] [VzhkwYM] [gc][73354] overhead, spent [1.1s] collecting in the last [1.6s]
[2018-10-02T13:03:25,345][WARN ][o.e.m.j.JvmGcMonitorService] [VzhkwYM] [gc][young][73355][499] duration [1.1s], collections [1]/[1.1s], total [1.1s]/[7m], memory [4.2gb]->[4.8gb]/[7.8gb], all_pools {[young] [115.7mb]->[24.3mb]/[865.3mb]}{[survivor] [108.1mb]->[108.1mb]/[108.1mb]}{[old] [3.9gb]->[4.7gb]/[6.9gb]}
[2018-10-02T13:03:25,345][WARN ][o.e.m.j.JvmGcMonitorService] [VzhkwYM] [gc][73355] overhead, spent [1.1s] collecting in the last [1.1s]
[2018-10-02T13:03:26,673][WARN ][o.e.m.j.JvmGcMonitorService] [VzhkwYM] [gc][young][73356][500] duration [1.2s], collections [1]/[1.3s], total [1.2s]/[7m], memory [4.8gb]->[5.6gb]/[7.8gb], all_pools {[young] [24.3mb]->[10.7mb]/[865.3mb]}{[survivor] [108.1mb]->[108.1mb]/[108.1mb]}{[old] [4.7gb]->[5.5gb]/[6.9gb]}
[2018-10-02T13:03:26,673][WARN ][o.e.m.j.JvmGcMonitorService] [VzhkwYM] [gc][73356] overhead, spent [1.2s] collecting in the last [1.3s]
The server has much more RAM than I think it should need, but what do I know.
heap_used_percent doesn't seem to go over 75; during these messages it's around 45%-50%.
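For reference, this is roughly how I check heap_used_percent (a quick sketch against the node stats API on localhost, same endpoint as the commands below):
# Sketch: print heap_used_percent for every node (we only have the one)
$stats = Invoke-RestMethod -Method Get -Uri "http://localhost:9200/_nodes/stats/jvm" -UseBasicParsing
$stats.nodes.PSObject.Properties.Value | ForEach-Object { $_.jvm.mem.heap_used_percent }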
I should add that we have bought an application that runs ES as its storage, and it's running 5.3.2.
1 node, 9 indices, 45 shards (5 primaries + 1 replica per index).
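To double-check that layout I list the shards like this (using the _cat/shards API):
# Sketch: one row per shard, with state and size
Invoke-RestMethod -Method Get -Uri "http://localhost:9200/_cat/shards?v" -UseBasicParsing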
By running this command:
(Invoke-RestMethod -Method Get -Uri "http://localhost:9200/_cluster/stats?human&pretty" -UseBasicParsing).indices.docs.count
I get 170 705 283 (seems high)
When I run this:
(Invoke-RestMethod -Method Get -Uri "http://localhost:9200/*/_search" -UseBasicParsing).hits.total
I get 3 552 172
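To compare the two numbers per index, I can also run this (a sketch using the _cat/indices API):
# Sketch: per-index doc counts next to deleted docs and store size
Invoke-RestMethod -Method Get -Uri "http://localhost:9200/_cat/indices?v&h=index,docs.count,docs.deleted,store.size" -UseBasicParsing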
What's the difference?
indices.store: 74GB
indices.fielddata.memory_size: 24mb
indices.segments:
count : 763
memory : 150mb
memory_in_bytes : 157368292
terms_memory : 121.7mb
terms_memory_in_bytes : 127707103
stored_fields_memory : 6.8mb
stored_fields_memory_in_bytes : 7136656
term_vectors_memory : 0b
term_vectors_memory_in_bytes : 0
norms_memory : 3.2mb
norms_memory_in_bytes : 3401344
points_memory : 1.6mb
points_memory_in_bytes : 1705713
doc_values_memory : 16.6mb
doc_values_memory_in_bytes : 17417476
index_writer_memory : 0b
index_writer_memory_in_bytes : 0
version_map_memory : 0b
version_map_memory_in_bytes : 0
fixed_bit_set : 156.5mb
fixed_bit_set_memory_in_bytes : 164169744
max_unsafe_auto_id_timestamp : -1
file_sizes :
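All of the numbers above come from the same _cluster/stats call, e.g.:
# Sketch: just the segments section of the cluster stats
(Invoke-RestMethod -Method Get -Uri "http://localhost:9200/_cluster/stats?human" -UseBasicParsing).indices.segments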
Running on:
Windows Server 2012 R2
JVM:
-Xms8g
-Xmx8g
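The heap settings can be confirmed through the node info API (a quick sketch; ?human adds the readable heap_init/heap_max fields next to the byte counts):
$info = Invoke-RestMethod -Method Get -Uri "http://localhost:9200/_nodes/jvm?human" -UseBasicParsing
$info.nodes.PSObject.Properties.Value | ForEach-Object { $_.jvm.mem }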
I'll also add that we have a lot of nested fields in our documents, and we also index documents like PDF and DOC files.
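I don't have an exact count of the nested fields; this is the rough way I look for them, just dumping all the mappings to JSON and grepping, so treat it as a sketch:
# Sketch: count how many fields in the mappings are typed "nested"
$json = (Invoke-RestMethod -Method Get -Uri "http://localhost:9200/_mapping" -UseBasicParsing) | ConvertTo-Json -Depth 50
($json -split "`r?`n" | Select-String '"type":\s*"nested"').Count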
I don't really know where to start looking; can anyone point me in the right direction?
Could it be too many shards per index with so many documents?