The unit is MB (converted from bytes) and I am using the _nodes/stats API:
heap_used_in_bytes = 4298.34
heap_used_percent = 26
heap_max_in_bytes = 16067.875
What is the memory used for? Everything else that might need to be loaded is 0.0:
segment_memory_in_bytes = 0.0
terms_memory_in_bytes = 0.0
stored_fields_memory_in_bytes = 0.0
term_vectors_memory_in_bytes = 0.0
norms_memory_in_bytes = 0.0
points_memory_in_bytes = 0.0
doc_values_memory_in_bytes = 0.0
index_writer_memory_in_bytes = 0.0
request_cache = 0.0
query_cache = 0.0
fielddata = 0.0
Hi @panxuelin
Have you indexed any documents into this cluster?
It's a new cluster with no documents.
What do _cat/nodes?v and _cat/indices?v show?
[root@node4 ~]# curl localhost:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
28.28.0.4 26 16 0 0.06 0.07 0.05 mdi * node4
It's the same as the _nodes/stats API output.
Has anybody encountered the same problem? I just created a new cluster and use the Python client to monitor it. heap_used_percent is always high, ranging from 12 to 26 percent:
from elasticsearch import Elasticsearch

# Build the client with retry settings, then fetch node stats.
self.es_rest = Elasticsearch(
    server,
    timeout=int(self.timeout),
    max_retries=int(self.max_retries),
    retry_on_timeout=True)
stats_response = self.es_rest.nodes.stats()
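The heap numbers can then be read out of the response along these lines (a sketch; field names as in the _nodes/stats response):

# Walk the per-node stats and print the JVM heap figures.
for node in stats_response["nodes"].values():
    mem = node["jvm"]["mem"]
    print(node["name"],
          mem["heap_used_in_bytes"],
          mem["heap_used_percent"],
          mem["heap_max_in_bytes"])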
There are a lot of things that will use heap apart from data. Elasticsearch itself uses a certain amount, and every request to the cluster is likely to use a bit more, including requests not directly related to data. If there is no data in the cluster, heap usage typically grows slowly, but it will grow over time, as garbage collection usually only kicks in when the heap is around 75% full.
The best way to see the static overhead is to restart the node and measure heap usage before any requests have been served.
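For example, a minimal sketch (assuming a single node on localhost:9200) that samples the heap right after startup could look like this:

import time
from elasticsearch import Elasticsearch

# Assumed host; point this at the freshly restarted node.
es = Elasticsearch("localhost:9200")

# Sample heap usage a few times right after startup, before any
# other requests hit the cluster, to capture the static overhead.
for _ in range(5):
    jvm = es.nodes.stats(metric="jvm")
    for node in jvm["nodes"].values():
        mem = node["jvm"]["mem"]
        print(node["name"], mem["heap_used_percent"], "% of",
              mem["heap_max_in_bytes"], "bytes")
    time.sleep(60)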
My cluster is a new cluster with no other requests to cache. I tried restarting Elasticsearch and sending the first request after 5 minutes. The result shows that the memory size of query_cache, fielddata and segments is 0. Is there any API to show what Elasticsearch is using 12% of the heap for?
"query_cache" : {
"memory_size_in_bytes" : 0,
"total_count" : 0,
"hit_count" : 0,
"miss_count" : 0,
"cache_size" : 0,
"cache_count" : 0,
"evictions" : 0
},
"fielddata" : {
"memory_size_in_bytes" : 0,
"evictions" : 0
},
"completion" : {
"size_in_bytes" : 0
},
"segments" : {
"count" : 0,
"memory_in_bytes" : 0,
"terms_memory_in_bytes" : 0,
"stored_fields_memory_in_bytes" : 0,
"term_vectors_memory_in_bytes" : 0,
"norms_memory_in_bytes" : 0,
"points_memory_in_bytes" : 0,
"doc_values_memory_in_bytes" : 0,
"index_writer_memory_in_bytes" : 0,
"version_map_memory_in_bytes" : 0,
"fixed_bit_set_memory_in_bytes" : 0,
"max_unsafe_auto_id_timestamp" : -9223372036854775808,
"file_sizes" : { }
}
"jvm" : {
"timestamp" : 1572331615124,
"uptime_in_millis" : 107789,
"mem" : {
"heap_used_in_bytes" : 2121881128,
"heap_used_percent" : 12,
"heap_committed_in_bytes" : 16848388096,
"heap_max_in_bytes" : 16848388096,
"non_heap_used_in_bytes" : 72268584,
"non_heap_committed_in_bytes" : 79777792,
That looks high for a completely empty cluster. Even though you do not have any data in the cluster, have you uploaded anything else, e.g. index templates, synonyms, etc.? Do you have any plugins installed that might use a lot of heap? Do you have any non-default settings in your elasticsearch.yml file?
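One way to narrow it down is the per-GC-pool breakdown that the same jvm section of _nodes/stats reports. A quick sketch (host assumed) along these lines shows where inside the heap the used memory sits:

from elasticsearch import Elasticsearch

# Assumed host; adjust to your node.
es = Elasticsearch("localhost:9200")

# The jvm section breaks the heap into GC pools (young/survivor/old).
jvm = es.nodes.stats(metric="jvm")
for node in jvm["nodes"].values():
    for pool, usage in node["jvm"]["mem"]["pools"].items():
        print(node["name"], pool,
              usage["used_in_bytes"], "of", usage["max_in_bytes"], "bytes")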
Completely new cluster with no plugins and no documents. I just modified elasticsearch.yml:
http.cors.enabled: true
http.cors.allow-origin: "*"
thread_pool.write.size: 32
thread_pool.search.size: 64
thread_pool.get.size: 32
thread_pool.write.queue_size: 100
thread_pool.search.queue_size: 3000
thread_pool.get.queue_size: 2000
http.max_content_length: 500mb
network.publish_host: 28.28.0.4
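For what it's worth, the effective pool sizes and queue limits can be checked with the cat thread pool API (a sketch, same assumed host):

from elasticsearch import Elasticsearch

# Assumed host; adjust to your node.
es = Elasticsearch("localhost:9200")

# List the configured size and queue limit of the tuned pools.
print(es.cat.thread_pool(
    thread_pool_patterns="write,search,get",
    h="node_name,name,size,queue_size",
    v=True))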
Does the size change if you comment those out and go back to defaults?
How did you arrive at these custom settings? Why are they required? Elasticsearch is not designed to handle very large documents, so the large HTTP request size limit is a bit concerning.
http.max_content_length is used to limit the bulk request size, and our documents are small; it is just to avoid extreme cases.
I think these custom settings have nothing to do with the cluster's heap usage right now. I am only sending the _nodes/stats request.
Did going with defaults make any difference?
No, same as last time.
Another way to poke around for answers here might be to take some heap dumps and see if anything obvious stands out. Below I've recorded a brief screencast of doing this using VisualVM.
Warning: mucking about with heap dumps and profilers in a production cluster is at your own risk!
Edit: That upload resized the gif so it's too small to read. Try this: https://giphy.com/gifs/U7PeHMxToTLIRMtw2D
Thank you so much. I will give it a try. But the video is blurry; could you post a clearer one?