Recently I've been doing a lot of tests on a machine with very limited hardware, in particular only 4 GB of RAM.
Running Elasticsearch plus Kibana and a couple of Logstash instances, I keep running into situations in which Kibana shows me an Elasticsearch timeout error (>30000ms).
I think this is a valuable opportunity to get some practice with the ELK stack's hardware requirements.
However, I don't understand why Kibana is so slow. First things first: Elasticsearch and Kibana are not showing any particular errors, and the Logstash instances are sending data regularly.
Nevertheless, working in Kibana is very slow and I can't figure out why. Probably it's no surprise that the Discover panel is the slowest page.
On our production server the machine has 16 GB of RAM, and I think this helps a lot. But... how can I tell when Elasticsearch is being "pushed to the limit"?
I found some very nice documentation (like https://www.elastic.co/blog/found-sizing-elasticsearch), but I think it would be valuable to discuss and summarize where one should look to assess whether Elasticsearch's performance is adequate for the hardware specs it is running on.
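For example, I assume a first check would be something like the sketch below: heap pressure, old-generation GC activity, and search thread pool rejections from the _nodes/stats API. The host, port, and lack of authentication are assumptions on my side.

```python
# Minimal sketch, assuming Elasticsearch is reachable at localhost:9200
# with no authentication: pull the JVM and thread pool stats that usually
# reveal memory pressure on an undersized node.
import json
import urllib.request

STATS_URL = "http://localhost:9200/_nodes/stats/jvm,thread_pool"

with urllib.request.urlopen(STATS_URL) as resp:
    stats = json.load(resp)

for node_id, node in stats["nodes"].items():
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]
    old_gc = node["jvm"]["gc"]["collectors"]["old"]
    search_pool = node["thread_pool"]["search"]
    print(f"node {node['name']}")
    # a heap that stays high (say, above ~75%) leaves little headroom for searches
    print(f"  heap used:         {heap_pct}%")
    # frequent or long old-generation GCs are a classic sign of memory pressure
    print(f"  old GC count/time: {old_gc['collection_count']} / "
          f"{old_gc['collection_time_in_millis']} ms")
    # rejected searches mean the search queue filled up
    print(f"  search rejections: {search_pool['rejected']}")
```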
Hi, at the bottom of this message is the hot threads output (_nodes/hot_threads) captured during a long search (one that did not time out, though); it is partial due to the character limits on this forum. Frankly, I can't make sense of the information in there. Could you give me some advice on what I should look for?
Regarding swapping: AFAIK my swap partition has never been used. RAM consumption by Elasticsearch is about 60% of the total available RAM.
::: {ducktales-1}{eSpw3bV4RKmtCRwny35ffA}{9-onfn9sSGWbW9YSzAzXrA}{172.18.0.2}{172.18.0.2:9300}
Hot threads at 2016-11-20T19:40:21.347Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
38.6% (193.1ms out of 500ms) cpu usage by thread 'elasticsearch[ducktales-1][search][T#1]'
2/10 snapshots sharing following 33 elements
org.apache.lucene.util.IntBlockPool.nextBuffer(IntBlockPool.java:155)
org.apache.lucene.util.IntBlockPool.newSlice(IntBlockPool.java:168)
org.apache.lucene.util.IntBlockPool.access$200(IntBlockPool.java:26)
org.apache.lucene.util.IntBlockPool$SliceWriter.startNewSlice(IntBlockPool.java:274)
org.apache.lucene.index.memory.MemoryIndex.storeTerms(MemoryIndex.java:623)
org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:526)
org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:496)
org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:472)
org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:447)
org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:364)
[...]
You can get around the character limits by linking to a gist.
Though it is cut off, I wonder if the MemoryIndex there is from highlighting. I've seen that come up before. In that case a multi-term query like a* or aaaa~ can cause trouble.
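A quick way to test that theory is to run the same kind of multi-term query with and without highlighting and compare the reported took times. A rough sketch; the host, index pattern, and query below are just placeholders:

```python
# Minimal sketch to check whether highlighting is the bottleneck: run the
# same wildcard query with and without a highlight section and compare the
# reported "took" times. Index pattern, host, and query are placeholders.
import json
import urllib.request

SEARCH_URL = "http://localhost:9200/logstash-*/_search"

def search(body):
    req = urllib.request.Request(
        SEARCH_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# a multi-term query of the kind that can make highlighting expensive
query = {"query": {"query_string": {"query": "a*"}}, "size": 50}

plain = search(query)
highlighted = search({**query, "highlight": {"fields": {"*": {}}}})

print("without highlighting:", plain["took"], "ms")
print("with highlighting:   ", highlighted["took"], "ms")
```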
The -Xms/-Xmx settings are fine.
No swapping is good. It looks like something nasty is eating the CPU, like the highlighting I was guessing at. Can you post a gist of the whole hot_threads output?
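If it helps, something like this rough sketch (assuming Elasticsearch on localhost:9200 without authentication, and a made-up output path) will save the whole output to a file you can paste into a gist:

```python
# Minimal sketch for capturing the complete hot_threads output to a file so
# it can be shared as a gist without hitting the forum's character limit.
# Host and output path are assumptions.
import urllib.request

URL = "http://localhost:9200/_nodes/hot_threads?threads=10&interval=500ms"

with urllib.request.urlopen(URL) as resp:
    output = resp.read().decode("utf-8")

with open("hot_threads.txt", "w") as f:
    f.write(output)

print(f"wrote {len(output)} characters to hot_threads.txt")
```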