Understanding Elasticsearch performance and correlation with hardware specs

Hello,

Recently I've been running a lot of tests on a machine with very limited hardware, in particular only 4 GB of RAM.
Running Elasticsearch plus Kibana and a couple of Logstash instances, I keep hitting situations in which Kibana shows me the Elasticsearch timeout (>30000ms).
I think this is a valuable opportunity to get some practice with ELK requirements.

However, I don't understand the reasons why Kibana is so slow. First things first: Elasticsearch and Kibana are not showing any particular errors, and the Logstash instances are sending data regularly.
Nevertheless, working in Kibana is very slow and I can't understand why. Probably not a surprise, but the Discover panel is the slowest page.

On our production server the machine has 16 GB of RAM, and I think this helps a lot. But... how can I tell when Elasticsearch is being "pushed to the limit"? :slight_smile:

I found some very nice documentation (like https://www.elastic.co/blog/found-sizing-elasticsearch), but I think it would be valuable to discuss and summarize where one should look to assess whether Elasticsearch's performance is adequate for the hardware specs it is running on.

Thanks :slight_smile:

It'd be worth looking at the hot threads output and seeing if you had any GCs during the timeouts. Also check for swapping. And iostat.
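Roughly, something like this would surface GC pressure, swapping, and disk saturation (assuming Elasticsearch is listening on the default localhost:9200):

```
# Hot threads: which threads are burning CPU right now
curl -s 'localhost:9200/_nodes/hot_threads?threads=5'

# JVM stats: watch heap_used_percent and the old-gen GC collection counts/times
curl -s 'localhost:9200/_nodes/stats/jvm?pretty'

# Swapping: the si/so columns should stay at 0
vmstat 1 5

# Disk: per-device utilisation, refreshed every second
iostat -x 1
```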

Hi, at the bottom of this message is the (partial, due to character limits on this forum) hot threads output (_nodes/hot_threads) captured during a long search (without a timeout, though). Frankly, I can't make much sense of it :expressionless: Could you give me some advice on what I should look for?

Swapping: as far as I can tell, my swap partition has never been used. RAM consumption by Elasticsearch is about 60% of the total RAM available.
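For reference, this is roughly how I checked (the _nodes/stats/os part assumes the default localhost:9200 endpoint):

```
# OS view: swap "used" has stayed at 0
free -m
swapon --show

# Elasticsearch's own view of memory and swap per node
curl -s 'localhost:9200/_nodes/stats/os?pretty'
```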

Iostat:

Linux 4.4.0-47-generic (ubuntu) 	11/20/2016 	_x86_64_	(2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          54.97    0.10    7.17    1.53    0.00   36.23

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             186.95      1686.27      1037.66     992321     610632

BTW, I'm starting Elasticsearch with the ES_JAVA_OPTS="-Xms2g -Xmx2g" heap size setting, basing these parameters on the instructions written here: https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
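In case it matters, this is how I'm verifying that the 2 GB heap is actually being picked up (again assuming the default HTTP port):

```
# Node info: heap_init and heap_max should both report ~2 GB
curl -s 'localhost:9200/_nodes/jvm?pretty'

# Quick per-node view of heap and RAM pressure
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent'
```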

Thanks for the reply!!! :smiley:

_nodes/hot_threads output during a Kibana Discover search:

::: {ducktales-1}{eSpw3bV4RKmtCRwny35ffA}{9-onfn9sSGWbW9YSzAzXrA}{172.18.0.2}{172.18.0.2:9300}
   Hot threads at 2016-11-20T19:40:21.347Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
   
   38.6% (193.1ms out of 500ms) cpu usage by thread 'elasticsearch[ducktales-1][search][T#1]'
     2/10 snapshots sharing following 33 elements
       org.apache.lucene.util.IntBlockPool.nextBuffer(IntBlockPool.java:155)
       org.apache.lucene.util.IntBlockPool.newSlice(IntBlockPool.java:168)
       org.apache.lucene.util.IntBlockPool.access$200(IntBlockPool.java:26)
       org.apache.lucene.util.IntBlockPool$SliceWriter.startNewSlice(IntBlockPool.java:274)
       org.apache.lucene.index.memory.MemoryIndex.storeTerms(MemoryIndex.java:623)
       org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:526)
       org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:496)
       org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:472)
       org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:447)
       org.apache.lucene.index.memory.MemoryIndex.addField(MemoryIndex.java:364)
       
[...]

You can get around the character limits by linking to a gist.

Though it is cut off, I wonder if the MemoryIndex there is from highlighting. I've seen that come up before. In that case a multi-term query like a* or aaaa~ can cause trouble.
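To illustrate, a request along these lines (hypothetical index pattern, with Kibana-style highlighting on every field) is the kind of thing that can end up building a per-hit MemoryIndex for wildcard/fuzzy terms, which would match that stack trace:

```
curl -s -H 'Content-Type: application/json' 'localhost:9200/logstash-*/_search?pretty' -d '{
  "query": { "query_string": { "query": "a*" } },
  "highlight": { "fields": { "*": {} } },
  "size": 50
}'
```

If I remember right, Kibana has a doc_table:highlight advanced setting you could switch off to test whether highlighting is the culprit in Discover.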

The -Xms/-Xmx setting is fine.

No swapping is good. It looks like something nasty is eating the CPU, like the highlighting I was guessing at. Can you post a gist of the whole hot_threads output?
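Something like this would grab a few full samples for the gist (adjust host/port if needed):

```
# Take three full hot_threads samples, a few seconds apart
for i in 1 2 3; do
  curl -s 'localhost:9200/_nodes/hot_threads?threads=10' >> hot_threads.txt
  sleep 5
done
```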
