I think my heap is sized right but searches are still super slow


(red der) #1

The config on my nodes is:

  • Running version 5.3.0 of Elasticsearch, Logstash, and Kibana
  • 8 CPUs
  • 250 GB SSD drives
  • 40 GB of RAM per node
  • 16 GB allocated to the heap
  • 5 ES nodes
  • Index configured for 5 shards, 1 replica
  • Average shard size for the index is about 20 GB (a little over 20,000,000 docs)
  • Average CPU utilization on the nodes doesn't increase past 10%
  • Just searching in the Kibana Discover page, without any visualizations or aggs
  • Searches over the past few hours are very quick, but searching back more than 12 hours takes almost a minute, and 24 hours can take longer than 90 seconds

The bulk thread pool queue size on my nodes hovers around 1-2 consistently and I see no other queue types build up.

I see no rejections.
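Since the slowness is on the read side, it may be worth checking the search thread pool specifically, not just bulk. A quick sketch, assuming access to the cluster's REST API (Kibana Dev Tools / console syntax):

```
GET /_cat/thread_pool/search?v&h=node_name,name,active,queue,rejected
```

If `queue` climbs or `rejected` is non-zero here while a slow 24-hour search runs, the search pool is the bottleneck rather than the heap.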

Here is a chart of my heap metrics: allocated heap (16 GB, the green line) and heap in use (the blue line):

All my ES nodes exhibit this same pattern.

The heap in use rises to about 10.84 GB before dropping off. From what I understand this is a healthy pattern for this metric: each node has 16 GB of heap, and GC starts when heap in use hits 75% (which would be 12 GB).
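The same numbers can be pulled from the nodes stats API to double-check what the chart shows; a sketch (the `filter_path` trimming is just for readability):

```
GET /_nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.gc.collectors
```

A sawtooth that tops out below the 75% threshold with short, infrequent collections matches that healthy pattern.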

So why are my searches still slow :(. When I search back 12 hours or more it churns, and occasionally Kibana times out (I currently have the Kibana timeout set to 90 seconds).

I know this could also depend on my index itself, but I want to rule out any server config first because I like my index the way it is.

What other metrics should I investigate here? I can give the nodes more memory, but should I also allocate more of it to the heap, or is the heap in a good place right now? From what I understand, non-heap memory is used by Lucene, so more of it would improve search performance, right?
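Roughly, yes: memory not given to the heap is available to the OS filesystem cache, which Lucene relies on heavily for search. The common guidance is to keep the heap at or below 50% of RAM (and under ~32 GB so compressed object pointers stay enabled), so a 16 GB heap on a 40 GB node already fits that. In 5.x the setting lives in `config/jvm.options`:

```
# ~16 GB heap leaves ~24 GB of the 40 GB RAM for the OS page cache that Lucene reads rely on
-Xms16g
-Xmx16g
```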


(Mark Walkom) #2

What version? What sort of aggs across what sort of data?
Is that the Monitoring plugin?


(Christian Dahlqvist) #3

What does CPU usage look like? What is your average shard size?


(red der) #4

Updated my post with more info on shard size etc.

CPU seems way under-utilized. I have made no changes to ES thread pool configs or anything like that yet.


(red der) #5

I thought about doubling the thread pool size for indexing, but the Elasticsearch docs say that is usually a no-no. Do you think it's permissible in my case?

I could give them more CPUs, but I'm not sure that would improve query time. The ES docs actually say Elasticsearch is generally low on CPU (relative to other resources) and recommend around 8 cores per node. I could give them 12 cores; do you think that would help?
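Before adding cores, it might be more telling to capture what the existing CPUs are doing while a slow 24-hour search is running; the hot threads API is one way to do that (a sketch):

```
GET /_nodes/hot_threads?threads=3&interval=500ms
```

If the output shows search threads busy on only a few nodes, the limit is per-shard parallelism rather than total core count.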


(Christian Dahlqvist) #6

What kind of hardware is this cluster deployed on? Is it bare metal or VMs?

When queries get slow, how many shards are you querying? How are you querying - what kind of filters are you applying?


(red der) #7

VMs, usually just 5 to 10 shards. I'm just using the Discover pane in Kibana.

But with my current config are there any red flags?


Will increasing the number of shards utilize more CPU and improve performance?
(Christian Dahlqvist) #8

I do not see anything obviously wrong with it, but as you are using VMs I would recommend that you verify that Elasticsearch actually has access to the resources you have specified. Memory ballooning or overallocation of CPU can have a significant negative impact on performance.

Each query or aggregation runs single-threaded against each shard, so as you are querying a reasonably small number of shards you may not be able to use all your CPUs. If you are running some type of expensive query or aggregation, e.g. wildcard queries, this could result in slow performance, as each single shard may take a while to process.
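One way to see where a slow request actually spends its time is the profile API. A sketch, using a hypothetical `logstash-*` index pattern and `@timestamp` field (Discover issues a time-range filter much like this):

```
GET /logstash-*/_search
{
  "profile": true,
  "size": 0,
  "query": {
    "bool": {
      "filter": {
        "range": {
          "@timestamp": { "gte": "now-24h", "lte": "now" }
        }
      }
    }
  }
}
```

The response breaks the time down per shard and per query component, which shows directly whether a few large shards dominate the total.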


(red der) #9

Cool, thanks a lot for the tips! All the memory allocated to the VMs is reserved, so fortunately issues related to dynamic allocation and ballooning can be ruled out.

As far as resource contention on the CPUs goes, I don't see anything there, but I could dig into that further.

I said 5-10 shards, but really it would probably be double that because I wasn't thinking about replicas. So more like 10 to 20.

My shards are quite large, but I have the index configured for one shard per node, which I believe is usually ideal. But as you say, my shards are big and few in number.

Would breaking up my index into more shards help me get more out of my CPUs?
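If it would, I assume I'd have to reindex, since the primary shard count can't be changed on an existing index in 5.x; something like this sketch, with hypothetical index names:

```
PUT /logs-v2
{
  "settings": { "number_of_shards": 10, "number_of_replicas": 1 }
}

POST /_reindex
{
  "source": { "index": "logs-v1" },
  "dest": { "index": "logs-v2" }
}
```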

Thanks again


(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.