Need some expert query performance help - cluster is in bad shape

We have a 7-node ES cluster in EC2, and query performance is far worse than I would expect given the load. I have tried rewriting the query syntax from filters to queries (as our retrieval is all against non_analyzed fields with exact term matches), but that has brought no real improvement. We frequently see spikes to 100% CPU. Recently one node's search thread pool queue went to 1,000 (while at 100% CPU) and stayed there for about 15 minutes. Searches in the slow log were taking 8 to 10 seconds to complete. We seem to go through periods where a tiny bit of additional load puts the cluster on the brink of tipping over.

Here is a snippet of the search syntax => https://gist.github.com/jaydanielian/e374a401560f3e3b1812#file-gistfile1-txt

Here is a snapshot from hot_threads =>

  • We are using EBS for disk storage, and I do see a fair amount of disk reads, but I am wondering why that would also cause CPU to spike and hold at 100%.
  • I see several filter cache evictions per second (4 or 5), even though I don't use filters anymore since our searches are all specific (not going to be reused), so I'm not sure why the cache is still churning (see the quick check sketched just after this list).
  • We have custom routing enabled, so we only hit one shard per query.
  • Our index is one segment and optimized, as it is read-only.
  • We have six shards with 820 million docs total in the index (118 GB).
  • We use the multi-search API to batch our requests together, usually in chunks of 50 or 100.
  • During these CPU spikes it is not uncommon for 2 or 3 nodes to be at 100% CPU while the others are virtually idle.
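
For reference, this is how I have been watching the cache churn. It assumes the 1.x-era _cat API with the filter_cache columns, so double-check the column names on your version:

  curl 'localhost:9200/_cat/nodes?v&h=name,filter_cache.memory_size,filter_cache.evictions'

Watching those two columns over time is enough to see whether the filter cache is being evicted under load.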

Any gurus out there who can offer some guidance? We are at the end of our rope here, and may have to consider moving off ES if we can't get our cluster to handle the current load more efficiently.

Thanks!

J

You're using nested queries, and they use a bitset internally to remember which docs are the parents. That bitset is held in the filter cache, so evictions would be undesirable.

Thanks for the reply. So should we increase the size of the filter cache, or is it really best to restructure the document so it doesn't require so many nested queries? Again, our queries/requests are such that it is highly unlikely that the same query will be reused, so filter caching (cache misses, really) would hurt performance. These are all array term-matching queries.

Here is a link to our mapping => https://gist.github.com/jaydanielian/80f21ccfdb57f3d6d527#file-gistfile1-json

Our main use case is searching for contacts by last_name and (email or phone). These values can be arrays; usually we are searching for a single last_name with 1-2 emails and maybe 1-2 phones at most.
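
For context, the shape of a typical request looks roughly like the sketch below; the index name, field names, and routing value are illustrative stand-ins, not the exact names from the mapping gist above:

  curl -XGET 'localhost:9200/contacts/_search?routing=12345' -d '{
    "query": {
      "bool": {
        "must": [
          { "term": { "last_name": "smith" } },
          {
            "bool": {
              "should": [
                { "nested": { "path": "emails", "query": { "term": { "emails.address": "jsmith@example.com" } } } },
                { "nested": { "path": "phones", "query": { "term": { "phones.number": "5550100" } } } }
              ]
            }
          }
        ]
      }
    }
  }'

Each nested clause is what relies on the cached parent bitset discussed above.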

Thanks!

J

The filter you need to retain is the bitset that identifies all of the parents, as it is information that is reused by every nested query, regardless of the individual search terms being queried within them. This slide deck outlines the bitset's central role: http://www.slideshare.net/MarkHarwood/proposal-for-nested-document-support-in-lucene
It's an old deck and Elasticsearch varies slightly in implementation, but the principle is the same.

If you keep evicting it, every nested query will have to rebuild it.

OK, that would explain high CPU spikes under what seems to be a normal level of load. I guess we should try to really bump up our RAM allocation for the filter cache then? Which I believe is this setting => indices.cache.filter.size

Right now it is at the default (10%?). Judging from the output of the BigDesk plugin, the filter cache is capped at 200MB. Any thoughts on what size we should use so we can properly cache the top-level parent bitset for our index size (118 GB, 820 million docs total, 6 shards)?

Also, I assume I need to restart the node for the new cache.filter.size setting to take effect - correct?

I really appreciate your input here, as we were at a total loss on how to fix this.

Thanks!!

J

The cost is obviously 1 bit per doc (remember to count both root and nested docs), with additional overhead if you do a lot of updates/deletes: redundant docs aren't physically removed until merge operations purge the old content. The stats APIs should report both live and deleted docs, so use those figures for the number of bits.
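
As a rough worked example of that sizing (this treats the stats figures as Lucene-level doc counts, i.e. root plus nested plus deleted docs, which is worth double-checking on your version):

  curl 'localhost:9200/_stats?pretty'

  # In the "docs" section, count + deleted gives the total number of bits to budget for.
  # The 820 million docs mentioned above already work out to roughly
  # 820,000,000 bits / 8 = ~100 MB, and any nested and deleted docs not included
  # in that figure push it higher.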

Just to complete this in case anyone searches for it in the future: the short story is that cache evictions can be a cause of high CPU and poor query performance.

Using your formula, I determined I currently need about 400MB of filter cache. Previously it seemed to be capped at around 200MB, so I set our filter cache size to 850MB via this command:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "indices.cache.filter.size" : "850mb"
  }
}'
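
(For anyone following along, you can confirm the new value was applied by reading the cluster settings back; the 850mb value should appear under the "persistent" block.)

  curl 'localhost:9200/_cluster/settings?pretty'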

Once that went into effect, the same queries that used to make the CPU spike and hold for the duration of execution now produce only small, normal blips in CPU. Our queries are returning in sub-second fashion again, and cache evictions are at 0.

I just want to follow up and thank you for this tip. I would have never discovered this without your insight - so thanks a million!!

J

No worries. Glad it worked out for you :slight_smile:

For nested queries, you might also want to keep an eye on the id_cache metric:

$ curl 'localhost:9200/_cat/nodes?v&pretty&h=id_cache.memory_size'
id_cache.memory_size 
                  0b 
                  0b 
                  0b 

hth

jason