Horrible indexing performance as index size grows

Let me first say that, due to an API we use, we are currently locked to ES 0.90.12, so it would be great if any advice you have applies to that version of ES.

We have an ES cluster with two nodes; each node is a CentOS VM with 32 GB RAM, a 4-core Xeon CPU and dedicated SSD storage.

Our index is set up with 5 shards and 1 replica, and is fast approaching 50 GB.

The index serves a webshop, so products and site content are indexed, but the vast majority of the index consists of order history: the accumulated order history for all users (web orders, shop orders etc.). New order history is imported every night. Each order-history item is an object containing order-head info and an array of order-line objects, nothing too heavy, but at the moment we have 3.5 million of them indexed.

We're using bulk indexing, 500 documents at a time.
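Roughly, a bulk batch from our importer looks like this (index, type and field names are simplified placeholders, not our exact code):

```python
import json
import requests

ES_URL = "http://localhost:9200"  # one of the two nodes

def bulk_index(docs, index="webshop", doc_type="orderhistory"):
    """Send one _bulk request (ES 0.90 style, with a type per document)."""
    lines = []
    for doc in docs:
        # Action metadata line followed by the document source.
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    body = "\n".join(lines) + "\n"  # the bulk body must end with a newline
    resp = requests.post(ES_URL + "/_bulk", data=body)
    resp.raise_for_status()
    return resp.json()

# 500 documents per batch, e.g. simplified order-history items.
batch = [{"order_no": i, "lines": [{"sku": "A-1", "qty": 1}]} for i in range(500)]
bulk_index(batch)
```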

This was all running well until the index grew to about 40 GB; then we started to see a drastic decrease in indexing performance. Trying to run the indexing job now (at 50 GB) gives barely any throughput, along with an unresponsive cluster. From the bigdesk plugin, I can see we have huge CPU spikes when indexing, related (at least in part) to GC.

Things we've tried:

  • Upping the index refresh interval to 30s (we have no need for immediate visibility of newly indexed content).
  • Upping heap memory from 8 to 12 GB.
  • Upping indices.memory.index_buffer_size to 25% (roughly how we applied these is sketched below).
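For reference, roughly how we applied these (the index name is a placeholder; the heap and index buffer are node-level settings, so those were changed outside the API and the nodes restarted):

```python
import requests

ES_URL = "http://localhost:9200"
INDEX = "webshop"  # placeholder index name

# Dynamic index setting: refresh less often so Lucene creates fewer tiny segments.
requests.put(
    ES_URL + "/" + INDEX + "/_settings",
    data='{"index": {"refresh_interval": "30s"}}',
)

# Node-level changes (not settable through the index API):
#   heap size         -> ES_HEAP_SIZE=12g in the environment / init script
#   index buffer size -> indices.memory.index_buffer_size: 25% in elasticsearch.yml
# Both require a node restart to take effect.
```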

These changes seem to have helped a bit, but we still have very low performance compared to what we had when the index was smaller.

So that makes me wonder:

  • Isn't our hardware adequate for this kind of scenario? From what I read, people seem to have done more with less.
  • Are there any further configuration changes we should try? We would be happy to trade some search-performance for indexing-performance.

Any help with this would be greatly appreciated.

One of the issues with ES is that it isn't always clear when something uses heap memory. My guess, without seeing the mapping, is that you are using the completion suggester. That uses heap memory that scales roughly with the size of the index. Is that the case?

We're querying ES through the API I mentioned, which is a component of a major CMS. It provides a typed search, but I'm unsure what queries it ends up sending to ES. We do have autocomplete functionality, but this is implemented by showing the top 10 search results for each new character the user enters in the search box.

I could provide the _mapping output if that would be helpful?

I forgot to mention that _segments shows there are about 400 segments in the index now. Would a force merge/optimize help?
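In case it matters, the call I'd be running is roughly this (index name is a placeholder; in 0.90 the endpoint is _optimize, which later versions renamed to _forcemerge):

```python
import requests

ES_URL = "http://localhost:9200"
INDEX = "webshop"  # placeholder index name

# Ask Lucene to merge each shard down to at most 5 segments.
# Note: this triggers heavy I/O and CPU, so it would have to run off-peak.
resp = requests.post(ES_URL + "/" + INDEX + "/_optimize",
                     params={"max_num_segments": 5})
print(resp.json())
```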

Showing the mapping would be useful

Like @nik9000 said, I think it's going to be an over-consumed heap. If you look, you'll likely find:

  1. Total heap usage hovering anywhere from near 65% to considerably over it, all the time
  2. Frequent garbage collection, with very little memory being freed

These conditions would both contribute to slower indexing, but are actually the real problem. Slow indexing is just a symptom.
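A quick way to check both points is to poll the nodes stats API; a sketch below (the exact field names and query flags may differ slightly between 0.90 and later versions):

```python
import requests

# Poll heap usage per node; watch for it sitting near or above ~65-75% of the
# committed heap, and for GC runs that free very little memory.
stats = requests.get("http://localhost:9200/_nodes/stats",
                     params={"jvm": "true"}).json()

for node_id, node in stats["nodes"].items():
    mem = node["jvm"]["mem"]
    used = mem["heap_used_in_bytes"]
    committed = mem["heap_committed_in_bytes"]
    print("%s: heap %.0f%% of committed" % (node.get("name", node_id),
                                            100.0 * used / committed))
```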

A recommended solution would be adding more nodes to your cluster. This spreads the load that is filling the heaps of 2 nodes across more JVMs.

@forloop
@nik9000

Here is the mapping output, as requested: http://ge.tt/1DXmY8f2

Alternative link that doesn't warn you about viruses: http://expirebox.com/download/a48761be1b70af2f3f04663a8c39d69b.html

@theuntergeek

Heap sits at about 7.5 GB of 12 GB under normal conditions, with low GC, but shoots up (and triggers heavy GC) when we start the indexing job. The bigdesk graph of heap size looks like a spiky mountain range during indexing.

Due to the cost, I suspect adding more nodes to the cluster is something we can only do if we can prove it would solve our problem.

The nodes have 32 GB of memory, so I guess I could allocate more to the heap, but I've read that GC is very slow for big heaps, so perhaps that's not the solution?

All our indexing requests are sent to one of the nodes (as opposed to dividing them between both nodes). Does this affect performance much? What about if we add more nodes to the cluster?
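(What I had in mind is something like this client-side round robin; the host names below are placeholders:)

```python
import itertools
import requests

# Rotate bulk requests across both nodes instead of always hitting the same one.
HOSTS = itertools.cycle(["http://es-node1:9200", "http://es-node2:9200"])

def send_bulk(body):
    """Send a pre-built bulk body to the next node in the rotation."""
    resp = requests.post(next(HOSTS) + "/_bulk", data=body)
    resp.raise_for_status()
    return resp.json()
```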

You should use up to 50% of the system RAM for the Elasticsearch heap. That extra 4 GB may help you some in the short term. GC only gets really slow once your heap exceeds about 30.5 GB (the compressed-oops threshold), so if you can upgrade the RAM on these servers to 64 GB, you could increase the heap to 30.5 GB for even more performance.

Otherwise, more nodes will be the best fix for your current issue.

At a 12 GB heap size we're experiencing GC times of upwards of 40 seconds; wouldn't this get worse if the heap is bigger?

[2016-09-28 21:53:52,952][WARN ][monitor.jvm ] [nofainmetaprod02] [gc][old][40084][37] duration [39.9s], collections [1]/[40.2s], total [39.9s]/[16.3m], memory [11.2gb]->[7.6gb]/[11.9gb], all_pools {[young] [8mb]->[8.2mb]/[532.5mb]}{[survivor] [66.5mb]->[0b]/[66.5mb]}{[old] [11.2gb]->[7.6gb]/[11.3gb]}

You have a simple design flaw:

  • two nodes
  • five shards, one replica

So one node must take the burden of three primary shards while the other runs two primary shards.

You should fix that by balancing it out, and all things will be smooth.

I recommend three nodes with three shards and one replica, or six shards with one replica. Either setup would be balanced.

Never use only two nodes; the split-brain risk is evident (with two master-eligible nodes, discovery.zen.minimum_master_nodes cannot be set to a majority that still tolerates losing a node).

If you really want to keep the extra burden of an uneven number of primary shards on such a poor node, then you should decrease the maximum segment size (index.merge.policy.max_merged_segment), which is 5 GB by default in 0.90, to say 1 GB. With a smaller maximum, merging takes much less RAM, runs quicker, and can keep up with the indexing pace for longer as your data set grows. But the number of segments will increase significantly, because you have only 4 cores per node, and search response times will also be slower.
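A rough sketch of applying that setting (the index name is a placeholder; depending on the exact 0.90 point release it may have to be set at index creation time rather than as a live update):

```python
import requests

ES_URL = "http://localhost:9200"
INDEX = "webshop"  # placeholder index name

# Lower the tiered merge policy's maximum segment size from the 5 GB default
# to 1 GB, so merges stay smaller and cheaper at the cost of more segments.
requests.put(
    ES_URL + "/" + INDEX + "/_settings",
    data='{"index.merge.policy.max_merged_segment": "1gb"}',
)
```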

Thanks, that is really helpful! Will try adding another node and setting up a new index.

@jprante

By the way, what happens to the existing segments when you reduce the maximum segment size? Will they automatically split, or is there a way to force-split the 5 GB segments into, say, 1 GB segments? I couldn't find much info on this...
