Let me first say that due to an API we use, we are at the moment locked to ES 0.90.12, so it would be great if any advice you have applies to that version of ES.
We have an ES cluster with two nodes; each node is a CentOS VM with 32 GB RAM, a 4-core Xeon CPU and dedicated SSD storage.
Our index is set up with 5 shards and 1 replica, and is fast approaching 50 GB.
The index serves a webshop, so products and site content are indexed, but the vast majority of the index consists of order history: the accumulated order history for all users (web orders, shop orders etc.). New order history is imported every night. Each order-history item is an object containing order header info and an array of order-line objects, nothing too heavy, but at the moment we have 3.5 million of them indexed.
We're using bulk indexing, 500 documents at a time.
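For context, the nightly import effectively does something like the following (a minimal sketch only; the real job goes through the CMS API, the index/type/field names are placeholders, and I'm assuming a Python elasticsearch client version compatible with ES 0.90.x):

```python
# Hypothetical sketch of the nightly bulk import; names are placeholders.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://es-node1:9200"])  # placeholder host

# Example order-history items: header info plus an array of order lines.
orders = [
    {"order_id": "1001", "customer": "42", "channel": "web",
     "lines": [{"sku": "A-1", "qty": 2}, {"sku": "B-7", "qty": 1}]},
]

actions = (
    {
        "_index": "webshop",       # placeholder index name
        "_type": "orderhistory",   # 0.90.x still uses mapping types
        "_id": o["order_id"],
        "_source": o,
    }
    for o in orders
)

# helpers.bulk splits the stream into bulk requests; 500 matches our batch size.
helpers.bulk(es, actions, chunk_size=500)
```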
This was all running well until the index grew to about 40 GB; then we started to see a drastic decrease in indexing performance. Trying to run the indexing job now (at 50 GB) gives barely any throughput, along with an unresponsive cluster. From the bigdesk plugin I can see we have huge CPU spikes when indexing, related (at least in part) to GC.
Things we've tried:
Upping the index refresh interval to 30s (we have no need for immediate visibility of newly indexed content).
Upping heap memory from 8 to 12 GB.
Upping the index buffer size to 25%.
These changes seem to have helped a bit, but we still have very low performance compared to what we had when the index was smaller.
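For reference, this is roughly how we're applying the index-level part of those changes (a sketch with the Python client; the index name is a placeholder, and the heap size and index buffer size are node-level settings, set via ES_HEAP_SIZE and indices.memory.index_buffer_size in elasticsearch.yml, so they need a restart):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://es-node1:9200"])  # placeholder host

# Relax near-real-time refresh: newly indexed orders don't need to be
# visible immediately, so refresh every 30s instead of the default 1s.
es.indices.put_settings(
    index="webshop",  # placeholder index name
    body={"index": {"refresh_interval": "30s"}},
)
```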
So that makes me wonder:
Shouldn't our hardware be adequate for this type of scenario? From what I've read, people seem to have done more with less.
Are there any further configuration changes we should try? We would be happy to trade some search performance for indexing performance.
One of the issues with ES is that it isn't always clear when something uses heap memory. My guess without seeing the mapping is that you are using the completion suggester. That uses heap memory that kind of scales with the size of the index. Is that the case?
We're querying ES through the API I mentioned, which is a component of a major CMS. It provides a typed search, but I'm unsure what queries it ends up sending to ES. We do have autocomplete functionality, but this is implemented by showing the top 10 search results for each new character the user enters in the search box.
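I don't know the exact query the CMS generates, but per keystroke the autocomplete amounts to something like this (purely illustrative; the index and field names are made up, and the actual query shape may differ):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://es-node1:9200"])  # placeholder host

# Illustrative only: fetch the top 10 product hits for the text typed so far.
results = es.search(
    index="webshop",  # placeholder index name
    body={
        "query": {"match_phrase_prefix": {"name": "blue sneak"}},
        "size": 10,
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_source"].get("name"))
```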
I could provide the _mapping output if that would be helpful?
Heap sits at about 7.5 of 12 GB under normal conditions, with low GC, but shoots up (and triggers heavy GC) as we start the indexing job. The bigdesk-diagram of the heap size looks like a spiky mountain range during indexing.
Adding more nodes to the cluster is something I suspect we can only do if we can prove it would solve our problem, because of the cost.
The nodes have 32 GB of memory, so I guess I could allocate more to the heap, but I've read that GC is very slow for big heaps, so perhaps that's not the solution?
Our indexing requests are all sent to one of the nodes (as opposed to dividing them between both nodes); does this affect performance much? What about if we add more nodes to the cluster?
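In case it matters for the answer: spreading the requests client-side would just mean pointing the client at both nodes, roughly like this (a sketch assuming the Python client, which round-robins across the listed hosts; hostnames are placeholders):

```python
from elasticsearch import Elasticsearch

# Listing both nodes lets the client rotate requests between them instead of
# funnelling every bulk request through a single node.
es = Elasticsearch(["http://es-node1:9200", "http://es-node2:9200"])
```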
You should use 50% of the system RAM for Elasticsearch. That extra 4 GB may help you some in the short term. GC is only slow if your heap is over 30.5 GB, so if you can upgrade the RAM on these servers to 64 GB, you could increase the heap to 30.5 GB for even more performance.
Otherwise, more nodes will be the best fix for your current issue.
So one node must take the burden of three primary shards while the other runs two primary shards.
You should fix that by balancing it out, and things will run smoothly.
I recommend three nodes with either three shards and one replica, or six shards and one replica; that would balance out evenly.
Never use just two nodes; the split-brain risk is evident.
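For illustration, the six-shard layout would be set at index-creation time, roughly like this (a sketch; shard count can't be changed on an existing index in 0.90, so this means reindexing into a new index, and the names are placeholders):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://es-node1:9200"])  # placeholder host

# Shard count is fixed at creation time in 0.90, so a new index is needed
# and the existing data has to be reindexed into it.
es.indices.create(
    index="webshop_v2",  # placeholder name for the reindexed copy
    body={"settings": {"number_of_shards": 6, "number_of_replicas": 1}},
)
```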
If you really want to keep putting the extra burden of an uneven number of primary shards on such a poorly provisioned node, then you should decrease the maximum segment size, which is 5 GB by default in 0.90, to say 1 GB. With a smaller maximum, merging takes much less RAM, runs quicker, and can keep up with the indexing pace for longer as your data set grows. But the number of segments will increase significantly, especially since you have only 4 cores per node, and search response times will also be slower.
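If you go that route, the change would look roughly like this (a sketch; I'm assuming index.merge.policy.max_merged_segment of the default tiered merge policy can be updated dynamically on 0.90, and the index name is a placeholder):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://es-node1:9200"])  # placeholder host

# Lower the maximum merged-segment size from the 5 GB default to 1 GB so
# merges stay small and cheap while the nightly bulk job runs.
es.indices.put_settings(
    index="webshop",  # placeholder index name
    body={"index": {"merge": {"policy": {"max_merged_segment": "1gb"}}}},
)
```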
By the way, what happens to the existing segments when you reduce the maximum segment size? Will they automatically split, or is there a way to force-split the 5 GB segments into, say, 1 GB segments? I couldn't find much info on this...