I have a fairly simple Elasticsearch Docker stack with about 4 indexes; one holds ~44M documents. I've tried everyone's suggestions from this forum, like increasing the heap space, and I even gave the container 6 CPUs. No matter what I add, ES finds a way to eat all of the resources. Can anyone offer additional pointers that have worked for them? For what it's worth, the hot threads dump shows:
```
::: {1CFsBFV}{1CFsBFVXTWqc4bOa8ugrsQ}{fpmCAvUASEu_BMrVjOkXyA}{172.18.0.2}{172.18.0.2:9300}
   Hot threads at 2018-08-03T00:31:44.670Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

   91.5% (457.6ms out of 500ms) cpu usage by thread 'elasticsearch[1CFsBFV][[index_xxxxxx][3]: Lucene Merge Thread #0]'
     5/10 snapshots sharing following 29 elements
       org.apache.lucene.codecs.lucene54.Lucene54DocValuesProducer$4.get(Lucene54DocValuesProducer.java:524)
       org.apache.lucene.util.LongValues.get(LongValues.java:45)
       org.apache.lucene.index.SingletonSortedNumericDocValues.setDocument(SingletonSortedNumericDocValues.java:52)
       org.apache.lucene.codecs.DocValuesConsumer$SortedNumericDocValuesSub.nextDoc(DocValuesConsumer.java:449)
       org.apache.lucene.index.DocIDMerger$SequentialDocIDMerger.next(DocIDMerger.java:100)
       org.apache.lucene.codecs.DocValuesConsumer$3$1.setNext(DocValuesConsumer.java:511)
       org.apache.lucene.codecs.DocValuesConsumer$3$1.hasNext(DocValuesConsumer.java:491)
       org.apache.lucene.codecs.DocValuesConsumer$10$1.hasNext(DocValuesConsumer.java:1019)
       java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1811)
       java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:294)
       java.util.stream.StreamSpliterators$WrappingSpliterator$$Lambda$1542/350734423.getAsBoolean(Unknown Source)
```
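In case my setup matters: this is roughly how I'm capping heap and CPU in docker-compose. Treat it as a sketch, not my exact file; the image tag, service name, and resource values here are illustrative, not copied from my deployment:

```yaml
version: "2.2"
services:
  elasticsearch:
    # Illustrative image tag, not necessarily the version I'm running
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    environment:
      # Pin the JVM heap to a fixed size (min == max) instead of letting it grow
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"
      # Keep the heap from being swapped out
      - bootstrap.memory_lock=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
    # Container-level caps; these are the knobs I've been raising
    cpus: 6
    mem_limit: 8g
```

The heap cap only bounds the JVM heap; Lucene merges like the one in the dump above also hit the page cache and CPU outside of it, which is why the container still saturates whatever I give it.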