I ran some heap scaling tests to find out the required heap for my workload.
My config: an ES 1.0.0.Beta2 cluster, 3 RHEL 6.3 nodes, Java 1.8.0-ea (JVM
25.0-b56), 4 GB heap, G1 GC (plus some tuning for segment merging and bulk).
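The post doesn't list the exact startup flags; as a rough sketch, a 4 GB heap with G1 on that era of JVM would be set with something like the following (the segment-merge and bulk tuning mentioned above is not shown, and how the flags are wired into the Elasticsearch launch script is an assumption here):

```shell
# Hypothetical JVM settings, not the author's actual config:
# a fixed 4 GB heap (min = max to avoid resizing) and the G1 collector.
JAVA_OPTS="-Xms4g -Xmx4g -XX:+UseG1GC"
```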
Workload: mixed, a scan/scroll query over 1.6m docs plus term queries over
20m docs (unknown queries per second, but more than 5000), combined with
bulk indexing at 5000 docs per second.
The 4 GB exercise ended in OOM on all nodes after about an hour, with all
kinds of error messages. The cluster restarted fine afterwards, so no harm
done. After increasing the heap to 6 GB, the same exercise completed
successfully in 52 minutes.
I just want to share the OOM logs with anyone who might be interested in
having a look, because they are so pretty.
FYI, I'm considering a shard-level memory watchdog that could detect a low
free heap condition in time and return warnings to the bulk client, so the
client could throttle, suspend, or exit the indexing cleanly before OOMs
start to break out in the cluster, with all the attendant risk of crashing
shards or node dropouts. Surely not an exact science, but with some
heuristics it should work (e.g. below a threshold of 10 MB free heap, no
further execution of the indexing engine would be allowed).
Would love to have more time for testing the exciting new 1.0.0.Beta2
features, but right now I'm just happy to run my data reconciliations.
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/CAKdsXoEV9HpzbT5HzdG5s0KspvQC%2Ba153-iTkue4QvRrCiTVxw%40mail.gmail.com.