Is it normal to have 6 ES nodes with 100GB heap size each on 6x 192GB RAM servers?

Our production ES cluster has 10 nodes:

  • 6 x 192 GB RAM for hot indices (data within 2 weeks)
  • 2 x 32 GB RAM for cold indices (data older than 2 weeks)
  • 2 x 32 GB RAM client/master-only nodes

  • Running ES 1.7.1
  • Application logs are stored in daily indices of around 70GB each; high EPS, I guess.
  • Each node with 192GB RAM runs an ES instance with a 90GB heap and index.store.type: memory. Hot indices are held in RAM only; cold indices are moved to the 2 cold ES nodes.
  • Data in ES is used solely for full-text search, and each message is relatively large.

Some of my questions:

  1. Is it a good setup/config to have ES node with 90GB of ES_HEAP_SIZE even when using index.store.type: memory?
  2. Would it be better to switch to SSD and run more ES instances on 6x 192GB RAM servers with ES_HEAP_SIZE 31G?
  3. Any suggestion on setting up a cluster that indexes around 70 to 100GB per day for 2 weeks and provides fast searching/querying?

I keep seeing recommendations to keep ES_HEAP_SIZE <= 30.5G; how does that apply to my case with 90G? Unfortunately, I haven't found enough information on sizing a cluster like this.

Thanks,

No.

No. See your own later comment about 30.5GB; we state that limit for a reason.

How about this one?

This recommendation was useful on JDK 7 but as long as you're on JDK 8, you can get very close to the 32GB boundary and not exceed the limit for compressed oops (for example, 31900m should be fine). In Elasticsearch 2.2.0, we now print on startup whether or not you've crossed the threshold.

However, do note that your goal should be to minimize the size of the heap, not maximize it.
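Concretely, with the stock packaging on ES 1.x/2.x (which reads the ES_HEAP_SIZE environment variable), that looks something like the sketch below; 31g is an example value under the compressed-oops cutoff, not a universal recommendation:

```shell
# Sketch: keep the heap under the compressed-oops cutoff and leave the rest
# of the machine's RAM to the OS filesystem cache, which Lucene uses heavily.
export ES_HEAP_SIZE=31g
```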

So index.store.type: memory is no longer supported as of ES 2.0?
I'm on JRE 8 now. Is there any difference between running ES on a JRE vs. a JDK?

Correct, it is removed.
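For what it's worth, the usual replacement for memory-backed hot indices is fast local storage (SSD) plus shard allocation filtering, so that recent indices stay on the hot nodes and age out to the cold ones. A sketch, assuming a custom node attribute named box_type and illustrative index names (both are placeholders, not fixed names):

```shell
# elasticsearch.yml on the six hot nodes:
#   node.box_type: hot
# elasticsearch.yml on the two cold nodes:
#   node.box_type: cold

# Pin a fresh daily index to the hot tier...
curl -XPUT 'localhost:9200/logs-2016.03.01/_settings' -d '{
  "index.routing.allocation.require.box_type": "hot"
}'

# ...and after two weeks, retag it so its shards migrate to the cold nodes.
curl -XPUT 'localhost:9200/logs-2016.02.16/_settings' -d '{
  "index.routing.allocation.require.box_type": "cold"
}'
```

The allocation setting is dynamic, so the retag triggers shard relocation without a restart; in practice you'd apply the "hot" setting via an index template and script the retagging.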

There's no difference from a runtime perspective, but the JDK comes with tools that are occasionally useful for debugging.

We are going to build an ES cluster that holds a daily index of 60 to 100 GB and keeps daily indices for at least two weeks. ES receives application logs from 70+ servers and serves full-text search. Do you have any recommendations on the number of nodes and their configuration?

We also have another cluster with the same amount of data, used primarily for aggregations in Kibana. Any suggestions?

The community here is very happy to answer very targeted questions, but that question is far too broad (and lacks specific requirements).