Hi,
The ES_DEV cluster runs 2 JVMs on each of the 3 servers: 1 master node and 1 data node per server. Server 1 also runs a 3rd JVM that acts as a client node, so the layout is:
Server 1: master, data & client
Server 2: master & data
Server 3: master & data
I have configured the Elasticsearch servers (ES 1.7) with Java 8. Each server has 256 GB of RAM and 48 CPU cores, and ES_HEAP_SIZE is set to 16 GB. We have created 4 indices with 5 shards each; replicas are set to 0 during bulk indexing and explicitly changed to 1 in the yml config file after the bulk indexing finishes (see the sketch after the stats below).
Total documents: 100 million
Primary size: 750 GB
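Roughly, the per-index setup looks like the sketch below. This is only a minimal illustration using the Python elasticsearch client; the host and the index name my_index are placeholders, and the replica change is shown here via the update-settings API as an alternative to the yml edit mentioned above.

from elasticsearch import Elasticsearch

# Placeholder host and index name; adjust for the real cluster.
es = Elasticsearch(["http://server1:9200"])

# Create the index with 5 primary shards and no replicas for the bulk load.
es.indices.create(
    index="my_index",
    body={
        "settings": {
            "number_of_shards": 5,
            "number_of_replicas": 0,
            "refresh_interval": "-1",  # refresh disabled while bulk indexing
        }
    },
)

# ... bulk indexing runs here ...

# After the bulk load, enable one replica and turn refresh back on.
es.indices.put_settings(
    index="my_index",
    body={"index": {"number_of_replicas": 1, "refresh_interval": "1s"}},
)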
My config file details:
index.refresh_interval: -1
action.disable_delete_all_indices: true
indices.fielddata.cache.size: 75%
indices.breaker.fielddata.limit: 85%
bootstrap.mlockall: true
http.max_content_length: 500mb
I have also enabled doc values for the not_analyzed fields in the mappings.
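For illustration, a minimal sketch of such a mapping in the ES 1.x format ("my_type" and "status" are placeholder names, not the real ones):

# Sketch of an ES 1.x mapping with doc values on a not_analyzed string field.
mapping = {
    "my_type": {
        "properties": {
            "status": {
                "type": "string",
                "index": "not_analyzed",
                "doc_values": True,
            }
        }
    }
}
# Applied at index-creation time, e.g.
# es.indices.create(index="my_index", body={"settings": {...}, "mappings": mapping})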
My questions:
- Is this the correct way to set up the cluster?
- What is the ideal number of shards?
- Should I change any config settings to improve bulk indexing and search query performance?
- Will ES_HEAP_SIZE=16 GB help resolve the out-of-memory errors?
Please also suggest any other config settings that would improve ES performance.
Thanks,
Ganesh