The two other `overrides` fields that have proven useful so far are:
"resources": {
"cpu": {
"hard_limit": true
}
},
If you set `hard_limit` to `false`, then the cluster gets the entire CPU of the host instead of a subset determined by its size. Obviously this is not without risk to the overall platform's stability.
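For example, a sketch of the full override, assuming the same `overrides` wrapper as the `factor` example below:

```
"overrides": {
    "resources": {
        "cpu": {
            "hard_limit": false
        }
    }
},
```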
"overrides": {
"resources": { // (needed for 1.1+)
"cpu": {
"factor": 1.3
}
}
},
A safer way of boosting the CPU for a given cluster is by setting `factor`, e.g. to double the CPU available, change it from `1.3` (the default) to `2.6`.
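A sketch of that doubled override, following the same structure as above:

```
"overrides": {
    "resources": {
        "cpu": {
            "factor": 2.6
        }
    }
},
```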
(At the same level as `factor` is `processors`, an integer which determines how many processors Elasticsearch believes its container has, and so changes the thread pool defaults etc. We haven't found any case where this makes a noticeable difference over the hand-curated defaults for each cluster size.)
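To illustrate where `processors` sits, a sketch with a purely illustrative value of `4` (not a recommendation):

```
"overrides": {
    "resources": {
        "cpu": {
            "factor": 1.3,
            "processors": 4
        }
    }
},
```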