Hey, I've been struggling to find good settings for a while and I'm having heap issues. Since Jan 1st I have had daily logstash indices, and I'm now running around 650 shards ????
Daily indices are around 3 MB each.
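From what I've read, an index template could cap the shard count for future daily indices, since 3 MB per day doesn't need the default five primaries plus replicas. A sketch of what I mean (template name is mine, ES 6.x syntax assumed):

```shell
# Sketch: force future logstash-* indices to one primary shard and
# no replicas (replicas can never be assigned on a single node anyway).
curl -XPUT 'localhost:9200/_template/logstash_single_shard' \
  -H 'Content-Type: application/json' -d '{
    "index_patterns": ["logstash-*"],
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }'
```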
My config is one node, with:
bootstrap.memory_lock: true
xpack.ml.enabled: false
action.auto_create_index: true
xpack.monitoring.enabled: false
xpack.security.enabled: false
Java heap is -Xms470m -Xmx470m.
I would like to keep only the last 5 days of indices in memory, for instance, and leave the rest on disk at rest. My usage is mostly querying data from the same or previous day; I very rarely go further back.
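A sketch of what I mean, assuming the standard `_close`/`_open` APIs (the index name is just an example): a closed index keeps its data on disk but its shards no longer consume heap.

```shell
# Sketch: close an old daily index so its shards stop using heap...
curl -XPOST 'localhost:9200/logstash-2018.04.28/_close'

# ...and reopen it on demand if I ever need to query that day again.
curl -XPOST 'localhost:9200/logstash-2018.04.28/_open'
```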
When I go below that Java heap I get errors and Elasticsearch crashes. Using 470m, I think the Pi is getting very hot within a week.... I already see the CPU usage increasing.
What would you suggest as configuration parameters? Here is an excerpt of my index stats:
{
  "_shards": {
    "total": 1254,
    "successful": 625,
    "failed": 0
  },
"_all": {
"primaries": {
"docs": {
"count": 1810472,
"deleted": 515911
},
"store": {
"size_in_bytes": 762656043,
"throttle_time_in_millis": 0
},
"indexing": {
"index_total": 8132,
"index_time_in_millis": 685743,
"index_current": 0,
"index_failed": 15
},
"get": {
"total": 3334,
"time_in_millis": 2560,
"logstash-2018.05.04": {
      "primaries": {
        "docs": {
          "count": 5293,
          "deleted": 165
        },
        "store": {
          "size_in_bytes": 2699851,
          "throttle_time_in_millis": 0
        },
"flush": {
"total": 5,
"total_time_in_millis": 0
},
"segments": {
"count": 25,
"memory_in_bytes": 228032,
"terms_memory_in_bytes": 197031,
"stored_fields_memory_in_bytes": 7048,
"term_vectors_memory_in_bytes": 0,
"norms_memory_in_bytes": 0,
"points_memory_in_bytes": 693,
"doc_values_memory_in_bytes": 23260,
"index_writer_memory_in_bytes": 0,
"version_map_memory_in_bytes": 0,
"fixed_bit_set_memory_in_bytes": 0,
"max_unsafe_auto_id_timestamp": 1525392003044,
"file_sizes": {}
},
"translog": {
"operations": 0,
"size_in_bytes": 430
},
"total": {
"docs": {
"count": 5293,
"deleted": 165
},
"store": {
"size_in_bytes": 2699851,
"throttle_time_in_millis": 0
},
"indexing": {
"index_total": 0
},
"merges": {
"current": 0,
"cache_count": 0,
"evictions": 0
},
"fielddata": {
"memory_size_in_bytes": 0,
"evictions": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 25,
"memory_in_bytes": 228032,
"terms_memory_in_bytes": 197031,
"stored_fields_memory_in_bytes": 7048,
"term_vectors_memory_in_bytes": 0,
"norms_memory_in_bytes": 0,
"points_memory_in_bytes": 693,
"doc_values_memory_in_bytes": 23260,
"index_writer_memory_in_bytes": 0,
"version_map_memory_in_bytes": 0,
"fixed_bit_set_memory_in_bytes": 0,
"max_unsafe_auto_id_timestamp": 1525392003044,
"file_sizes": {}
},
"translog": {
"operations": 0,
"size_in_bytes": 430
}
},
That's ok if I can't, but is it possible to list the shards in memory and to limit the system to the last XXX shards it uses? (If I can't limit indices, can I limit the current maximum number of active shards?)
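For example, something like this, assuming the `_cat` APIs I've seen in the docs:

```shell
# Sketch: list every shard with its index, state, and size
# (all open shards hold some heap for their segments).
curl 'localhost:9200/_cat/shards?v'

# Segment-level view, including the heap each segment uses.
curl 'localhost:9200/_cat/segments?v'
```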