I recently finished deploying an ES/Logstash cluster for a small
environment. It's a two-node local cluster but I'm getting horrible
performance and frequent crashes. I'll soon be standing up a cluster in
another environment that's roughly 10x the size of this first one so I've
got to figure out how to better optimize the clusters. Here are the current
resource and indexing stats:
8 CPU, 64GB RAM, 1TB storage
140 log sources, 21 indexed fields, ~35,000 messages per minute, ~60GB/day
At present, index/search performance is awful. If I search for any time
period that's larger than a day or so, ES will usually crash and require a
manual restart. I'm going to be standing up a second dedicated ES system
and I'll be configuring one host for indexing and the second for searching.
I'll also be enabling the mmapfs store, disabling the _all field, and
disabling storing and/or indexing on some of the fields. At least that's my
plan so far -- it makes sense in my head but I'm not sure if it will
actually be the most efficient solution.
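To make that concrete, something like the index template below is what I have in mind -- this is just a sketch against the mapping API as I understand it, and the `logstash-*` pattern and the `message` field are placeholders for whatever fields I end up not needing to search on:

```json
{
  "template": "logstash-*",
  "settings": {
    "index.store.type": "mmapfs"
  },
  "mappings": {
    "_default_": {
      "_all": { "enabled": false },
      "properties": {
        "message": { "type": "string", "index": "no" }
      }
    }
  }
}
```

The idea being that `_all` goes away everywhere via the `_default_` mapping, and any field marked `"index": "no"` stays retrievable from `_source` but never hits the inverted index. If I've misread how any of these settings behave, corrections welcome.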
If I'm doing something stupid or if anybody has other recommendations, do
let me know.