Hi everyone,
I am currently running a single Elasticsearch instance on a 2-core / 16 GB RAM / 250 GB machine. My question is: how much will this machine be able to handle before I need to expand the cluster?
Currently the setup is ingesting about 30,000 records a day, so the volume is pretty small.
I guess I'm just curious to know what a single Elasticsearch instance can handle before I have to expand.
Thanks,
Greg
This depends not only on your machine, but also on the documents you are indexing and on the mapping configuration (which determines how much data needs to be written to disk per document).
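To see how that plays out in practice, the cat indices API reports document counts and on-disk size per index, so you can watch the bytes-per-document ratio as data comes in (host and port below assume the defaults):

    curl -s 'localhost:9200/_cat/indices?v&h=index,docs.count,store.size,pri.store.size'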
You can check out our nightly benchmarks, which, however, run on much beefier machines, so the ultimate way to find out your ingestion rate is to grab Rally, our benchmarking tool, and run it with your data on your server.
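As a rough sketch, a benchmark-only run against an existing cluster looks something like this (the geonames track is just one of the bundled sample tracks; building a track from your own data is covered in the Rally docs, and the exact invocation may differ between Rally versions):

    # install Rally, then run a bundled track against your own cluster
    pip install esrally
    esrally --pipeline=benchmark-only --target-hosts=localhost:9200 --track=geonames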
Thanks Alex.
So as part of this, I think one major problem is my setup: I currently have 5,332 shards. Based on some reading I've been doing, this is a VERY bad approach.
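For reference, that number comes straight from the cluster health API (default host and port assumed):

    curl -s 'localhost:9200/_cluster/health?pretty'
    # "active_shards" in the response is the total of primary and replica shards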
I'm currently organizing data like so:
"Client-DataType-Day" - So, for example, a shard can look like this: "test-client-website-www.test.com-2017.07.08"
I think this is creating some serious issues with overhead.
Any recommendations on best practices for organizing shards?
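For anyone hitting the same wall, the usual advice is to collapse this into far fewer indices, e.g. one index per day with the client and data type stored as fields (plus filtered aliases on top, as discussed below), and to lower the per-index shard count with an index template. A minimal sketch, assuming a 5.x-era cluster and a made-up "logs-*" naming pattern:

    curl -XPUT 'localhost:9200/_template/logs' -H 'Content-Type: application/json' -d '
    {
      "template": "logs-*",
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 1
      }
    }'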
Alex,
So I actually went down this route, but now I have a question about aliases: is it possible to filter them by type? I'm currently trying to do it like this:
{
  "query": {
    "type": {
      "value": "ola"
    }
  }
}
But that does not seem to be working.
Thanks,
Greg
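In case it helps anyone who lands on this thread: a type filter normally gets attached when the alias is created, via the _aliases endpoint, rather than being sent as a query against the alias afterwards. A rough sketch (index and alias names here are made up, and note that the type query was deprecated in later Elasticsearch versions):

    curl -XPOST 'localhost:9200/_aliases' -H 'Content-Type: application/json' -d '
    {
      "actions": [
        {
          "add": {
            "index": "test-client-2017.07.08",
            "alias": "test-client-ola",
            "filter": { "type": { "value": "ola" } }
          }
        }
      ]
    }'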