How much data can one Standalone instance handle?

Hi everyone,
I am currently running one Elasticsearch instance on a machine with 2 cores, 16 GB of RAM, and 250 GB of disk. My question is: how much will this machine be able to handle before I need to expand to a cluster?

Currently, the setup is ingesting about 30,000 records a day, so the volume is pretty small.

I guess I'm just curious to know what one Elasticsearch instance can handle before I have to expand.
Thanks,
Greg

This not only depends on your machine, but also on the documents you are indexing and on the mapping configuration, which determines how much data needs to be written to disk per document.
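For example, a mapping along these lines (5.x syntax; the index, type, and field names here are made up for illustration) shows choices that change the on-disk footprint per document:

PUT logs-example
{
  "mappings": {
    "doc": {
      "properties": {
        "message": { "type": "text" },
        "client":  { "type": "keyword" },
        "payload": { "type": "keyword", "index": false, "doc_values": false }
      }
    }
  }
}

A text field is analyzed and indexed (the largest footprint), a keyword field stores single terms, and disabling index and doc_values on a field you never search or aggregate on keeps it out of those structures entirely.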

You can check out our nightly benchmarks, which, however, run on much beefier machines. So the ultimate way to find out your ingestion rate is to grab Rally, our benchmarking tool, and run it with your own data on your server.

--Alex

Thanks Alex.
So as part of this, I think one major problem is my setup: I currently have 5,332 shards. Based on some reading I've been doing, this is a VERY bad approach.

I'm currently organizing data like so:
"Client-DataType-Day" - So, for example, a shard can look like this: "test-client-website-www.test.com-2017.07.08"
I think this is creating some serious issues with overhead.

Any recommendations on best practices for organizing shards?

You could store all that data in a single index and make use of aliases with a filter.
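For example, something along these lines (a sketch: the index name client-data and the site field are placeholders for whatever your documents actually contain):

POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "client-data",
        "alias": "test-client-website-www.test.com",
        "filter": {
          "term": { "site": "www.test.com" }
        }
      }
    }
  ]
}

Queries against the alias then behave like queries against a dedicated index, but everything lives in one set of shards.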

Alex,
So I actually went down this route. But now I have a question about aliases. Is it possible to filter them by type? I'm currently trying to do it like this:
{
  "query": {
    "type": {
      "value": "ola"
    }
  }
}

But that does not seem to be working.
Thanks,
Greg

You can try filtering by _type. Also note that types will be deprecated in the future, so you might want to change your indexing strategy.
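For example, the alias filter could use a term query on the _type metadata field (again just a sketch; the index and alias names below are placeholders):

POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "client-data",
        "alias": "client-data-ola",
        "filter": {
          "term": { "_type": "ola" }
        }
      }
    }
  ]
}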

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.