Index Shard Number in Poor Performing Setup

(webish) #1

I'm seeing 40ms response times for term queries against a custom ID field,
such as myId: 1234.

I've inherited an Elasticsearch cluster and its application code. Currently
there are two nodes and two main indexes. Index "User" holds user profile
data and index "News" holds event data as a time series.

When running JMeter concurrency tests I'm seeing a very linear increase in
response time for any API that runs Elasticsearch queries. Response time
starts around 300ms and increases to 22s at a concurrency of 35!!!

There are many issues that have been uncovered.

Index "User" has 15 shards and 1 replica. ~2M docs

Index "News" has 15 shards and 1 replica. ~10M docs

The news index is going to be partitioned into smaller indexes, perhaps one
per day. That work has NOT begun yet.
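For anyone following along, my rough plan for the daily partitioning is an index template so each day's index picks up the same settings automatically. The template name, pattern, and shard count below are just placeholders I'm considering, using the legacy template API:

```
PUT _template/news-daily
{
  "template": "news-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```

The application would then write to a dated index such as news-2016-01-15, and queries could hit news-* or a narrower date range.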

However, I think the shard count is far too high. The default is something
like 5 shards and 1 replica. It's my understanding that a large number of
shards on a single node decreases performance significantly. In our case
that means 7 or 8 primary shards per index on each of the two nodes, and
15 shard copies per node per index once replicas are counted.
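For context, here is how I've been checking the actual shard layout per node, using the cat shards API:

```
GET _cat/shards/news?v
GET _cat/shards/user?v
```

Each row shows a shard, whether it is a primary (p) or replica (r), its document count and size, and which node it is allocated to.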

Is it possible for me to migrate the index to a new one with fewer shards?
Would this be recommended?
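To frame the question: my understanding is that shard count can't be changed on an existing index, so the route would be to create a new index with fewer shards and copy the data over (on Elasticsearch 2.3+ there is a Reindex API for this; older versions need a scan/scroll copy). The index name news-v2 and the shard count here are placeholders, not a recommendation:

```
PUT news-v2
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}

POST _reindex
{
  "source": { "index": "news" },
  "dest":   { "index": "news-v2" }
}
```

After the reindex completes, an alias could be switched from the old index to the new one so the application code doesn't need to change.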


You received this message because you are subscribed to the Google Groups "elasticsearch" group.
