Topology configuration in a medium-sized cluster

We are a small business running a medium-sized Elasticsearch v2.3.1 cluster, currently consisting of 7 nodes, each with 8 CPUs and 30 GB of RAM.

There are two types of data being stored:

  • Static indices of 20M records (about 20 GB each), 5 shards, 0 replicas per index
  • Streaming day-based indices of 25 GB each, 5 shards per index, 1 replica (spread over 6/7 nodes)
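For reference, per-index settings like these are usually applied through an index template so each new day-based index picks them up automatically. A minimal sketch in the ES 2.x template API (the template name `daily_logs` and the `logs-*` pattern are placeholders, not from the original post):

```json
PUT /_template/daily_logs
{
  "template": "logs-*",
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}
```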

At the moment our topology is simply making all of the servers master/data/client eligible and letting the cluster do its thing. No dedicated nodes for anything.

The cluster currently indexes 30 documents/second and serves up to 1,000 queries per second; the total size of the cluster is now around 3 TB, stored on HDDs.

However, our cluster is not performing very well, and we want to optimize it but are unsure what to do. The options as I see them are:

  • Add more nodes
  • Use dedicated master/client/data nodes (if yes, which topology?)
  • Use SSDs
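On the dedicated-nodes option: in ES 2.x a node's role is set with the `node.master` / `node.data` flags in `elasticsearch.yml`. A sketch of the three role combinations, assuming a common small-cluster split (3 dedicated masters, the rest data nodes); the values shown are illustrative, not a recommendation tested against this cluster:

```yaml
# Dedicated master-eligible node: manages cluster state, holds no shards
node.master: true
node.data: false

# Dedicated data node: holds shards, never becomes master
#node.master: false
#node.data: true

# Client (coordinating-only) node: routes and aggregates requests
#node.master: false
#node.data: false

# With 3 dedicated master-eligible nodes, quorum must be 2
# to avoid split-brain (ES 2.x setting):
discovery.zen.minimum_master_nodes: 2
```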

Any suggestion as to how we should proceed?

What does that mean exactly?
How are you measuring this?

What I was trying to say is that our cluster is currently not fulfilling our business requirements, and we need to improve/scale it (mainly searches taking too long; we have a lot of complicated queries).

But before just throwing more hardware at it, I'm trying to figure out if there's something that can be done at a lower level, like improving the topology.

Have you identified anything that is limiting performance? What do CPU utilization and disk I/O, e.g. iowait, look like?
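A few standard commands for answering that; this is a sketch assuming Linux with the sysstat package installed and Elasticsearch listening on localhost:9200 (adjust host/port for your setup):

```shell
# Extended device stats every 5s; watch %util and await on the HDDs,
# and %iowait in the CPU line
iostat -x 5

# Per-node heap, RAM, and load as Elasticsearch sees them (2.x _cat API)
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,load'

# Hot threads often show exactly where search time is being spent
curl -s 'localhost:9200/_nodes/hot_threads'
```

If iowait is consistently high while CPU is not saturated, that points at the HDDs rather than topology.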


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.