x1 Data Configuration across 3 zones = 3 nodes in total
x1 Kibana node
x1 APM node
Because we will run low on shard capacity in the coming months, we want to add another node, i.e. 4 Data Configuration nodes in total.
Playing around with the Elastic Calculator online, I am not sure how we do this, as we are already using 3 zones. What is the most cost-effective method? If I add just one more node to the current setup, it first entails increasing the RAM to 60GB, i.e. a fourfold increase in price, and then increasing the node count by +1, which becomes an eight-fold increase in price, as Data Configuration then goes to x6 nodes plus the x3 master nodes that are then required across the 3 zones.
Is there an easy way to do this? All we want is to add one more data node. We could even use just 2 zones instead of the current 3.
Are you approaching the limit in terms of number of shards per node? Is that why you need more nodes, rather than more capacity in the form of larger, more powerful nodes?
If that is the case, I recommend you read this blog post, as you have far too many small shards, which is very inefficient. I would recommend you reconsider your sharding scheme and look to reduce the shard count substantially.
Yes, shards per node. We previously received a Logstash error: "this cluster currently has [2000]/[2000] maximum shards open". We have since added a node, which has given us more shard headroom, but we are at 2,815 now, and I guess our limit will be 3,000.
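I assume the 3,000 comes from the cluster.max_shards_per_node setting, which I understand defaults to 1,000 per data node (so 3 data nodes = 3,000 open shards, replicas included). I guess something like this would confirm it:

```
GET _cluster/settings?include_defaults=true&filter_path=**.max_shards_per_node
```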
I read the blog, a lot of info. What exactly would you recommend we do?
I have no visibility into the performance metrics of our cluster; monitoring is not enabled. Can you please advise how to see, e.g., heap memory used by shards, retention period, size of shards, etc.?
GET /_cluster/allocation/explain?pretty doesn't give me much.
From what I can extrapolate, the doc is suggesting:
Use time-based indexing; I believe we are already doing this.
Would increasing RAM bolster the number of shards we can hold? As I understand it, 50% of RAM goes to the memory heap.
Are you suggesting we should use the shrink index API to achieve fewer shards?
If you have more than one primary shard per index, you can use the shrink index API to reduce the shard count. A common way to reduce shard count is to switch from daily to, e.g., weekly or monthly indices. You can also start using ILM and rollover to cut over to new indices based on a target size rather than just timestamps. If you have split your data across many small indices, e.g. by application or service, you can reduce the shard count by consolidating these.
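To make that concrete, here is a minimal sketch. The index name logs-2020.06.01, the node name instance-0000000001, and the 50gb target are all made up for illustration; max_primary_shard_size needs Elasticsearch 7.13+, and on older versions max_size is the closest equivalent:

```
# 1. Make the index read-only and route all its shards onto one node
PUT /logs-2020.06.01/_settings
{
  "index.blocks.write": true,
  "index.routing.allocation.require._name": "instance-0000000001"
}

# 2. Shrink 5 primaries down to 1 (the target count must be a factor
#    of the source count), clearing the temporary settings on the target
POST /logs-2020.06.01/_shrink/logs-2020.06.01-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.blocks.write": null,
    "index.routing.allocation.require._name": null
  }
}

# 3. Going forward, roll over on size rather than one index per day
#    (the policy still needs to be referenced from an index template
#    via index.lifecycle.name)
PUT _ilm/policy/logs
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      }
    }
  }
}
```

The shrink only helps per index; the bigger, lasting win usually comes from the rollover or weekly/monthly change, since that cuts how many new shards you create per time period.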
If you delete some data to get some headroom, you can change how you index new data so that you generate fewer indices and shards per time period. As data ages out of the system, the shard count will drop. If you are looking to keep your data for a long time, you may need to reindex the smaller indices into fewer, larger ones.
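If you do go the reindex route, a minimal sketch (again with made-up index names; the destination deliberately uses a different naming pattern so the source wildcard does not match it):

```
# Consolidate a month of daily indices into a single monthly index
POST _reindex
{
  "source": { "index": "logs-2020.06.*" },
  "dest":   { "index": "logs-2020-06" }
}

# After verifying doc counts, delete the daily indices to release
# their shards (wildcard deletes may require
# action.destructive_requires_name to be false)
GET _cat/count/logs-2020-06?v
DELETE /logs-2020.06.*
```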
As far as I can tell, you need fewer shards, not more heap or nodes.
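On the visibility question: without enabling Stack Monitoring, the _cat APIs give a reasonable first picture. A sketch, with column names taken from the _cat docs:

```
# Per-shard store size and placement, biggest first
GET _cat/shards?v&h=index,shard,prirep,store,node&s=store:desc

# Per-node heap usage
GET _cat/nodes?v&h=name,node.role,heap.percent,heap.max

# Per-index shard counts and total size
GET _cat/indices?v&h=index,pri,rep,docs.count,store.size&s=store.size:desc
```

For a rough sense of the numbers (hypothetical figures): 10 daily indices with 1 primary and 1 replica each create 20 new shards per day, roughly 7,300 per year; the same data in monthly indices creates 20 per month, i.e. 240 per year.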