429 Too many requests

Hi,
I am using an AWS Elasticsearch cluster with 3 m4.xlarge instances as master nodes and 2 c4.xlarge instances as data nodes. Our workload is write-intensive, mostly logging plus some other tasks driven by serverless functions (Lambda), and we are seeing 429 (Too Many Requests) responses. I thought the cluster was big enough for our needs; could someone look at the attached cluster stats and let me know whether I can improve performance without scaling up, or whether scaling up is the only solution?

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana C6Zu-tLsTe6XjobXUcHNNw 1 1 47 5 1.3mb 688.6kb
green open canonical okYxM_yfSi2NPTD2zyexTw 5 1 69 0 962.3kb 481.1kb
green open canonical2 C3ZlX9RTTqC2d_CrwiIc3g 5 1 87999 2774 105.3mb 52.6mb
green open canonicalreference URt7j7-5Sc2hszwAPt18Yg 5 1 1923626 27664 8.1gb 4gb
green open canonicalreferenced gKbdJ5DSS46nhTAT8kGNGQ 5 1 406085 46215 2gb 1gb
green open classifications hsGkGZTGTneqPCB8Mn2apQ 5 1 24265 0 9.6mb 4.8mb
green open cwl-2019-01-23 eeoWZ02GRWWrU-sLZjExFw 5 1 0 0 2.5kb 1.2kb
green open cwl-2019.01.07 FAG8zJhESDii3pN57zpz4w 5 1 41951 0 87.4mb 43.7mb
green open cwl-2019.01.08 uHtohzEIQQ29DLeU19hoYw 5 1 38243 0 76.4mb 38.2mb
green open cwl-2019.01.11 rbZhzEm9Rm6HcK-LYhfIXA 5 1 72 0 688.6kb 344.3kb
green open cwl-2019.01.12 XSoUcg5dT5W6Ms5G-MvUhw 5 1 34 0 924.7kb 462.3kb
green open cwl-2019.01.14 bKVa8f60T5Otx3NsTbi_Ww 5 1 116 0 1mb 534.8kb
green open cwl-2019.01.15 krK5aJbKTSulFhY2AFgQkg 5 1 39623 0 96.4mb 48.2mb
green open cwl-2019.01.16 gqFFuD9iSoCbw5VQKuOcJA 5 1 21287 50 55.8mb 27.9mb
green open cwl-2019.01.17 CcKP7cBcRxiegVo2OzWn8g 5 1 42473 30 100.7mb 50.3mb
green open cwl-2019.01.18 XjEWdXloSV2Z3I7u-xuRHA 5 1 3659 0 10.5mb 5.2mb
green open cwl-2019.01.20 ugu1aDR-SdWDDYLGdtrANQ 5 1 168352 150 394.5mb 197.2mb
green open cwl-2019.01.21 feMeFsrWT2WJhdkfJg53tg 5 1 1191886 462 2.6gb 1.3gb
green open cwl-2019.01.22 cyJfkkPFTF2AN_zDSUD7NQ 5 1 849615 23981 2gb 1gb
green open cwl-2019.01.23 9dLd3boWRle1cpuBb-CVrQ 5 1 222460 7010 555.1mb 277.5mb
green open cwl-2019.01.24 f1cUJVL-TVGo5gjUMPfsNQ 5 1 393201 1239 1003.1mb 501.5mb
green open cwl-2019.01.25 ttQ4mV2eQiWWZ4UABTzSaQ 5 1 401 0 2.1mb 1mb
green open cwl-2019.01.26 PyD74xtBQFi7bbSxYtFy5A 5 1 12 0 391.1kb 195.5kb
green open cwl-2019.01.28 8igksaDQR9O_-EGqpj80hA 5 1 142669 762 362.6mb 181.3mb
green open cwl-2019.01.29 D-NnqhTLQ6iJjz8md28f-g 5 1 63452 100 144.8mb 72.4mb
green open cwl-2019.01.30 jediOXspTO-mXC9CfBJ01A 5 1 455025 1167 1gb 513.9mb
green open cwl-2019.01.31 PHfzB-uHRiyEx_e0GPGjbA 5 1 87459 221 219.4mb 109.6mb
green open cwl-2019.02.01 Ukmd1WVdS9-9yfbAdCFyvA 5 1 1029616 4828 2.4gb 1.2gb
green open cwl-2019.02.02 LrxSr_bjQt-ilMgz2uEzbQ 5 1 976646 349 2.2gb 1.1gb
green open cwl-2019.02.04 yT7sppZ_R6euWD5bQaYnFA 5 1 102 0 928.2kb 528.8kb
green open cwl-2019.02.05 3FD5F6OGRWurUMHON4y_og 5 1 1207 22 4.1mb 2.1mb
green open cwl-2019.02.06 BIa1FWG_RaaIIKtdEJrf9g 5 1 1774587 7986 4.2gb 2.1gb
green open cwl-2019.02.07 hfCcUVB9TOuh-4ngtU9ggA 5 1 3746 21 11mb 5.4mb
green open cwl-2019.02.08 48KyewUHRY2xk3tAiJwDAg 5 1 95 0 1mb 565.1kb
green open entities 2n0Y5_YsSMmsGujPF8jTPA 5 1 22 0 2.8mb 1.4mb
green open graphson sUKIJnBhT4yKaDpqntwvaA 5 1 69862 13540 971.1mb 485.5mb
green open graphsoncanonical O3jFL2daQCqhj7cIXHHaOg 5 1 1882522 17258 6.6gb 3.3gb
green open mappingfiles ZURK1K33RA2bqe4e3cE9Ow 5 1 22 0 484.1kb 242kb
green open user GYliU21RR8u2Lrv4qcwErw 5 1 1 0 235.7kb 117.8kb
green open users orWsJmYITOC6gq_FZIXUWA 5 1 912 0 446.2kb 223.1kb
green open processingfilenames cUdVjGvrSnCj7miX4hQUPw 5 1 1915585 9108 342.5mb 171.2mb
green open mappingfiles FeoTm6AaQvu3Cw8IsF4UEg 5 1 2 0 334.4kb 167.2kb
green open token oDCgLlqvRX-M1WXjBXE66g 5 1 1 0 10.9kb 5.4kb
green open specs u0LjsVxYQ6S3mbJTV3f1Sg 5 1 210 0 5.9mb 2.9mb
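As a side note on the 429s: a common client-side mitigation (not discussed further in this thread) is to back off and retry bulk writes when the cluster answers 429. A minimal Python sketch, where the endpoint, index name, and access setup are assumptions; on AWS Elasticsearch the requests would normally also need to be SigV4-signed unless the access policy allows them:

    import json
    import time
    import requests

    # Hypothetical endpoint; replace with the real AWS Elasticsearch domain URL.
    ES_URL = "https://my-domain.us-east-1.es.amazonaws.com"

    def bulk_index(docs, index, max_retries=5):
        """Send documents via the _bulk API, backing off when ES answers 429."""
        # Build the newline-delimited bulk body: one action line per document.
        lines = []
        for doc in docs:
            lines.append(json.dumps({"index": {"_index": index, "_type": "_doc"}}))
            lines.append(json.dumps(doc))
        body = "\n".join(lines) + "\n"

        for attempt in range(max_retries):
            resp = requests.post(
                ES_URL + "/_bulk",
                data=body,
                headers={"Content-Type": "application/x-ndjson"},
            )
            if resp.status_code == 429:
                # The cluster (or the service front end) is pushing back:
                # wait with exponential backoff and retry.
                time.sleep(2 ** attempt)
                continue
            resp.raise_for_status()
            # Note: a 200 response can still contain per-item failures
            # under "errors"; those should be checked as well.
            return resp.json()
        raise RuntimeError("bulk request kept returning 429 after retries")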

You have too many shards. Most of your daily indices could easily be converted to monthly ones with 3 primary shards each.
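A rough sketch of what that could look like against the 6.x REST API (the endpoint and index names are assumptions, and the old daily indices would still need to be deleted once the reindex is verified):

    import requests

    ES_URL = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical endpoint

    # Template so future log indices (including monthly ones such as cwl-2019.02)
    # are created with 3 primary shards instead of the default 5.
    requests.put(
        ES_URL + "/_template/cwl-monthly",
        json={
            "index_patterns": ["cwl-*"],
            "settings": {"number_of_shards": 3, "number_of_replicas": 1},
        },
    )

    # Merge the existing daily indices for one month into a single monthly index.
    # For large indices this can take a while; wait_for_completion=false would
    # run it as a background task instead.
    requests.post(
        ES_URL + "/_reindex",
        json={
            "source": {"index": "cwl-2019.01.*"},
            "dest": {"index": "cwl-2019.01"},
        },
    )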

Hi Mark,
Thanks for the reply. So reducing the number of shards for the daily indices and adjusting the refresh interval should improve performance? Could you also shed some light on another issue I observe: even when the cluster is idle, usage of the 32 GB of memory stays very high, around 95% on average. What could be causing this, and how could I reduce the memory footprint?
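For reference, the refresh interval mentioned above is a dynamic per-index setting; a sketch of raising it on the log indices (the endpoint and the 30s value are assumptions, chosen as a common setting for write-heavy indices):

    import requests

    ES_URL = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical endpoint

    # Refresh less often than the default 1s so more throughput is left for
    # bulk indexing; documents then take up to 30s to become searchable.
    requests.put(
        ES_URL + "/cwl-*/_settings",
        json={"index": {"refresh_interval": "30s"}},
    )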

I would recommend watching this webinar which discusses optimising storage in order to reduce heap usage and be able to hold more data per node.
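One quick check on the memory question is to compare JVM heap usage with overall OS memory per node, since a high OS figure often just reflects the filesystem cache; a sketch, with the endpoint as an assumption:

    import requests

    ES_URL = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical endpoint

    # heap.percent is the JVM heap pressure Elasticsearch actually feels;
    # ram.percent is total OS memory, which includes filesystem cache.
    resp = requests.get(
        ES_URL + "/_cat/nodes",
        params={"v": "true", "h": "name,node.role,heap.percent,ram.percent"},
    )
    print(resp.text)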
