Performance degrading after a couple of weeks

On ES 6.4: I'm finding that over the course of 20 days the Elastic Stack grinds to a very slow crawl. JVM heap usage gets to 99.9% and nearly everything times out. If I close the indexes from the previous couple of weeks, heap usage drops below 50% and it's smooth as silk again.

Any idea where to start looking?
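
In case it's useful, a minimal sketch of scripting that "close older indices" workaround might look like the following. It assumes daily time-based index names (the `logstash-*` pattern and cutoff date are placeholders) and a single unauthenticated node at localhost:9200.

```python
# Sketch of the close-old-indices workaround; pattern, cutoff and host are assumptions.
import requests

ES = "http://localhost:9200"

# List index names matching the (hypothetical) daily pattern.
indices = requests.get(f"{ES}/_cat/indices/logstash-*?h=index&format=json").json()

for entry in indices:
    name = entry["index"]
    # Close anything older than a chosen cutoff; zero-padded dates sort lexically.
    if name < "logstash-2018.09.15":
        requests.post(f"{ES}/{name}/_close")
        print("closed", name)
```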

How much data do you have in the cluster? How many indices and shards? Have you optimised your mappings?

It's about 16-20 GB a day, in 5 indexes that all have 2 shards, with the exception of one that has 5 shards.

I have mappings in place for all but two indexes; one is very small (<5m/day), the other is about 200m/day.

6 daily indices with a total of 15 primary shards is going to result in quite small shards. Over 20 days that is 300 shards. We generally recommend keeping shards reasonably large, as lots of small shards are inefficient: each shard carries its own overhead in heap and cluster state.

It also sounds like you do not have enough heap space for the data and mappings you have in place. What is the specification of the nodes with respect to CPU, RAM and heap? What type of storage do you have? What is the size of the cluster? If you can share the full output of the cluster stats API, it would also provide us with valuable information about the cluster.
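
If it helps, a quick way to pull those cluster stats is something like the sketch below, assuming a single unauthenticated node reachable at localhost:9200.

```python
# Sketch: fetch cluster stats and print the fields most relevant to shard count
# and heap pressure; host and lack of auth are assumptions.
import json
import requests

stats = requests.get("http://localhost:9200/_cluster/stats").json()

print("indices:      ", stats["indices"]["count"])
print("total shards: ", stats["indices"]["shards"]["total"])
print("heap used (B):", stats["nodes"]["jvm"]["mem"]["heap_used_in_bytes"])
print("heap max (B): ", stats["nodes"]["jvm"]["mem"]["heap_max_in_bytes"])

# Or dump the whole response to paste into the thread.
print(json.dumps(stats, indent=2))
```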

Thank you for your reply. I guess I was totally misunderstanding how to allocate shards. I'll work on lowering that to 1 shard for each index and then merging the existing ones.
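
For what it's worth, a hedged sketch of that change might look like this: a 6.x index template so new daily indices get a single primary shard, plus the shrink API for existing multi-shard indices. The `myindex-*` pattern and index names are placeholders; on a single-node cluster the usual step of relocating all shards to one node isn't needed.

```python
# Sketch, not a drop-in script: template for new indices + shrink for an existing one.
import requests

ES = "http://localhost:9200"

# 1. Legacy (6.x) index template so newly created daily indices get 1 primary shard.
requests.put(f"{ES}/_template/one_shard_daily", json={
    "index_patterns": ["myindex-*"],          # placeholder pattern
    "settings": {"number_of_shards": 1, "number_of_replicas": 0},
})

# 2. Shrink an existing multi-shard index down to 1 primary shard.
src, dst = "myindex-2018.09.01", "myindex-2018.09.01-shrunk"   # placeholders
requests.put(f"{ES}/{src}/_settings", json={"index.blocks.write": True})
requests.post(f"{ES}/{src}/_shrink/{dst}", json={
    "settings": {"index.number_of_shards": 1},
})
```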

Just 1 node for everything right now. I'm not sure about the CPU power, but it's 8 processors, 8 GB of RAM, and 2 GB allocated to the heap. Disk space is 500 GB, of which a month or so of data fills up about half. As far as I know it's regular magnetic disks.

Assuming Elasticsearch is the only thing running on that node, we generally recommend using 50% of the memory for the heap. 2 GB sounds a bit small for your setup.
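
For reference, the heap is set in config/jvm.options. On an 8 GB box with Elasticsearch alone, something along these lines would follow that 50% guideline; the 4g value is an example only, and you'd size it lower if anything else (such as Logstash) shares the host.

```
# config/jvm.options — example only; 4g assumes ~8 GB RAM and Elasticsearch alone on the node
-Xms4g
-Xmx4g
```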

I think the reason I had 2 GB was that Logstash was throwing JVM errors until I raised its heap to 2 GB as well.
