Today my ES cluster on Elastic Cloud froze due to high CPU consumption. Request load was light, so I took a look at the indexes to check whether something was wrong.
I found a lot of old, almost empty indexes from APM, each of which of course still occupied one shard.
After a bit of cleaning I arrived at this point:
- 213 indexes (only 10 are mine; the others are Kibana and hidden indexes)
- 21,135,526 documents
- all indexes together take 6GB of disk space
- 213 primary shards
- 1 node with 59.60GB free disk space and 2GB RAM
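For anyone who wants to reproduce these numbers on their own cluster, something like the following (run from Kibana Dev Tools, the exact column names may vary by ES version) should show which indexes and shards dominate disk usage:

```
# List all indexes (including hidden ones), largest first
GET _cat/indices?v&s=store.size:desc&h=index,pri,docs.count,store.size&expand_wildcards=all

# Show how shards are distributed across the node
GET _cat/shards?v&s=store:desc
```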
This is a screenshot of the first 100 indexes:
It seems to me that the .monitoring-es* indexes are becoming huge: 3GB of indexes in just 3 days.
- Do you think the size of .monitoring-es* is normal?
- I don't see any retention/rollover policy on .monitoring-es*. Should I create one?
- Could the sudden growth of those indexes be one of the causes of the CPU spike?
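In case it helps others with the same problem: one thing I'm considering (assuming legacy self-monitoring is what's writing these indexes) is shortening the monitoring retention via the `xpack.monitoring.history.duration` cluster setting, which defaults to 7 days. A sketch of what that would look like:

```
# Keep only 3 days of monitoring data instead of the 7-day default
# (minimum allowed value is 24h)
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.history.duration": "3d"
  }
}
```

I'm not sure whether this is the recommended approach on Elastic Cloud, or whether monitoring should instead be shipped to a separate deployment, so corrections are welcome.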
Any advice is appreciated, thanks.