Now Elasticsearch is running slowly. It has 484 indices, 1 replica per document, and 4832 shards.
Woah. That's way too many. You need to reduce the number of shards per index. Until your daily indexes reach a few tens of GB, you shouldn't go beyond one shard per index.
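One way to enforce that default is an index template, so every new Logstash index is created with a single shard. A sketch using the legacy template API (the template name and index pattern here are examples; adjust them to your naming scheme):

```json
PUT /_template/logstash_one_shard
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```

Any index whose name matches `logstash-*` and is created after this template exists will pick up these settings; existing indices are not changed.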
To summarize my indices: they store logs from Logstash.
Currently the first index collects less than 1 GB per month, so I guess I can reshard it from daily to monthly, and I suppose I can also switch the creation of new indices from daily to monthly.
The second index currently stores only 1 month of logs (73,000,000 docs / 41 GB); I'm going to change that index to 1 shard.
My plan for this operation is:
1. Take a snapshot of the indices
2. Create one index with one shard for each month in which we collected data
3. Fill each monthly index with the daily index data, using the elasticdump utility
4. Remove the old daily indices
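The steps above could look roughly like this. This is a sketch, not a tested runbook: the repository name `my_backup`, the snapshot name, and the index names are all examples, and it assumes a snapshot repository is already registered:

```sh
# 1. Snapshot everything first (repository "my_backup" must already exist)
curl -XPUT 'localhost:9200/_snapshot/my_backup/pre_reshard?wait_for_completion=true'

# 2. Create the monthly index with a single shard
curl -XPUT 'localhost:9200/logstash-2017.01' -d '{
  "settings": { "number_of_shards": 1, "number_of_replicas": 1 }
}'

# 3. Copy each daily index into the monthly one with elasticdump
elasticdump --input=http://localhost:9200/logstash-2017.01.01 \
            --output=http://localhost:9200/logstash-2017.01 \
            --type=data

# 4. Only after verifying the copy, drop the daily index
curl -XDELETE 'localhost:9200/logstash-2017.01.01'
```

Verify document counts on the monthly index match the sum of the daily ones before deleting anything; the snapshot is your safety net if they don't.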
Do you think it is a good plan? Would you have a better alternative? Would you recommend skipping monthly indices and sticking with daily ones?
That looks like a reasonable plan. I generally prefer daily indexes because a) correcting mapping mistakes is much faster and b) you can clean up older indexes with a higher resolution.
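On point b), daily indexes let you drop exactly the days that fall outside your retention window instead of an entire month at once. For example (index names are illustrative):

```sh
# Delete a single expired day; with monthly indices you could only
# drop the whole month, losing up to 30 days of still-wanted logs.
curl -XDELETE 'localhost:9200/logstash-2016.12.01'
```

Tools like Curator automate this kind of age-based cleanup if you don't want to script it yourself.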