After installing an ELK stack with all configs left at their defaults and creating our dashboards, we are experiencing two major issues:

1. Elasticsearch stops working once disk usage reaches 95%. Is there any fine tuning regarding disk usage?
2. We frequently see errors like "3 of 24 shards failed". What could cause this, and how can we solve it?

Any other fine tuning of the ELK stack is also welcome.
Nothing works when the disk is 100% full, and 95% is almost 100%, so this behaviour is not unreasonable. Running out of disk space is a Very Bad Thing™, and the disk-based shard allocator is there to protect you from the consequences of that. You can adjust its settings if truly needed, but it's preferable to add more disk space.
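If you really do need to adjust the allocator, the disk watermarks can be changed through the cluster settings API. A sketch, assuming Elasticsearch is reachable on localhost:9200; the percentages below are illustrative, not recommendations (the defaults are low 85%, high 90%, flood_stage 95%):

```shell
# Raise the disk watermarks cluster-wide (illustrative values only —
# freeing or adding disk space is the safer fix).
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'
```

Note that once a node crosses the flood-stage watermark, Elasticsearch marks its indices read-only (`index.blocks.read_only_allow_delete`); after freeing disk space you may need to clear that block before writes resume.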
You will need to look at the Elasticsearch logs to find more information about this. If you need help interpreting them, please share them here and we'll do our best.
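Alongside the logs, the cluster APIs can show which shards are failing or unassigned and why. For example (assuming the cluster is on localhost:9200):

```shell
# List all shards with their state — look for UNASSIGNED shards
# and the reason Elasticsearch gives for them.
curl -s "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason"

# With no request body, this explains the first unassigned shard it finds.
curl -s "localhost:9200/_cluster/allocation/explain"
```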
I will look through the logs and report back.
Regarding disk space, is there any fine tuning of shards or indices that could help reduce Elasticsearch's disk consumption?
Yes, there are some tips for tuning for disk usage in the reference documentation.
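To illustrate two of the commonly cited techniques from those docs (a sketch only — `my-index` is a hypothetical index name, and the right settings depend on your data):

```shell
# Create an index with the more aggressive compression codec,
# trading some CPU at index time for smaller storage.
curl -X PUT "localhost:9200/my-index" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.codec": "best_compression"
  }
}'

# For indices that are no longer written to, force-merge down to a
# single segment to reclaim space from deleted/updated documents.
curl -X POST "localhost:9200/my-index/_forcemerge?max_num_segments=1"
```

Force-merging should only be run on read-only indices, since merged segments of an actively written index will keep accumulating.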
Thanks a lot