I was very happy playing with my Filebeat / netflow platform until today.
Suddenly there was nothing to see, and I found the following message in the log file:
high disk watermark [90%] exceeded on [NX79WFORStGfAdCq26XLaw][ubuntu-elk][/var/lib/elasticsearch/nodes/0] free: 11.8gb[8.1%], shards will be relocated away from this node; currently relocating away shards totalling bytes; the node is expected to continue to exceed the high disk watermark when these relocations are complete
And it was true ... disk capacity was at 90%.
After checking my indices I saw three daily indices of about 50 GB each.
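I checked them with the cat indices API, roughly like this:

    # list indices sorted by size on disk, largest first
    curl -s "localhost:9200/_cat/indices?v&s=store.size:desc"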
So I deleted them, running something like this for each one (the index name below is a placeholder, I don't have the exact names in front of me):
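    # delete an index via the Elasticsearch delete index API
    curl -X DELETE "localhost:9200/<index-name>"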
After that I saw in the Elasticsearch log file:
low disk watermark [85%] no longer exceeded on [NX79WFORStGfAdCq26XLaw][ubuntu-elk][/var/lib/elasticsearch/nodes/0] free: 111.3gb[76.2%]
So ... I thought it would work again.
I restarted Filebeat and Elasticsearch, but then I got this in Kibana:
search_phase_execution_exception all shards failed
It seemed I had not removed the indices properly, and the database was broken.
After that I looked for the unassigned shards and removed them.
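From memory, it went roughly like this (the index name is a placeholder):

    # list shards with their state and the reason they are unassigned
    curl -s "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED
    # delete the leftover index that still had unassigned shards
    curl -X DELETE "localhost:9200/<index-with-unassigned-shards>"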
Now it is working again.
I'm using a single-node setup, since I'm learning and this is not a production environment.
The Filebeat netflow module stores about 50 GB per day ... so I have roughly three days until my disk is full again.
How can I prevent the indices from growing until they fill my disk? Is there some way of "rotating" them, let's say keeping only the last two days of data? (See the sketch below for what I have in mind.)
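I've read about index lifecycle management (ILM); would a policy with just a delete phase be the right approach? A rough, untested sketch of what I mean (the policy name is made up, and as far as I understand min_age counts from index creation when there is no rollover):

    # ILM policy that deletes indices two days after creation
    curl -X PUT "localhost:9200/_ilm/policy/keep-two-days" -H 'Content-Type: application/json' -d'
    {
      "policy": {
        "phases": {
          "delete": {
            "min_age": "2d",
            "actions": { "delete": {} }
          }
        }
      }
    }'

I guess I would also have to point Filebeat's index template at this policy (the setup.ilm.* settings in filebeat.yml?), but I'm not sure that's the recommended way.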
I would like to avoid using a second node/cluster if possible.
What is the proper way to get rid of unused data?