Indices turn yellow from a specific date - only 1% of data is shown in Kibana

Hello Community

First of all, some info about the cluster I'm using:

3 nodes: 1 master, 2 data nodes (all running CentOS 7)
Kibana Version 7.8
Elasticsearch Version 7.8
Logstash Version 7.8
Filebeat Version 7.8

The cluster has been working just fine for around 1 year. After my holidays, I noticed that the data visualized in Kibana was only about 1% of the usual volume.
We get around 5 million logs a day - now it's roughly 50,000. So I knew something was off.

I checked with the network team first, but the firewalls were still sending all the logs.

There are no errors in the Elasticsearch log itself.

So I checked the indices, and I noticed that every single index (there are 4 different indices) has been yellow since the 25th of August. There is no indication as to why this happened.
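For context, these are the checks I've run so far in Kibana Dev Tools (the requests below are the standard health and allocation APIs; nothing here is specific to my cluster):

```
# Overall cluster health (shows unassigned shard counts)
GET _cluster/health

# List only the indices that are currently yellow
GET _cat/indices?v&health=yellow

# Ask Elasticsearch why the first unassigned shard is unassigned
GET _cluster/allocation/explain
```

The allocation explain output is the most useful part - it states the concrete reason a replica could not be allocated.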

I checked the storage on the servers - which is at 45% capacity.

I don't know what to check/do anymore -

can anyone help?

Thanks in advance

What is the output from the GET /_cluster/stats?human&pretty API?

I suspect your disk might be over 85% full, which would cause replica shards to not be allocated. What is your retention period? Are you deleting old data?
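A quick way to verify this (a sketch, run from Kibana Dev Tools) is to compare per-node disk usage against the configured watermarks:

```
# Disk usage and shard count per data node
GET _cat/allocation?v

# Configured disk watermarks (defaults: low 85%, high 90%, flood stage 95%)
GET _cluster/settings?include_defaults=true&filter_path=**.routing.allocation.disk
```

If `disk.percent` for a node is above the low watermark, Elasticsearch will stop allocating new replica shards to it, which turns indices yellow.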

@warkolm @Christian_Dahlqvist
Sorry for not replying fast enough.

I did some research and @Christian_Dahlqvist was right - there were plenty of new devices delivering logs to the cluster, which ended up taking up 99% of the storage space.

I've now set up a stricter lifecycle policy and we're down to 70% again, with all indices green.
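In case it helps anyone else, this is roughly the shape of the ILM policy I ended up with. The policy name and the rollover/retention values here are made up for illustration - tune them to your own log volume:

```
PUT _ilm/policy/firewall-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

The delete phase is the important part for keeping disk usage in check: without it, rolled-over indices accumulate forever.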

Thanks for the fast replies, guys!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.