in my Logstash logs, and data doesn't reach the Elasticsearch instance.
Every index this occurs on carries an index write block, which I have to manually set to false every day or every couple of hours; after that, the logs are handled as usual.
Even when I set the block to false in my index template, it still gets overridden.
I have more than enough storage, which is the only explanation I have found in any discussion on this topic.
My Elasticsearch version is 7.17.15 and my Logstash version is 7.9.3.
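For reference, this is roughly what I run each time to clear the block (`my-index` is just a placeholder for the affected index):

```
# Check whether the index currently has a write block set
GET my-index/_settings?filter_path=*.settings.index.blocks

# Clear the write block manually
PUT my-index/_settings
{
  "index.blocks.write": false
}
```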
Check your cluster settings to see if the cluster.routing.allocation.disk.watermark.low and cluster.routing.allocation.disk.watermark.high values are set appropriately.
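For example, the current values (including defaults) can be inspected with something like:

```
GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk.watermark*
```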
Hello, I do get this info in my Elasticsearch logs, though:
```
[2024-06-26T02:51:48,958][INFO ][o.e.x.i.IndexLifecycleTransition] [Node1] moving index [Logs] from [{"phase":"warm","action":"forcemerge","name":"readonly"}] to [{"phase":"warm","action":"forcemerge","name":"forcemerge"}] in policy [Hot_warm]
```
The message mentions a step named readonly, but nowhere in my ILM policy or templates do I define a read-only phase.
I know you can check the read-only box in Kibana, but I checked and it is unchecked.
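Judging by the log line itself, the readonly step seems to belong to the warm phase's forcemerge action rather than to a separate phase. The current step and the policy definition can be inspected like this (the index name Logs and the policy name Hot_warm are taken from the message above):

```
# Show which ILM phase/action/step the index is in right now
GET Logs/_ilm/explain

# Show the full definition of the policy named in the log line
GET _ilm/policy/Hot_warm
```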