That's a message from the Java garbage collector. You can ignore it. (I am surprised you have MaxTenuringThreshold set. You need to be in a world of hurt regarding GC before you resort to tuning tenuring!)
The message we want would have been in an earlier log. I suggest you follow the instructions for recovering from a disk space issue and see if that resolves this.
PUT /totalexecution-2019/_settings
{
  "index.blocks.read_only_allow_delete": null
}

After running this, things started working.
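(For reference, the same settings API apparently accepts a wildcard target, so the block can be cleared on every index in one call. This is only a sketch of that cluster-wide variant, not something that was run as part of this thread:

PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
)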
But there are still some open questions:
1. This index lock was surely applied automatically by Elasticsearch, so how do I define a new size / threshold so it isn't hit again (see the watermark sketch below these questions)? Simply removing / deleting the index lock is not a good solution.
2. Once this lock was removed and I restarted the Elasticsearch service, it started creating indices, and when I say "creating indices" I mean not just the total execution index but other indices as well.
In other words, the above API command solved the problem for all the indices, but going by the error message in Logstash the lock was on the "total execution" index alone. So why were the other indices not created before this command was executed?
3. My Filebeat is not generating any log, and Beats still produces an error when run from the command line (the same error I mentioned in this thread 2-3 days ago). I created a separate post for the Filebeat issue; have a look when you get time.
The above three points are still open questions, even after solving the actual problem this thread was opened for.
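On the first point, if the intent is to change the disk thresholds rather than keep clearing the lock, the relevant knob should be the cluster disk watermark settings. A sketch along these lines (the percentages shown are just the documented defaults, adjust them for your disks; transient vs. persistent is up to you):

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}

The read_only_allow_delete block is applied when the flood_stage watermark is exceeded, so raising that value (or freeing disk space) is what prevents the lock from coming back.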