The Logstash loggers are continuously throwing the errors below:
```
"retrying failed action with response code: 503", :level=>:warn}
{:timestamp=>"2018-07-30T16:58:20.262000+0530", :message=>"too many attempts at sending event. dropping:
```
The Elasticsearch logs don't show any errors.
Logstash version: 1.7.0
Elasticsearch version: 1.6.0
As far as I can tell, a few indices are in RED state, and some of the shards in those indices are UNASSIGNED. I don't have enough knowledge to debug further.
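A minimal sketch of how the RED indices and UNASSIGNED shards can be inspected with the `_cluster` and `_cat` APIs (the `localhost:9200` endpoint is an assumption; adjust for your cluster):

```shell
# Overall cluster health: status, node count, and the unassigned_shards total
curl -s 'http://localhost:9200/_cluster/health?pretty'

# List all indices and filter down to the RED ones
curl -s 'http://localhost:9200/_cat/indices?v' | grep red

# Show every shard and its state; keep only the unassigned ones
curl -s 'http://localhost:9200/_cat/shards' | grep UNASSIGNED
```

The `_cat/indices` output also shows per-index shard and replica counts, which is useful when deciding how much to reduce the total shard count.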
You have far too many shards for that size of heap and need to reduce the count significantly, or scale the cluster up/out, in order to get a stable cluster. As you are on such an old version, this will require you to delete and/or reindex data if you cannot increase resources.
Even if you add resources, you will probably need to change your sharding policy and reduce the shard count substantially; having so many small shards is very inefficient. I do not have any recommendation about how much to add, as I have not used this version in years, but adding another data node of the same size while setting the number of replicas to 0 might give you enough headroom to restructure your shards.
Thank you, sir.
I am going to perform these actions and will let you know:
Reduce the shards per index from 3 to 1
Delete indices older than 3 months
Set the replica count to zero
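The three actions above can be sketched as curl calls against a 1.x cluster. The template name, the `logstash-*` index pattern, and the concrete old-index name are assumptions; substitute your own:

```shell
# 1. Make new indices get 1 primary shard instead of 3. Shard count is fixed
#    at index creation, so a template only affects indices created after this;
#    existing indices keep their current shard count.
curl -XPUT 'http://localhost:9200/_template/logstash_single_shard' -d '{
  "template": "logstash-*",
  "settings": { "number_of_shards": 1, "number_of_replicas": 0 }
}'

# 2. Delete old indices (example month pattern; adjust to your retention)
curl -XDELETE 'http://localhost:9200/logstash-2018.04.*'

# 3. Drop replicas to zero on all existing indices to free heap immediately
curl -XPUT 'http://localhost:9200/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'
```

Note that step 3 takes effect right away, while step 1 only pays off as daily indices roll over; the existing 3-shard indices shrink the total count only as they age out and are deleted.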
Still, instead of adding one more node, I hope that increasing resources might help. Please advise on this irrespective of the version. It's not a problem; it's a kind of learning for us.