My Elasticsearch data keeps getting corrupted, and the latest data becomes unavailable. As a temporary workaround, we delete the index for the most recent date; after that, Logstash can send data to Elasticsearch again, but this is not a permanent solution. Can someone please help me solve this problem?
Below is the error:
[2019-03-26T01:55:46,924][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[ppe-prod-primary-tomcat-2019.03.26][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[ppe-prod-primary-tomcat-2019.03.26][0]] containing [index {[ppe-prod-primary-tomcat-2019.03.26][ppe-prod-primary-tomcat][8-LHuGkBRv3HjZ_DqmqK], source[n/a, actual length: [16.5kb], max length: 2kb]}]]"})
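For anyone looking into this: since the error says the primary shard of the daily index is not active, a first step could be to ask the cluster itself why the shard is unassigned. A minimal sketch of the diagnostics, assuming Elasticsearch is reachable on localhost:9200 (adjust the host, and the index name if yours differs):

```shell
# Overall cluster status (red means at least one primary shard is unassigned)
curl -s 'http://localhost:9200/_cluster/health?pretty'

# List shards for the failing daily index and their current state
curl -s 'http://localhost:9200/_cat/shards/ppe-prod-primary-tomcat-2019.03.26?v'

# Ask the allocation decider why the primary shard is not being assigned
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/_cluster/allocation/explain?pretty' \
  -d '{
    "index": "ppe-prod-primary-tomcat-2019.03.26",
    "shard": 0,
    "primary": true
  }'
```

The allocation-explain output usually names the concrete cause (e.g. disk watermark exceeded, corrupted shard data, or no eligible node), which would narrow down why only the latest index is affected.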