I am facing a very strange problem. I am crawling some data and saving it from time to time, but the indices in my cluster are deleted automatically as I save. I checked the log files; they only contained two lines:
[2017-01-19 15:18:34,201][INFO ][cluster.metadata ] [Terror] [please_read] creating index, cause [api], templates [], shards [5]/[1], mappings []
[2017-01-19 15:18:34,372][INFO ][cluster.routing.allocation] [Terror] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[please_read][0], [please_read][4], [please_read][4], [please_read][0]] ...]).
It deletes all the indices randomly, at any time. Is it possible that this happens because of a low server configuration? I am using only 4 GB of RAM, and the crawling process inserts and updates data every 30-40 minutes.
As I said, you should read the topics I linked to and the blog posts they contain, and you should reconfigure your Elasticsearch cluster so it is not accessible from the open internet.
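As a minimal sketch of that advice (assuming the default `elasticsearch.yml` config file and the 2.x-era settings format your log output suggests), binding the node to the loopback interface keeps it off the open internet; a firewall rule in front of the host is still recommended:

```yaml
# elasticsearch.yml — bind HTTP and transport only to localhost,
# so the cluster is not reachable from outside the machine.
# (Setting name is real; whether it fits your deployment is an assumption.)
network.host: 127.0.0.1

# If remote clients must reach the cluster, put it behind a firewall
# or a reverse proxy with authentication instead of exposing port 9200.
```

After changing the setting, restart the node and verify from another machine that port 9200 is no longer reachable.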
Is it possible that it might happen because of a low server configuration? I am using only 4 GB of RAM, and the crawling process inserts and updates data every 30-40 minutes.
Have you read the links I provided? They explain that this is the result of a malicious attack on your cluster. You need to read them and follow their advice.