I am working on a setup where I load around 2.5 GB of data into ES per day and delete it once a week. I would like advice on whether it is better to load the data into a single index every day, or into a separate index per day (7 indices for 7 days, then deleting all of them once a week). Kindly explain the reasoning, as I am new to ES.
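To make the per-day option concrete, here is a minimal sketch of the usual time-based index pattern (the `logs` prefix and the date format are assumptions for illustration, not from any particular setup). The point of this pattern is that dropping one day of data becomes a cheap whole-index delete rather than an expensive delete-by-query:

```python
from datetime import date, timedelta

def daily_index_name(day, prefix="logs"):
    """Build a per-day index name, e.g. 'logs-2016.03.21'."""
    return "{}-{:%Y.%m.%d}".format(prefix, day)

def indices_for_week(start):
    """The 7 index names written in one week; all can be deleted together."""
    return [daily_index_name(start + timedelta(days=i)) for i in range(7)]

week = indices_for_week(date(2016, 3, 21))
# Removing a day's data is then a single index delete, e.g.:
#   DELETE /logs-2016.03.21
# instead of deleting 2.5 GB of documents out of one big shared index.
```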
I am able to load data into ES. I am asking about the approach so that I don't lose data if loading fails due to overload on an index or shard.
At present I am loading data into one node and one index. When I tried to load data today, the node's size reached around 6.5 GB, and after that I started getting a cluster health error on reads. Can you help me with that, and is there any way to avoid this type of failure in the future?
Initially it was set to the default, so I changed it to 8 GB by setting ES_HEAP_SIZE=8G, and I also locked the memory by setting bootstrap.mlockall: true in the elasticsearch.yml file. This is running on Windows Server 2012 R2. Is this correct? Please suggest some points.
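For reference, a minimal sketch of where these two settings typically live on an ES 1.x/2.x install (the exact paths depend on how the service was installed, so treat this as an assumption to verify against your version's docs). The heap size is an environment variable read at startup, not an elasticsearch.yml setting, while mlockall does go in the yml file. A common rule of thumb is to give the heap about half of physical RAM, so 8 GB on a 16 GB machine:

```
# Set as an environment variable BEFORE starting Elasticsearch
# (e.g. in the Windows System environment variables, or the shell
# that launches the service) -- NOT inside elasticsearch.yml:
#
#   ES_HEAP_SIZE=8g

# config/elasticsearch.yml:
bootstrap.mlockall: true
```

Putting ES_HEAP_SIZE inside elasticsearch.yml is a common cause of the startup error described below, since the yml parser does not accept it as a setting.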
Thanks for all the information. Can you please tell me where to set all these values? I am getting an error when I set them in the elasticsearch.yml file in the config folder. I have 16 GB of RAM.