Unfortunately, my Elasticsearch stopped working and I'm not sure what to do. Please have a look at my logs:
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][doc][config:6.2.3]: routing [null]]
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
[ERROR][o.e.x.w.e.ExecutionService] [EC2AMAZ-1763048] failed to update watch record [ArTI0TAdSnGq1KU4pC80rA_logstash_version_mismatch_24a9faf8-c860-4201-9618-e848f723cbda-2018-04-18T09:41:49.266Z]
org.elasticsearch.ElasticsearchTimeoutException: java.util.concurrent.TimeoutException: Timeout waiting for task.
This means that your JVM does not have enough memory to continue. You need to increase the heap size, decrease the usage, or add more nodes to your cluster so that the load can be spread around.
Once the server is up I would also recommend checking the /_nodes/stats API endpoint to try to determine what is using the heap memory, so you can adjust accordingly.
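For example, to see per-node heap usage (assuming the default localhost:9200 binding; adjust for your setup):

    curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max'
    curl -s 'localhost:9200/_nodes/stats/jvm?pretty'

The first gives a quick per-node heap summary; the second returns the full JVM stats.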
I think the heap memory in my jvm.options is 4GB, and you are suggesting I reduce it to 1GB? The error I got is heap out of memory, so that's quite confusing! Is my understanding correct?
You got an OOM. Increase your heap size from 4GB to 6GB and see how it handles the load; if you hit the same error, increase it to 8GB for both min and max. Please ensure you leave the same amount of RAM available for the OS and its file handles.
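For reference, the heap is set in jvm.options with matching minimum and maximum values, e.g. with the 6GB suggested above:

    # jvm.options - min and max heap should match
    -Xms6g
    -Xmx6g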
Thank you very much for your suggestion @JKhondhu. However, my system RAM is 8GB, so as per the blog post you suggested, I shouldn't go beyond 4GB of maximum heap size, right?
Sure, thanks @JKhondhu. I'm not sure about the OS logs :( If possible, please let me know how to debug them.
One more thing I'm unsure about: when I create indices based on date, I end up with all the issues that stop Elasticsearch, but when I create only one index there are no issues and everything is fine. Could you suggest why this is happening?
You will need to review the elasticsearch.log from before, during, and after you saw this fatal logging: fatal error in thread [elasticsearch[EC2AMAZ-1763048][bulk][T#2]], exiting java.lang.OutOfMemoryError: Java heap space.
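On a default package install the log usually lives under /var/log/elasticsearch/ (the path and file name below are assumptions; adjust for your install):

    # show context around the fatal OOM entry
    grep -B 5 -A 20 'OutOfMemoryError' /var/log/elasticsearch/elasticsearch.log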
How are you creating daily indices? Logstash?
How is data being indexed?
What troubleshooting have you done up until now?
I don't see why daily indices would be causing issues here, unless your cluster is simply too small to cope with the amount of data you wish to onboard.
Yes, I'm using Logstash to create the indices, and the data is being indexed based on timestamp.
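A daily-index Logstash output of the kind I'm using looks roughly like this (the hosts and index pattern below are illustrative, not my exact values):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]          # illustrative
        index => "logstash-%{+YYYY.MM.dd}"   # one new index per day
      }
    }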
So far I have tried different heap memory settings while frequently checking the logs, but that didn't help.
When I created multiple indices based on timestamp, many shards were created, which I think caused the OOM.
What helped was creating a single index, which ended up creating comparatively few shards.
I'm still not sure why so many shards are created when I create multiple indices.
You can modify this in your index settings or templates. It may just be a case of ensuring all your new indices are created with a lower number of shards, for example 1 primary and 0 replicas (1P, 0R). Note that in Elasticsearch 6.x each new index defaults to 5 primary shards and 1 replica, so daily indices multiply your shard count quickly.
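A minimal template sketch for that (the logstash-* pattern and template name are assumptions based on your setup; this syntax is for Elasticsearch 6.x):

    curl -XPUT 'localhost:9200/_template/logstash-shards' -H 'Content-Type: application/json' -d '
    {
      "index_patterns": ["logstash-*"],
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0
      }
    }'

Every index created after this that matches logstash-* will get 1 primary shard and 0 replicas instead of the defaults.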
Perhaps you need to decide on a retention period and delete indices older than X days so your small cluster can cope - or grow it out, vertically or horizontally, to fit your needs!
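For example, deleting one old daily index by hand (the index name below is illustrative); Curator can automate this on a retention schedule:

    curl -XDELETE 'localhost:9200/logstash-2018.01.01'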