I'm facing an issue in almost all my indices when I try to query data spanning more than one day. Below is the error screenshot; I'm not sure how to resolve this, so any suggestion would be helpful. Note: I've done a vanilla ELK installation on a single server, i.e. Elasticsearch, Kibana, Logstash, Filebeat, and Metricbeat are all installed on one server.
Could someone please also help me figure out whether this is due to JVM heap size? If yes, what would be the ideal JVM heap allocation for each component (Elasticsearch, Logstash, etc.), and how can it be configured?
Screenshots are attached; any suggestion would be helpful. I'm not sure what other data would be useful here!
A couple of thoughts. I am assuming this is a "Test/POC" architecture, as we would not recommend running all those components on a single server unless they were containerized / isolated; otherwise they will all be noisy neighbors, competing with each other for RAM, CPU, etc.
What are the specs of this server?
I suspect Elasticsearch may be getting starved, perhaps of CPU and most likely of RAM.
It certainly looks like your Elasticsearch may be memory constrained: it looks like you only gave it 1 GB of heap, and we recommend a maximum of 20 shards per 1 GB of heap (I like 10 shards per 1 GB), yet you have 86 shards, about 4x the recommendation.
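As a sanity check, you can count your shards and check cluster health with the `_cat` APIs. A minimal sketch, assuming Elasticsearch is reachable on `localhost:9200` without security enabled (adjust the host and add credentials if your setup differs):

```shell
# Cluster health: status, node count, and total active/unassigned shards
curl -s "localhost:9200/_cat/health?v"

# Count shards by state (STARTED vs UNASSIGNED)
curl -s "localhost:9200/_cat/shards?h=state" | sort | uniq -c

# Shards per index, to spot indices with more shards than they need
curl -s "localhost:9200/_cat/indices?v&s=pri.store.size:desc"
```

If the total shard count is far above 20 per GB of heap, reducing shard count (fewer primaries per index, or consolidating small daily indices) helps as much as adding heap.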
I would give the Elasticsearch JVM an 8 GB heap and start it first so it can claim the memory.
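The heap is set via `jvm.options`. On recent versions the recommended place is a custom file under `jvm.options.d/` rather than editing the main file. A sketch, assuming a package (deb/rpm) install where the config lives under `/etc/elasticsearch` (the path will differ for archive installs), with the usual rules that `-Xms` and `-Xmx` must match and the heap should stay at or below roughly 50% of the machine's physical RAM:

```
# /etc/elasticsearch/jvm.options.d/heap.options
# Set initial and max heap to the same value so the JVM claims it all at startup
-Xms8g
-Xmx8g
```

Restart Elasticsearch after the change, then verify with `curl -s "localhost:9200/_cat/nodes?v&h=name,heap.max"`. Logstash has its own `jvm.options` file and is typically fine with a much smaller heap (1 GB is a common starting point); Filebeat and Metricbeat are Go programs with no JVM at all.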
You also have unassigned shards... that is not good; you need to fix that.
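The allocation explain API will usually tell you directly why a shard is unassigned (again assuming `localhost:9200` with no auth):

```shell
# Explain why the cluster cannot allocate a shard
curl -s "localhost:9200/_cluster/allocation/explain?pretty"
```

One common cause on a single-node cluster: replica shards can never be assigned, because a replica cannot live on the same node as its primary. If that is what the explain output shows, dropping replicas to zero clears the yellow status:

```shell
# Set replicas to 0 on all existing indices (reasonable on a one-node test cluster,
# but it removes redundancy - do not do this on a production cluster)
curl -s -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.number_of_replicas": 0}'
```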