I have a 3-node Elasticsearch cluster. The master node has 20 GB of disk, the other two have 12 GB each. I checked the data path on node 3 and it shows full capacity. How can I check why node 3 is full? Is there any option to solve the problem without deleting all the data on node 3?
I try to send logs to the cluster, but it accepts only part of them.
I'm not sure how I should estimate the disk space, which is why the cluster only has around 50 GB in total. It is used for storing Metricbeat data from 10 PCs and custom logs from a few applications, so they can be shown on a dashboard.
Each node has the same number of shards, and df -h shows 100% usage on the path that holds the Elasticsearch data.
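For reference, these are roughly the checks I ran; the data path, host, and port are just placeholders for my setup:

```
# Disk usage on the Elasticsearch data path (path is a placeholder)
df -h /var/lib/elasticsearch

# Per-node disk usage and shard counts as the cluster sees them
curl -XGET 'http://localhost:9200/_cat/allocation?v'

# Which shards sit on which node, and how big they are
curl -XGET 'http://localhost:9200/_cat/shards?v'
```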
Now my cluster shows only 2 nodes. I already tried curl -XDELETE to clear the big index, but I can't start the service on node 3.
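In case it's useful, this is the kind of delete request I ran (the index name here is only an example, not my real index):

```
# Delete an old index to free up disk space (index name is an example)
curl -XDELETE 'http://localhost:9200/metricbeat-7.4.2-2019.11.01'
```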
A bit off topic, but are you able to share the error message from the Kibana log file when you attempt to visit the Monitoring UI? Also, what version of the stack are you using?
I use stack version 7.4.2. I didn't set up logging for Kibana. When I access the UI, it shows a 500 Internal Server Error. After I used curl to delete some unused indices, my cluster is working now.
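In case it helps, a minimal sketch of how I could turn on Kibana file logging in kibana.yml so there is something to share next time (the log path is just an example for my setup):

```yaml
# kibana.yml: write Kibana logs to a file instead of stdout (path is an example)
logging.dest: /var/log/kibana/kibana.log
logging.verbose: false
```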
The problem is that node 3 is disconnected from the cluster, so its disk is still 100% full and I can't start the service.
Is there any way to recover node 3 so it rejoins the cluster?
I used lvextend to increase the disk size by 100 MB and now it is working! I don't know if there is another way, but it only needed a small amount of free space to restart the service.
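Roughly what I ran; the volume group and logical volume names, and the ext4 assumption, are placeholders for my setup:

```
# Grow the logical volume holding the Elasticsearch data path by 100 MB
# (VG/LV names are placeholders for my setup)
lvextend -L +100M /dev/vg_data/lv_elasticsearch

# Grow the filesystem to use the new space (assuming ext4 here;
# adding -r to lvextend would do this step in one go)
resize2fs /dev/vg_data/lv_elasticsearch
```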