Disk Capacity

I have a 3-node Elasticsearch cluster. The master node has 20 GB of disk, the others have 12 GB each. When I check the data path on node 3, it shows the disk is at full capacity. How can I check why node 3 is full? Is there any way to solve this without deleting all the data on node 3?

When I try to send logs to the cluster, it only accepts part of them.

If you are using monitoring, then do all nodes have the same number of shards on them?
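If it helps, the shard distribution can be listed with the _cat/shards API (this assumes Elasticsearch is reachable on localhost:9200; adjust the host and port to your setup):

    curl -XGET 'localhost:9200/_cat/shards?v'

Each row shows a shard and the node it lives on, so you can count how many shards each node holds.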

Currently I can't access Monitoring in Kibana; it shows an internal error 500. I found out the disk was full by checking disk space with a Linux command.

That is very little disk space for the data nodes. Why so little? What are you expecting to use the cluster for?

OK, what about the _cat APIs? What does df -h show on the nodes?
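For reference, something along these lines should show it, assuming Elasticsearch listens on localhost:9200 and the data path is /var/lib/elasticsearch (both are assumptions; adjust to your setup):

    # disk usage and shard count per node, as Elasticsearch sees it
    curl -XGET 'localhost:9200/_cat/allocation?v'
    # indices sorted by size, largest first
    curl -XGET 'localhost:9200/_cat/indices?v&s=store.size:desc'
    # OS-level view of the filesystem holding the data path
    df -h /var/lib/elasticsearch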


I'm not sure how I should estimate the disk space; that's why it only has around 50 GB in total. The cluster is used to store Metricbeat data from 10 PCs and custom logs from some applications, which are shown on dashboards.
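One rough way to estimate growth is to look at how big a single day of data already is (the metricbeat-* index pattern below is an assumption; use whatever naming your setup produces):

    curl -XGET 'localhost:9200/_cat/indices/metricbeat-*?v&h=index,pri,rep,store.size&s=store.size:desc'

Multiplying the per-day size by the number of days of retention you want gives a rough lower bound for the disk you need.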

Each node has the same number of shards, and df -h shows 100% usage on the path where Elasticsearch stores its data.
Now my cluster shows only 2 nodes. I already tried curl -XDELETE to delete the big index, but I can't start the service on node 3.
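For anyone reading later, the delete call looks roughly like this (the index name is only an example; pick the real one from _cat/indices):

    curl -XDELETE 'localhost:9200/metricbeat-7.4.2-2019.11.20'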

A bit off topic, but are you able to share the error message from the kibana log file when you attempt to visit the monitoring ui? Also, what version of the stack are you using?

I'm using stack version 7.4.2. I didn't set up logging for Kibana. When I access the UI, it shows a 500 internal server error. After I used curl to delete some unused indices, my cluster is working now.
The problem is that node 3 is disconnected from the cluster, its disk is still 100% full, and I can't start the service.
Is there any way to recover node 3 so it rejoins the cluster?
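One more thing worth checking after space has been freed: when a node crosses the flood-stage disk watermark, Elasticsearch marks its indices read-only. Depending on the exact version this block may be released automatically once disk usage drops, but it can also be cleared manually, for example:

    curl -XPUT 'localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'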

I used lvextend to increase the disk size by 100 MB and now it's working! I don't know if there is another way, but the service only needed a small amount of free space to restart.
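For reference, the sequence is roughly the following (the volume path and filesystem type are examples; adjust to your LVM layout, and use xfs_growfs on the mount point instead of resize2fs if the filesystem is XFS):

    # grow the logical volume holding the Elasticsearch data path by 100 MB
    lvextend -L +100M /dev/vg0/es_data
    # grow the ext4 filesystem to fill the new space
    resize2fs /dev/vg0/es_data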
