Increase disk allocation for Elasticsearch on Linux

We are trying to index large datasets into Elasticsearch, and indexing has stopped because the disk watermark was reached and the indices were set to read-only.

We ran the command

GET /_cat/allocation?v

and from the output we can see that Elasticsearch has 10 GB of disk available and 95% of it is occupied. We have more free space on the machine that could be given to Elasticsearch.

We are trying to figure out how to increase the space available to Elasticsearch. Any pointers would be helpful.
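For reference, the watermark thresholds that trigger this read-only behaviour can be inspected through the cluster settings API; a minimal sketch, assuming the node listens on localhost:9200:

curl -s 'localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty' | grep 'disk.watermark'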

This really depends on your Linux setup: the file systems in use, whether you're on cloud storage or local drives, and so on. Having only 10GB is very small, but maybe a default root file system was set up that way. Typically you'd see very large (1TB+) root disks, or a second disk mounted specifically for the Elasticsearch data directory. These aren't really Elasticsearch questions, though; it's all about Linux storage.
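If you just need indexing to resume while you sort out the storage, the watermarks can be raised temporarily with a transient cluster setting. A sketch only; the percentages here are illustrative, and the real fix is more disk:

curl -s -X PUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}'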

We do have space available on the disk. We tried running the free -g command and we see around 15 GB available. We are unable to figure out how to tell Elasticsearch to use it. It sounds trivial, but we are trying to understand where Elasticsearch saves its data, how it calculates the available space, and how to tell it to use the space that is available instead of restricting itself to 10 GB.

You can see where it stores data in the config (the path.data setting in elasticsearch.yml); by default that's somewhere like /var/lib/elasticsearch or /var/lib/elasticsearch/data.
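If you mount a bigger disk, you can point Elasticsearch at it via path.data. A minimal sketch, assuming the Debian/RPM package layout and a hypothetical mount at /data (stop the node and move the existing data directory over before restarting):

# /etc/elasticsearch/elasticsearch.yml
path.data: /data/elasticsearch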

Your node stats will show you too, e.g. GET /_nodes/stats, which for a cluster I'm looking at shows:
"data" : [
{
"path" : "/var/lib/elasticsearch/nodes/0",
"mount" : "/var/lib/elasticsearch (/dev/nvme1n1p1)",
"type" : "xfs",
"total_in_bytes" : 1073216491520,
"free_in_bytes" : 432963375104,
"available_in_bytes" : 432963375104

What is your 'df -h' output? That shows your mounted file systems, space, etc. (Note that free -g reports memory, not disk; df -h is what shows disk space.)
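Also note that once the flood-stage watermark is hit, Elasticsearch puts an index.blocks.read_only_allow_delete block on the affected indices. On 7.4 and later it removes the block automatically once disk usage drops back below the high watermark; on older versions you have to clear it yourself after freeing space, again assuming localhost:9200:

curl -s -X PUT 'localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '
{ "index.blocks.read_only_allow_delete": null }'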

