Hello! I have a strange issue in my Elasticsearch cluster.
I have 3 nodes, and each node's server has 300 GB of disk space.
But Elasticsearch shows that only 20 GB is available.
How can I add more space to the nodes?
Hello there!
Have you checked what the filesystem itself reports on that node?
After executing this command:
root@master-node:/# df -H /var/lib/elasticsearch/
I get this output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 21G 18G 2.6G 88% /
Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@master-node:/var/lib/elasticsearch#
Your / volume is 21 GB, so what ES shows looks correct.
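You can also cross-check what Elasticsearch itself sees with the _cat allocation API; a minimal sketch, assuming the node listens on localhost:9200 without authentication:
curl -s 'http://localhost:9200/_cat/allocation?v'
The disk.total and disk.avail columns there should match the 21G that df reports.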
Is there a way to make more than 20 GB available?
Could you please show the results of df -h without any parameters? Let's see what you have there. By the way, is it a virtual machine?
Filesystem Size Used Avail Use% Mounted on
udev 5.9G 0 5.9G 0% /dev
tmpfs 1.2G 1.3M 1.2G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 20G 17G 2.3G 88% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 5.9G 0 5.9G 0% /sys/fs/cgroup
/dev/sda2 974M 205M 702M 23% /boot
/dev/loop0 71M 71M 0 100% /snap/lxd/21029
/dev/loop2 56M 56M 0 100% /snap/core18/2128
/dev/loop3 47M 47M 0 100% /snap/snapd/16292
/dev/loop4 56M 56M 0 100% /snap/core18/2538
/dev/loop5 68M 68M 0 100% /snap/lxd/22753
tmpfs 1.2G 0 1.2G 0% /run/user/0
/dev/loop6 62M 62M 0 100% /snap/core20/1593
/dev/loop7 62M 62M 0 100% /snap/core20/1611
Yes, it's a VM.
root@master-node:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 70.3M 1 loop /snap/lxd/21029
loop1 7:1 0 62M 1 loop
loop2 7:2 0 55.4M 1 loop /snap/core18/2128
loop3 7:3 0 47M 1 loop /snap/snapd/16292
loop4 7:4 0 55.6M 1 loop /snap/core18/2538
loop5 7:5 0 67.8M 1 loop /snap/lxd/22753
loop6 7:6 0 62M 1 loop /snap/core20/1593
loop7 7:7 0 62M 1 loop /snap/core20/1611
sda 8:0 0 300G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 34G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 20G 0 lvm /
sr0 11:0 1 1.1G 0 rom
I don't get it. Who installed this for you?
This is how it looks on my machines:
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3,9G 0 3,9G 0% /dev
tmpfs 796M 520K 796M 1% /run
/dev/sdc 20G 9,3G 9,3G 50% /
tmpfs 3,9G 0 3,9G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
/dev/sdb 9,8G 2,7G 6,6G 29% /var/log
/dev/sda 30G 24K 28G 1% /opt
{{FQDN}}:/var/nfs/elasticsearch 20G 3,2G 16G 18% /var/nfs/elasticsearch
tmpfs 796M 0 796M 0% /run/user/1001
and:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk /opt
sdb 8:16 0 10G 0 disk /var/log
sdc 8:32 0 20G 0 disk /
sr0 11:0 1 785,7M 0 rom
For this machine I use Debian 11 + LVM.
Same here, it seems weird.
I deployed this VM (via Hyper-V) using a template.
Is it your test machine or production cluster?
It's a test machine
I don't know how it works in Hyper-V, but I would try to resize the volume. Does Hyper-V say you have any free (or unallocated) space left?
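Before resizing anything on the Hyper-V side, it may be worth checking from inside the VM whether LVM already has unallocated space. A quick read-only check with the standard LVM reporting tools:
vgs    # volume groups: total size (VSize) and unallocated space (VFree)
pvs    # physical volumes: size (PSize) and free space (PFree) per PV
lvs    # logical volumes and their current sizes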
No, I didn't find such information.
I found this article explaining how to extend the volume:
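For reference, the usual sequence in guides like that, applied to your layout, looks roughly like the sketch below. The device, VG and LV names are taken from your lsblk output (the mapper name ubuntu--vg-ubuntu--lv corresponds to VG ubuntu-vg and LV ubuntu-lv); growpart comes from the cloud-guest-utils package, and resize2fs assumes the default ext4 root filesystem, so double-check both before running anything:
growpart /dev/sda 3                              # grow partition 3 into the unused part of the 300 GB disk
pvresize /dev/sda3                               # let LVM see the larger physical volume
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # give all free extents to the root LV
resize2fs /dev/ubuntu-vg/ubuntu-lv               # grow the ext4 filesystem (use xfs_growfs / for XFS)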
Thank you so much!
As this is your test cluster, you can even burn it down.
But please be very careful and patient, as doing this on your production cluster can cause loss of data.