Only 2 of 3 ES data drives are being utilized


(Logan Hardy) #1

I'm testing Elasticsearch 0.19.8 on an 8-node cluster. Each node has three
physical drives, but only two of the three receive a significant amount of
data, even though all three are specified by path.data in
elasticsearch.yml. /hadoopdata2/elasticsearch and /hadoopdata3/elasticsearch
are being written to as expected, but /hadoopdata1/elasticsearch doesn't
seem to get anything substantial written to its indices directories. I'm
sure I'm just missing something basic; any ideas? See the examples below.
FYI, we're running these tests on a decommissioned HBase cluster, in case
you were wondering about the directories I'm using for ES data.

From elasticsearch.yml:
path.data:
/hadoopdata1/elasticsearch,/hadoopdata2/elasticsearch,/hadoopdata3/elasticsearch
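
(As an aside, the same comma-separated value can equivalently be written as
a YAML list, which makes a typo in any single path easier to spot:)

```yaml
# Equivalent YAML list form of the same setting
path.data:
  - /hadoopdata1/elasticsearch
  - /hadoopdata2/elasticsearch
  - /hadoopdata3/elasticsearch
```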

[root@HDEBS1 0]# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sda2        95G   11G    79G   13%  /
/dev/sda6       1.2T   81G   1.1T    7%  /hadoopdata1
/dev/sdb1       1.4T  132G   1.2T   11%  /hadoopdata2
/dev/sdc1       1.4T  154G   1.2T   12%  /hadoopdata3
/dev/sda3        48G  190M    45G    1%  /tmp
/dev/sda1        99M   24M    71M   25%  /boot
tmpfs           5.9G     0   5.9G    0%  /dev/shm

[root@HDEBS1 config]# du -sh /hadoopdata*/elasticsearch
2.1M /hadoopdata1/elasticsearch
54G /hadoopdata2/elasticsearch
73G /hadoopdata3/elasticsearch

[root@HDEBS1 0]# du -sh /hadoopdata*/elasticsearch/hdebes/nodes/0/indices/*
36K /hadoopdata1/elasticsearch/hdebes/nodes/0/indices/foo
276K /hadoopdata1/elasticsearch/hdebes/nodes/0/indices/users_20120819
204K /hadoopdata1/elasticsearch/hdebes/nodes/0/indices/users_20120822
372K /hadoopdata1/elasticsearch/hdebes/nodes/0/indices/users_20120823
372K /hadoopdata1/elasticsearch/hdebes/nodes/0/indices/users_20120824
396K /hadoopdata1/elasticsearch/hdebes/nodes/0/indices/users_20120825
12K /hadoopdata1/elasticsearch/hdebes/nodes/0/indices/users_test
60K /hadoopdata1/elasticsearch/hdebes/nodes/0/indices/users_test2
32K /hadoopdata2/elasticsearch/hdebes/nodes/0/indices/foo
4.0G /hadoopdata2/elasticsearch/hdebes/nodes/0/indices/users_20120819
7.2G /hadoopdata2/elasticsearch/hdebes/nodes/0/indices/users_20120822
15G /hadoopdata2/elasticsearch/hdebes/nodes/0/indices/users_20120823
12G /hadoopdata2/elasticsearch/hdebes/nodes/0/indices/users_20120824
17G /hadoopdata2/elasticsearch/hdebes/nodes/0/indices/users_20120825
12K /hadoopdata2/elasticsearch/hdebes/nodes/0/indices/users_test
52K /hadoopdata2/elasticsearch/hdebes/nodes/0/indices/users_test2
40K /hadoopdata3/elasticsearch/hdebes/nodes/0/indices/foo
18G /hadoopdata3/elasticsearch/hdebes/nodes/0/indices/users_20120819
9.9G /hadoopdata3/elasticsearch/hdebes/nodes/0/indices/users_20120822
15G /hadoopdata3/elasticsearch/hdebes/nodes/0/indices/users_20120823
15G /hadoopdata3/elasticsearch/hdebes/nodes/0/indices/users_20120824
16G /hadoopdata3/elasticsearch/hdebes/nodes/0/indices/users_20120825
12K /hadoopdata3/elasticsearch/hdebes/nodes/0/indices/users_test
64K /hadoopdata3/elasticsearch/hdebes/nodes/0/indices/users_test2
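
To compare per-path usage at a glance rather than eyeballing the du listing
above, a quick shell helper like the following can flag any data path
sitting far below the mean. The helper name and the one-tenth-of-the-mean
threshold are arbitrary choices for illustration:

```shell
# Hypothetical helper: read "size path" lines (as produced by `du -sk`)
# and flag any path whose size is far below the mean across all paths.
du_summary() {
  awk '{ size[$2] = $1; total += $1; n++ }
       END {
         mean = total / n
         for (p in size)
           printf "%s\t%d\t%s\n", p, size[p], (size[p] < mean / 10 ? "UNDERUSED" : "ok")
       }'
}

# Sample input mirroring the sizes above (in KB); on a live node you would
# instead run:  du -sk /hadoopdata*/elasticsearch | du_summary
printf '2100 /hadoopdata1/elasticsearch\n56623104 /hadoopdata2/elasticsearch\n76546048 /hadoopdata3/elasticsearch\n' | du_summary
```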

Any help at all would be much appreciated.

Cheers,
Logan

--


(Logan Hardy) #2

Has anyone actually tried using 3 drives per node?

Logan

On Monday, August 27, 2012 2:42:44 PM UTC-6, Logan Hardy wrote:


(system) #3