ES does not write on all disks


#1

Hi,
I have ES 2.2.0 installed on a one-node cluster. The node has 12 CPUs, 96 GB RAM and two 500 GB disk partitions. I loaded up to 2 billion documents into an index. I wanted to extend the storage, so I added four more 500 GB disks.
After restarting the system I realised that ES is not writing data to the new disks, although it did create reference files for the index there.
How can I make ES write to these new disks? I'm running out of space on the first disks.


(Thomas Decaux) #2

Why a single node? With 96 GB RAM you should run at least 4 data nodes.

Did you specify the disk paths in the elasticsearch.yml config?
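For reference, multiple data paths are declared with the `path.data` setting in elasticsearch.yml. A minimal sketch (the mount points are placeholders, not the poster's actual paths):

```yaml
# elasticsearch.yml — ES stripes shards across all listed paths,
# but each individual shard stays on the one path it was assigned to.
path.data:
  - /data/disk1
  - /data/disk2
  - /data/disk3
```

A comma-separated string (`path.data: /data/disk1,/data/disk2`) works as well.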


#3

Hi Thomas,
My goal is to build a cluster with 96 GB of RAM on each node. I'm just testing the first node before adding others. I've added all the new paths to the YAML file; that is why ES created the index references on these new disks.


(Jimferenczi) #4

All of the files belonging to a single shard are written to the same data path. This means that once your index is created on that node and each shard is assigned, the same path is used for all operations on that shard. If you want ES to use the other data paths you'll need to create a new index. What is your use case? Did you consider using time-based indices (one per day/week/month)?
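To illustrate the idea: with time-based indices, each day gets a fresh index, so its new shards can be assigned to the newly added data paths, and expired days are dropped as whole indices. A minimal Python sketch of the naming and retention logic (index prefix and retention window are made-up examples, not anything from this thread):

```python
from datetime import date, timedelta

def daily_index(prefix, day):
    """Build a daily index name like 'transactions-2016.03.01'."""
    return "{0}-{1:%Y.%m.%d}".format(prefix, day)

def indices_to_keep(prefix, today, retention_days):
    """List the daily indices still inside the retention window.

    Creating a new index per day means new shards, which ES can
    assign across all configured data paths; indices older than
    the window are deleted wholesale instead of doc-by-doc.
    """
    return [daily_index(prefix, today - timedelta(days=n))
            for n in range(retention_days)]

print(indices_to_keep("transactions", date(2016, 3, 1), 3))
# → ['transactions-2016.03.01', 'transactions-2016.02.29', 'transactions-2016.02.28']
```

Deleting a whole index is far cheaper than deleting individual documents, which is the main operational win of this layout.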


#5

Hi Jim,
My main use case is to give our front office fast access in order to resolve customer disputes. We produce around 1 billion transactions per day and need to keep up to 183 days of data online.
Do you have a tutorial or an example of creating time-based indices?
Thank you


(Jimferenczi) #6

Sure, you can start here: https://www.elastic.co/guide/en/elasticsearch/guide/current/time-based.html
You should also consider Logstash (https://www.elastic.co/products/logstash), which has all the logic to handle time-based indices natively.
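As a rough sketch of what that looks like in a Logstash pipeline (host and index prefix are placeholders): the `%{+YYYY.MM.dd}` pattern in the elasticsearch output makes Logstash route each event to a per-day index automatically.

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # %{+YYYY.MM.dd} expands to the event's @timestamp date,
    # producing one index per day, e.g. transactions-2016.03.01
    index => "transactions-%{+YYYY.MM.dd}"
  }
}
```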

