Convert a standalone Elasticsearch node to a cluster master and add more nodes

I have a fully working and tuned Elasticsearch host, say Host-A, which also runs Logstash on the same machine. It is a standalone ES host: the data files are ingested into Elasticsearch via Logstash, and Kibana is the front end. The whole ELK stack runs on a single machine.

Host-A has 32 cores, 512GB RAM and a 3TB SSD. I optimized the JVM, input throttles, etc., and Logstash is currently ingesting about 1 billion records at a rate of 25 million documents per hour (an indexing rate of roughly 6,500/s on average).

However, I noticed that although I configured 32 worker threads for the Logstash instance, the data is not ingested any faster.
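
For reference, this is roughly how the workers are set in logstash.yml (Logstash 5.x; the batch size value is just illustrative):

# logstash.yml
pipeline.workers: 32       # filter/output worker threads
pipeline.batch.size: 125   # events per worker batch (the 5.x default)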

I have two more hosts with the same configuration that I can add to a cluster, but I could not find configuration guidance for my scenario.

Can I take the existing standalone Host-A and convert it to a master+data node? Host-A has already indexed 500 million records over the last two days.

So basically:

Host-A : ClusterA: NodeA - Master:True, Data:True
Host-B : ClusterA: NodeB - Master:False, Data:True
Host-C : ClusterA: NodeC - Master:False, Data:True
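
In elasticsearch.yml (5.x) terms, I assume that would look roughly like this, with the host list as a placeholder for the real addresses:

# Host-A elasticsearch.yml
cluster.name: ClusterA
node.name: NodeA
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["host-a", "host-b", "host-c"]

# Host-B / Host-C: same cluster.name and unicast hosts, but
node.name: NodeB           # NodeC on Host-C
node.master: false
node.data: true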

Is the above a valid configuration? And what about Logstash? There's only one instance of Logstash running on the master, which is also where it is ingesting the log files from.

Will the data be distributed across NodeB and NodeC? How does that work? Do I have to allocate shards myself, or is that taken care of when I add the other hosts to the cluster?

Does the JVM need to be optimized the same way on the secondary nodes? How is the processing distributed in this case?

Valid but not recommended, because you risk a split brain - see Important Configuration Changes | Elasticsearch: The Definitive Guide [2.x] | Elastic.
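
Roughly, the guidance on that page for a three-node cluster is to make all three nodes master-eligible and require a quorum, so an isolated node can never elect itself master; on 5.x that is something like:

# elasticsearch.yml on all three nodes
node.master: true
discovery.zen.minimum_master_nodes: 2   # (master-eligible nodes / 2) + 1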

Elasticsearch handles that for you.

What do you mean? We do not recommend changing JVM settings/flags, and you won't find anyone here who will help you do that.

If you have more than 1 shard per index, then the processing will be spread across the N nodes that hold those shards.
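
For example (index name and counts purely illustrative), an index created with three primaries and one replica will have its shards spread across the three data nodes, and the cat shards API shows where each shard landed:

PUT logs-2017.09.01
{
  "settings": {
    "index.number_of_shards": 3,
    "index.number_of_replicas": 1
  }
}

GET _cat/shards/logs-2017.09.01?v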

Thank you @warkolm.

I have now ended up in a situation where Host-A ran out of disk and Elasticsearch refuses to start up.
There are 3 disks (mounted at /hdd, /sde and /sdf), each 1TB. Only the first one is used, although I have configured path.data as:

path.data:
  - /hdd/lib/elasticsearch/
  - /sde/lib/elasticsearch/
  - /sdf/lib/elasticsearch/

Is there any other config I have to apply for it to work?

Elasticsearch won't automatically rebalance when you add new path.data entries; you will need to move the shards off the node and then back for that to happen.
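
One way to drain shards off a node and then let them come back (assuming the other nodes have room to hold them in the meantime) is cluster-level allocation filtering:

# Move shards away from NodeA
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._name": "NodeA"
  }
}

# Clear the filter so shards can rebalance back
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._name": null
  }
}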

Thank you @warkolm for the suggestion.

Do you mean something like this? : https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html
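
For reference, I assume that would be a "move" command along these lines (index and node names are just examples):

POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "logs-2017.09.01",
        "shard": 0,
        "from_node": "NodeA",
        "to_node": "NodeB"
      }
    }
  ]
}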

I actually am unable to start Elasticsearch because of this. How do I start it before moving the shards?

It should start, what does the log show?

Here's a snippet.

Caused by: java.io.IOException: No space left on device
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60) ~[?:?]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:?]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:?]
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) ~[?:?]
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) ~[?:1.8.0_131]
at java.nio.channels.Channels.writeFully(Channels.java:101) ~[?:1.8.0_131]
at java.nio.channels.Channels.access$000(Channels.java:61) ~[?:1.8.0_131]
at java.nio.channels.Channels$1.write(Channels.java:174) ~[?:1.8.0_131]
at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73) ~[?:1.8.0_131]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_131]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_131]
at org.apache.lucene.store.OutputStreamIndexOutput.getChecksum(OutputStreamIndexOutput.java:80) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.apache.lucene.codecs.CodecUtil.writeCRC(CodecUtil.java:548) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.apache.lucene.codecs.CodecUtil.writeFooter(CodecUtil.java:393) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:140) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:419) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:263) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.node.Node.<init>(Node.java:244) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:232) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:232) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.5.1.jar:5.5.1]
... 6 more
Suppressed: java.io.IOException: No space left on device
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60) ~[?:?]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:?]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:?]
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) ~[?:?]
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) ~[?:1.8.0_131]
at java.nio.channels.Channels.writeFully(Channels.java:101) ~[?:1.8.0_131]
at java.nio.channels.Channels.access$000(Channels.java:61) ~[?:1.8.0_131]
at java.nio.channels.Channels$1.write(Channels.java:174) ~[?:1.8.0_131]
at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73) ~[?:1.8.0_131]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_131]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_131]
at org.apache.lucene.store.OutputStreamIndexOutput.close(OutputStreamIndexOutput.java:68) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:141) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:419) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:263) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.node.Node.<init>(Node.java:244) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:232) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:232) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.5.1.jar:5.5.1]

java.lang.IllegalStateException: Failed to create node environment
at org.elasticsearch.node.Node.<init>(Node.java:267) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.node.Node.<init>(Node.java:244) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:232) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:232) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:351) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.cli.Command.main(Command.java:88) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) [elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) [elasticsearch-5.5.1.jar:5.5.1]
Caused by: java.io.IOException: No space left on device
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:?]
at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:60) ~[?:?]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:?]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:?]
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:211) ~[?:?]
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78) ~[?:1.8.0_131]
at java.nio.channels.Channels.writeFully(Channels.java:101) ~[?:1.8.0_131]
at java.nio.channels.Channels.access$000(Channels.java:61) ~[?:1.8.0_131]
at java.nio.channels.Channels$1.write(Channels.java:174) ~[?:1.8.0_131]
at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73) ~[?:1.8.0_131]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_131]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_131]
at org.apache.lucene.store.OutputStreamIndexOutput.getChecksum(OutputStreamIndexOutput.java:80) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.apache.lucene.codecs.CodecUtil.writeCRC(CodecUtil.java:548) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.apache.lucene.codecs.CodecUtil.writeFooter(CodecUtil.java:393) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
at org.elasticsearch.gateway.MetaDataStateFormat.write(MetaDataStateFormat.java:140) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.env.NodeEnvironment.loadOrCreateNodeMetaData(NodeEnvironment.java:419) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:263) ~[elasticsearch-5.5.1.jar:5.5.1]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-5.5.1.jar:5.5.1]

@warkolm, also, the cluster reroute link talks about moving shards between nodes.

How can I move shards between disks on the same host? I couldn't find an answer searching.

Are the other disks larger than the original one?

No, they are all the same size, 1 TB each.

Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sdb       922846420 875945412         0 100% /hdd
/dev/sde       922846420    213628 875731784   1% /sde
/dev/sdf       922846420     80124 875865288   1% /sdf

Damn. If you had a bigger disk you could move the old data directory to it and it'd work ok.

You may be able to move some index directories aside, but that's fraught with danger and data loss potential and I wouldn't recommend it.

Oh :frowning:

So what's the recommended setup? If I had configured all three disks before indexing, would it have worked just fine?
And is it that Elasticsearch doesn't like new disks being added after indexing?

@warkolm, assuming I fix this disk space issue (maybe I'll try cleaning up something else on that disk), how can I move shards to the other disks? Is there an API for that?

I will bring up the other three nodes in a cluster. Is it required to have identical disk space across all the data nodes?

Btw, thanks for all your pointers. I am new to Elasticsearch and learning a lot in the process.

There is nothing for that.

Ideally, yes. But Elasticsearch can deal with it if you don't.
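
Presumably that happens through disk-based shard allocation: new shards stop being allocated to a node once its disk crosses the low watermark, and shards are relocated away once it crosses the high watermark. The 5.x defaults are shown below and can be tuned if they don't fit:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}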
