Red status after putting data

Hi guys,

I started Elasticsearch 2.1 successfully, with path.data set to a folder that is mounted from the Hadoop cluster over NFS.

When I execute 'curl -XGET 'http://server-a1:9200/_cluster/health?level=indices&pretty=true'', it shows something like

{
"cluster_name" : "Elasticsearch-YARN-AA",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0,
"indices" : { }
}

But when I execute 'curl -XPUT 'http://server-a1:9200/test1/user/stana' -d '{"name":"stana"}'', it shows this error message:

[2015-12-11 14:42:58,469][INFO ][http ] [esnode-server-a1] publish_address {192.168.1.221:9200}, bound_addresses {192.168.1.221:9200}, {[fe80::ec4:7aff:fe42:3c31]:9200}
[2015-12-11 14:42:58,470][INFO ][node ] [esnode-server-a1] started
[2015-12-11 14:42:58,922][INFO ][gateway ] [esnode-server-a1] recovered [0] indices into cluster_state
[2015-12-11 14:43:37,410][INFO ][cluster.metadata ] [esnode-server-a1] [test1] creating index, cause [auto(index api)], templates [], shards [5]/[1], mappings [user]
[2015-12-11 14:43:39,692][WARN ][indices.cluster ] [esnode-server-a1] [[test1][1]] marking and sending shard failed due to [failed recovery]
[test1][[test1][1]] IndexShardRecoveryException[failed recovery]; nested: TranslogException[failed to create new translog file]; nested: IOException[不適用的引數];
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:179)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: [test1][[test1][1]] TranslogException[failed to create new translog file]; nested: IOException[不適用的引數];
at org.elasticsearch.index.translog.Translog.createWriter(Translog.java:453)
at org.elasticsearch.index.translog.Translog.<init>(Translog.java:180)
at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:209)
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:152)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
... 3 more
Caused by: java.io.IOException: 不適用的引數
at sun.nio.ch.FileDispatcherImpl.force0(Native Method)
at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:76)
at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:376)
at org.elasticsearch.index.translog.Checkpoint.write(Checkpoint.java:90)
at org.elasticsearch.index.translog.TranslogWriter.writeCheckpoint(TranslogWriter.java:289)
at org.elasticsearch.index.translog.TranslogWriter.create(TranslogWriter.java:80)
at org.elasticsearch.index.translog.Translog.createWriter(Translog.java:451)
... 14 more

and the cluster status became red:

{
"cluster_name" : "Elasticsearch-YARN-AA",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 4,
"number_of_data_nodes" : 4,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 10,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 0.0,
"indices" : {
"test1" : {
"status" : "red",
"number_of_shards" : 5,
"number_of_replicas" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 10
}
}
}
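The bottom of the stack trace points at sun.nio.ch.FileDispatcherImpl.force, which is FileChannel.force() — the fsync call Elasticsearch's Checkpoint.write issues on the translog checkpoint file. Here is a minimal sketch I used to check whether a given mount accepts that call; FsyncCheck and its file names are my own illustration, not Elasticsearch code. Pass the NFS-mounted path.data directory as the first argument.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class FsyncCheck {
    // Returns true if FileChannel.force() (fsync) succeeds for a file in dir.
    // On some NFS configurations force() fails with IOException
    // "Invalid argument", which matches the error in the log above.
    public static boolean fsyncWorks(Path dir) {
        Path file = dir.resolve("fsync-check.tmp");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(new byte[]{1, 2, 3}));
            ch.force(true); // the call that throws on the bad mount
            return true;
        } catch (IOException e) {
            System.err.println("fsync failed: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // e.g. java FsyncCheck /mnt/hadoop/es-data
        Path dir = args.length > 0 ? Paths.get(args[0])
                                   : Files.createTempDirectory("fsync-check");
        System.out.println(dir + " -> " + (fsyncWorks(dir) ? "fsync OK" : "fsync FAILED"));
    }
}
```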

What can I do to resolve it?

Could you translate 不適用的引數 for us?

In the first cluster status output you have one data node but in the next one you suddenly have four nodes. Is that expected?

Don't mount path.data over NFS!
Don't share indices in Hadoop cluster!
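One reason, sketched under my own assumptions (NodeLockSketch is illustrative, not Elasticsearch's actual code): each node takes an exclusive OS file lock on its data directory at startup, and both exclusive locking and fsync are unreliable on many NFS setups, so two nodes sharing one path.data can end up writing to the same shard files.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class NodeLockSketch {
    // Mimics the node-level lock a data node takes on its data directory.
    // A second process must be refused the lock; over NFS this guarantee
    // often does not hold, which is one way shared storage corrupts data.
    public static FileLock lockDataDir(Path dataDir) throws IOException {
        Path lockFile = dataDir.resolve("node.lock");
        FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lock = ch.tryLock(); // null if another process holds it
        if (lock == null) {
            ch.close();
            throw new IOException("data dir already locked: " + dataDir);
        }
        return lock;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("esdata");
        try (FileLock lock = lockDataDir(dir)) {
            System.out.println("locked " + dir);
        }
    }
}
```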


Hi magnusbaeck

Thanks for your reply.
'不適用的引數' is something like 'Invalid argument'.

I started Elasticsearch 2.1 twice with Elasticsearch-Hadoop. First I started Elasticsearch with one container, and a few minutes later I started another instance with three containers, for a total of four nodes.

Elasticsearch 1.7 works well in the same situation (path.data set to the folder mounted from Hadoop over NFS); I could put data and get it back.

What can I do to solve the error on 2.1?

Hi dadoonet

Thanks for your reply.

For some reason, I have to mount path.data over NFS and share indices in Hadoop cluster.

I am a beginner and I would like to know why path.data should not be mounted over NFS. Could you explain? Thank you.