3 x master nodes: 8 GB RAM, 2 vCPUs each
3 x data nodes: 30 GB RAM, 8 vCPUs each

I am recovering a cluster and I am getting this error:
[2015-11-18 02:45:20,374][WARN ][action.bulk] [Bruiser] failed to perform indices:data/write/bulk[s] on remote replica [Douglas Birely][qTqgsv_STVG3je5Fn7tEeg][zupme-1b-elasticsearch003.aws.zup.com.br][inet[/***]][events-vivo-2-20151118][1]
org.elasticsearch.transport.RemoteTransportException: [Douglas Birely][inet[/*****]][indices:data/write/bulk[s][r]]
Caused by: org.elasticsearch.index.engine.CreateFailedEngineException: [events-vivo-2-20151118][1] Create failed for [events#AVEY6UeqCpZT8FxkSXyC]
    at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:264)
    at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:483)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnReplica(TransportShardBulkAction.java:569)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicaOperationTransportHandler.messageReceived(TransportShardReplicationOperationAction.java:250)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicaOperationTransportHandler.messageReceived(TransportShardReplicationOperationAction.java:229)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.doRun(MessageChannelHandler.java:279)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /data/elasticsearch/zupme/nodes/0/indices/events-vivo-2-20151118/1/index/_gb.fdt (Too many open files)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    .....
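For reference, this is a quick way to see how close the process is to its limit while the error is happening (a sketch assuming Linux; the pgrep pattern assumes the stock 1.x bootstrap class, so adjust it for your startup command):

```
# Find the Elasticsearch PID (pattern assumes the standard 1.x main class)
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n1)

# File descriptors currently open by the process
ls /proc/"$ES_PID"/fd | wc -l

# Effective limit of the *running* process (can differ from `ulimit -n` in a shell)
grep 'Max open files' /proc/"$ES_PID"/limits
```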
File descriptor limits are already set to high values: the data nodes report max_file_descriptors of 131072 and the dedicated master nodes 65536.
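The output below comes from the nodes info API with the process metric, e.g.:

```
curl -s 'http://localhost:9200/_nodes/process?pretty'
```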
{ "cluster_name" : "zupme", "nodes" : { "BnoAILz0Q3KQjSWtE2KNKw" : { "name" : "Sebastian Shaw", "transport_address" : "inet[*****]", "host" : "*****", "ip" : "****", "version" : "1.7.2", "build" : "e43676b", "http_address" : "inet[****]", "attributes" : { "master" : "false" }, "process" : { "refresh_interval_in_millis" : 1000, "id" : 18701, "max_file_descriptors" : 131072, "mlockall" : true } }, "o__dvgL7QfyIM-jRwlJlHg" : { "name" : "Milan", "transport_address" : "inet[****]", "host" : "*****", "ip" : "*****", "version" : "1.7.2", "build" : "e43676b", "http_address" : "inet[******]", "attributes" : { "data" : "false", "master" : "true" }, "process" : { "refresh_interval_in_millis" : 1000, "id" : 16397, "max_file_descriptors" : 65536, "mlockall" : true } }, "C11qTS23R5aX2t6TTSCGSA" : { "name" : "Seeker", "transport_address" : "inet[*****]", "host" : "*****", "ip" : "*****", "version" : "1.7.2", "build" : "e43676b", "http_address" : "inet[****]", "attributes" : { "master" : "false" }, "process" : { "refresh_interval_in_millis" : 1000, "id" : 7885, "max_file_descriptors" : 131072, "mlockall" : true } }, "o_5zCidLQpiXsjJGsBPbXw" : { "name" : "Phantom Eagle", "transport_address" : "inet[*****]", "host" : "*****", "ip" : "*****", "version" : "1.7.2", "build" : "e43676b", "http_address" : "inet[****]", "attributes" : { "data" : "false", "master" : "true" }, "process" : { "refresh_interval_in_millis" : 1000, "id" : 16777, "max_file_descriptors" : 65536, "mlockall" : true } }, "QbbbxsqmTlWpSRtWzdIhgg" : { "name" : "Lilith, the Daughter of Dracula ", "transport_address" : "inet[****]", "host" : "****", "ip" : "****", "version" : "1.7.2", "build" : "e43676b", "http_address" : "inet[****]", "attributes" : { "master" : "false" }, "process" : { "refresh_interval_in_millis" : 1000, "id" : 25335, "max_file_descriptors" : 131072, "mlockall" : true } } } }
If I have many indices, but these indices are not being used right now, neither for search nor for indexing, do they still keep file descriptors open?
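One way I thought I could check that empirically is to group the open descriptors by index directory (a sketch assuming Linux with lsof installed; the awk field assumes lsof's default output where the path is column 9, so adjust if yours differs):

```
# Count open file descriptors per index by grouping on the
# .../indices/<index-name>/... path component
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n1)
lsof -p "$ES_PID" 2>/dev/null \
  | awk '$9 ~ /\/indices\// {
           p = $9
           sub(/.*\/indices\//, "", p)   # strip everything up to the index name
           sub(/\/.*/, "", p)            # strip everything after it
           count[p]++
         }
         END { for (i in count) print count[i], i }' \
  | sort -rn
```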