Error about too many open files when allocating shard

I opened a previously closed index with 5 shards. All of the shards allocated fine except one. When I check the logs, I see an error about too many open files.

[WARN ][o.e.c.a.s.ShardStateAction] [bos1-es2] [px-web-server-2018.03.06][4] received shard failed for shard id [[px-web-server-2018.03.06][4]], allocation id [tVvJZJD4SGO2YVMiaA3K5g], primary term [0], message [failed recovery], failure [RecoveryFailedException[[px-web-server-2018.03.06][4]: Recovery failed on {bos1-es2}{8SjCHI-OQn-mBAl0OBLx6Q}{G0zKtNGHQjuWrhWcapfsHQ}{}{}{ml.machine_memory=25147817984, ml.max_open_jobs=20, ml.enabled=true}]; nested: IndexShardRecoveryException[failed to recover from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: FileSystemException[/es_data1/nodes/0/indices/851HGssQQtGcuZRzUhkDpg/4/translog/translog-2593.ckp: Too many open files]; ]

I checked _nodes/stats/process and it reports "max_file_descriptors" as 65536, while "open_file_descriptors" is only 8271. What am I missing here?
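For anyone wanting to reproduce this check, you can compare each node's open descriptor count against its limit from the `_nodes/stats/process` response. A minimal sketch, assuming a response shaped like the 6.x API output (the node name, ID, and values below are taken from this thread; in practice you would fetch the JSON from `GET _nodes/stats/process`):

```python
import json

# Sample response in the shape of GET _nodes/stats/process
# (node ID and values taken from the log message above).
sample = json.loads("""
{
  "nodes": {
    "8SjCHI-OQn-mBAl0OBLx6Q": {
      "name": "bos1-es2",
      "process": {
        "max_file_descriptors": 65536,
        "open_file_descriptors": 8271
      }
    }
  }
}
""")

# Report descriptor usage per node.
for node_id, node in sample["nodes"].items():
    proc = node["process"]
    used = proc["open_file_descriptors"] / proc["max_file_descriptors"]
    print(f"{node['name']}: {proc['open_file_descriptors']}"
          f"/{proc['max_file_descriptors']} open ({used:.1%})")
```

As in the question, usage here is nowhere near the limit, which is what makes the "Too many open files" error surprising; note the stats are a point-in-time snapshot, so a short-lived spike in open files would not show up.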

What version of Elasticsearch is this (please: always include this information)?

Oops, this is 6.2.2

I think you've been bitten by an endless-flush bug that will be fixed in 6.2.4.

Ugh, any idea when that will be released?

I am really sorry but we do not provide release dates.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.