I've been researching this but was hoping to understand more of the underlying reasons. The official documentation recommends increasing the open file descriptor limit to 32k or 64k. Our cluster currently has 12,800 shards across 5 data nodes, which works out to approximately 40k open file descriptors per node. The data nodes run CentOS with 64 GB of memory.
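For reference, this is roughly how I've been measuring descriptor limits and usage on each node (a quick Python sketch; the /proc paths are Linux-specific, and the PID is just a placeholder for the Elasticsearch process):

```python
import os
import resource

# Per-process descriptor limits for the current process (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE: soft={soft}, hard={hard}")

# System-wide usage: /proc/sys/fs/file-nr reports
# <allocated> <unused> <system max>.
with open("/proc/sys/fs/file-nr") as f:
    allocated, unused, system_max = f.read().split()
print(f"system-wide: {allocated} allocated of {system_max} max")

def open_fd_count(pid):
    """Count descriptors currently open by a process (requires permission
    to read /proc/<pid>/fd, i.e. same user or root)."""
    return len(os.listdir(f"/proc/{pid}/fd"))

# Demo with this process; substitute the actual Elasticsearch PID.
es_pid = os.getpid()
print(f"PID {es_pid} has {open_fd_count(es_pid)} open descriptors")
```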
Are there reasons we wouldn't want to have more than 64k file descriptors open on a node? Is the concern the memory needed for each descriptor, or are there other resource issues?
Is the 32k or 64k limit still applicable for nodes with more memory?
Thanks,
Philip