Different process model for ES node

We have been using ES for a while, and it works great. But as data
accumulates, we are running into "too many open files" again. I understand
this is really something inherited from Lucene, and one existing option is
the "compound" file format (one file per Lucene segment instead of many).
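
For reference, this is roughly how I have been watching how close we are to the limit. It is just a sketch: it assumes a node reachable on localhost:9200 without auth, that the nodes stats API is available at /_nodes/stats/process, and that the process section exposes open_file_descriptors / max_file_descriptors (field layout may differ between versions).

import json
import urllib.request

def print_fd_usage(base_url="http://localhost:9200"):
    # Pull per-node process stats and print open vs. max file descriptors.
    # Field names are what I see on recent versions; older ones may differ.
    with urllib.request.urlopen(base_url + "/_nodes/stats/process") as resp:
        stats = json.load(resp)
    for node_id, node in stats.get("nodes", {}).items():
        proc = node.get("process", {})
        print(node.get("name", node_id),
              proc.get("open_file_descriptors"), "/",
              proc.get("max_file_descriptors"))

if __name__ == "__main__":
    print_fd_usage()
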
Right now, each ES node runs as a single process. If a node could instead
fork multiple processes, with each process holding only a subset of that
node's shards, the open-file count per process would be bounded, which
should address "too many open files" in a scalable way.
I'm willing to get my hands dirty on the implementation, but I would like
some feedback on this idea first.
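
To make the idea concrete, here is a toy sketch of the partitioning I have in mind. This is not ES code; every name in it (WORKER_COUNT, owns_shard, worker_main) is made up for illustration. The point is just that a parent process forks N workers and each worker only ever opens the shards assigned to it, so its file-descriptor usage is roughly 1/N of what a single-process node sees today.

import multiprocessing

WORKER_COUNT = 4  # hypothetical number of child processes per ES node

def owns_shard(worker_index, shard_id):
    # Simple static partitioning: worker i owns shards where id % N == i.
    return shard_id % WORKER_COUNT == worker_index

def worker_main(worker_index, all_shard_ids):
    # A real implementation would open Lucene indices only for these shards,
    # keeping this process's open-file count proportional to its own subset.
    my_shards = [s for s in all_shard_ids if owns_shard(worker_index, s)]
    print("worker %d owns shards %s" % (worker_index, my_shards))

if __name__ == "__main__":
    shard_ids = list(range(20))  # pretend this node hosts 20 shards
    workers = [
        multiprocessing.Process(target=worker_main, args=(i, shard_ids))
        for i in range(WORKER_COUNT)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
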

