When I run Elasticsearch alongside around 10 Logstash config files, I get this error:

> Failed to accept a connection.
> java.io.IOException: Too many open files

and Elasticsearch stops running after that.
I have already increased the ulimit to 155500, but I am still getting the same error.
How can I resolve this?
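For reference, this is how the limit can be checked and raised persistently on Linux (a sketch; the `elasticsearch` user name is an assumption — use whatever user actually runs the Elasticsearch process):

```shell
# Print the current open-file (file descriptor) limit for this shell/user
ulimit -n

# To raise it persistently, add lines like these to /etc/security/limits.conf
# (assumption: Elasticsearch runs as the "elasticsearch" user):
#
#   elasticsearch  soft  nofile  155500
#   elasticsearch  hard  nofile  155500
#
# then start a fresh login session (or restart the service) so they take effect.
```

Note that raising the limit only in your own shell does not help if the Elasticsearch service is started by init/systemd under a different user.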
First guess: do you have a lot of shards per node?
I am using a client node for indexing the data, plus 1 master and 2 data nodes. I have around 15 indices, each with 5 shards and 1 replica.
Unless you really need 5 shards per index, I'd reduce that number, ideally to 1.
That said, the number of shards you have is not that big. Did you check that the ulimit has actually been applied (using the nodes info API)?
I have checked the ulimit and it is applied correctly.
(How can I verify it with the nodes info API?)
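The check looks like this (a sketch; run it with curl or the Kibana console, adjusting host and port for your cluster — each node should report the limit it actually sees):

```
GET _nodes/stats/process?filter_path=**.max_file_descriptors
```

If a node reports a lower `max_file_descriptors` than you configured, the limit was not applied to the user or session that started that node's process.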
So when a large number of conf files run simultaneously, the master goes down.
Is each transaction from Logstash to Elasticsearch treated as opening a file?
What if I run all the conf files and send their output to Kafka, and then a single centralized Logstash reads from Kafka and sends the data to Elasticsearch? Would that be a better solution?
When I run all the conf files with their output going to Kafka, and a centralized Logstash reading from Kafka and sending to Elasticsearch, it runs fine.
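For anyone landing on this thread, the working setup is roughly the following (a sketch; the broker address, topic name, and index pattern are placeholders from my own setup, not required values):

```
# Shipper pipelines (one per conf file): write to Kafka instead of Elasticsearch
output {
  kafka {
    bootstrap_servers => "kafka:9092"   # placeholder broker address
    topic_id => "logs"                  # placeholder topic name
  }
}

# Single centralized pipeline: read from Kafka, write to Elasticsearch
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logs"]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"      # placeholder index pattern
  }
}
```

This funnels all the shipper pipelines through one consumer, so Elasticsearch only sees connections from a single Logstash instance instead of ten.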